Mar 18 17:39:42.630048 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 18 17:39:43.287440 master-0 kubenswrapper[4090]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 17:39:43.287440 master-0 kubenswrapper[4090]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 18 17:39:43.287440 master-0 kubenswrapper[4090]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 17:39:43.287440 master-0 kubenswrapper[4090]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 17:39:43.287440 master-0 kubenswrapper[4090]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 18 17:39:43.287440 master-0 kubenswrapper[4090]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
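The warnings above name each deprecated flag right after the word "Flag". A minimal sketch of pulling those flag names out of a saved journal excerpt (the `log_text` sample is illustrative; in practice you would feed in `journalctl -u kubelet` output):

```python
import re

# Illustrative excerpt of the deprecation warnings shown above.
log_text = '''\
Mar 18 17:39:43.287440 master-0 kubenswrapper[4090]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
Mar 18 17:39:43.287440 master-0 kubenswrapper[4090]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead.
'''

# Each warning follows the fixed pattern "Flag <name> has been deprecated".
deprecated = re.findall(r"Flag (--[\w-]+) has been deprecated", log_text)
print(deprecated)
```

This only extracts the names; deciding which config-file field replaces each flag still requires the linked kubelet-config-file documentation.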
Mar 18 17:39:43.289949 master-0 kubenswrapper[4090]: I0318 17:39:43.289783 4090 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 18 17:39:43.296942 master-0 kubenswrapper[4090]: W0318 17:39:43.296888 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 17:39:43.296942 master-0 kubenswrapper[4090]: W0318 17:39:43.296920 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 17:39:43.296942 master-0 kubenswrapper[4090]: W0318 17:39:43.296930 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 17:39:43.296942 master-0 kubenswrapper[4090]: W0318 17:39:43.296939 4090 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 17:39:43.296942 master-0 kubenswrapper[4090]: W0318 17:39:43.296949 4090 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.296958 4090 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.296967 4090 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.296976 4090 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.296985 4090 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.296995 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.297004 4090 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.297012 4090 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.297021 4090 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.297029 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.297038 4090 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.297046 4090 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.297056 4090 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.297069 4090 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.297080 4090 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.297091 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.297100 4090 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.297110 4090 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.297119 4090 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.297128 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 17:39:43.297247 master-0 kubenswrapper[4090]: W0318 17:39:43.297138 4090 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297150 4090 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297161 4090 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297170 4090 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297179 4090 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297188 4090 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297200 4090 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297211 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297222 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297231 4090 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297240 4090 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297250 4090 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297260 4090 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297303 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297313 4090 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297322 4090 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297330 4090 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297339 4090 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297348 4090 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 17:39:43.298253 master-0 kubenswrapper[4090]: W0318 17:39:43.297357 4090 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297365 4090 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297377 4090 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297386 4090 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297394 4090 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297402 4090 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297411 4090 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297419 4090 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297428 4090 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297436 4090 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297444 4090 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297454 4090 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297463 4090 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297473 4090 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297481 4090 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297489 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297498 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297506 4090 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297515 4090 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297523 4090 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297534 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 17:39:43.299245 master-0 kubenswrapper[4090]: W0318 17:39:43.297543 4090 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: W0318 17:39:43.297551 4090 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: W0318 17:39:43.297559 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: W0318 17:39:43.297568 4090 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: W0318 17:39:43.297580 4090 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: W0318 17:39:43.297592 4090 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: W0318 17:39:43.297602 4090 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: W0318 17:39:43.297612 4090 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: I0318 17:39:43.297816 4090 flags.go:64] FLAG: --address="0.0.0.0"
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: I0318 17:39:43.297834 4090 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: I0318 17:39:43.297852 4090 flags.go:64] FLAG: --anonymous-auth="true"
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: I0318 17:39:43.297864 4090 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: I0318 17:39:43.297877 4090 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: I0318 17:39:43.297888 4090 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: I0318 17:39:43.297902 4090 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: I0318 17:39:43.297915 4090 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: I0318 17:39:43.297925 4090 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: I0318 17:39:43.297935 4090 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: I0318 17:39:43.297946 4090 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: I0318 17:39:43.297956 4090 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: I0318 17:39:43.297966 4090 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 18 17:39:43.300478 master-0 kubenswrapper[4090]: I0318 17:39:43.297976 4090 flags.go:64] FLAG: --cgroup-root=""
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.297986 4090 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.297997 4090 flags.go:64] FLAG: --client-ca-file=""
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298008 4090 flags.go:64] FLAG: --cloud-config=""
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298018 4090 flags.go:64] FLAG: --cloud-provider=""
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298027 4090 flags.go:64] FLAG: --cluster-dns="[]"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298039 4090 flags.go:64] FLAG: --cluster-domain=""
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298049 4090 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298059 4090 flags.go:64] FLAG: --config-dir=""
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298069 4090 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298079 4090 flags.go:64] FLAG: --container-log-max-files="5"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298092 4090 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298102 4090 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298112 4090 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298123 4090 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298133 4090 flags.go:64] FLAG: --contention-profiling="false"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298143 4090 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298153 4090 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298163 4090 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298173 4090 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298185 4090 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298195 4090 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298205 4090 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298216 4090 flags.go:64] FLAG: --enable-load-reader="false"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298226 4090 flags.go:64] FLAG: --enable-server="true"
Mar 18 17:39:43.301513 master-0 kubenswrapper[4090]: I0318 17:39:43.298235 4090 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298249 4090 flags.go:64] FLAG: --event-burst="100"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298259 4090 flags.go:64] FLAG: --event-qps="50"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298298 4090 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298311 4090 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298322 4090 flags.go:64] FLAG: --eviction-hard=""
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298334 4090 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298343 4090 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298353 4090 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298364 4090 flags.go:64] FLAG: --eviction-soft=""
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298375 4090 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298385 4090 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298395 4090 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298405 4090 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298415 4090 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298425 4090 flags.go:64] FLAG: --fail-swap-on="true"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298435 4090 flags.go:64] FLAG: --feature-gates=""
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298447 4090 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298457 4090 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298468 4090 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298478 4090 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298488 4090 flags.go:64] FLAG: --healthz-port="10248"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298498 4090 flags.go:64] FLAG: --help="false"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298508 4090 flags.go:64] FLAG: --hostname-override=""
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298517 4090 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298527 4090 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 18 17:39:43.303144 master-0 kubenswrapper[4090]: I0318 17:39:43.298537 4090 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298547 4090 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298556 4090 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298567 4090 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298577 4090 flags.go:64] FLAG: --image-service-endpoint=""
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298586 4090 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298596 4090 flags.go:64] FLAG: --kube-api-burst="100"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298607 4090 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298617 4090 flags.go:64] FLAG: --kube-api-qps="50"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298627 4090 flags.go:64] FLAG: --kube-reserved=""
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298637 4090 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298647 4090 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298657 4090 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298667 4090 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298678 4090 flags.go:64] FLAG: --lock-file=""
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298688 4090 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298698 4090 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298708 4090 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298722 4090 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298732 4090 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298743 4090 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298753 4090 flags.go:64] FLAG: --logging-format="text"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298762 4090 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298773 4090 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298783 4090 flags.go:64] FLAG: --manifest-url=""
Mar 18 17:39:43.304619 master-0 kubenswrapper[4090]: I0318 17:39:43.298793 4090 flags.go:64] FLAG: --manifest-url-header=""
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298806 4090 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298816 4090 flags.go:64] FLAG: --max-open-files="1000000"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298827 4090 flags.go:64] FLAG: --max-pods="110"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298837 4090 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298847 4090 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298857 4090 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298866 4090 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298877 4090 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298886 4090 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298902 4090 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298923 4090 flags.go:64] FLAG: --node-status-max-images="50"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298933 4090 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298943 4090 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298953 4090 flags.go:64] FLAG: --pod-cidr=""
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298963 4090 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298979 4090 flags.go:64] FLAG: --pod-manifest-path=""
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298989 4090 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.298999 4090 flags.go:64] FLAG: --pods-per-core="0"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.299009 4090 flags.go:64] FLAG: --port="10250"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.299020 4090 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.299029 4090 flags.go:64] FLAG: --provider-id=""
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.299039 4090 flags.go:64] FLAG: --qos-reserved=""
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.299049 4090 flags.go:64] FLAG: --read-only-port="10255"
Mar 18 17:39:43.306213 master-0 kubenswrapper[4090]: I0318 17:39:43.299059 4090 flags.go:64] FLAG: --register-node="true"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299070 4090 flags.go:64] FLAG: --register-schedulable="true"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299081 4090 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299097 4090 flags.go:64] FLAG: --registry-burst="10"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299107 4090 flags.go:64] FLAG: --registry-qps="5"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299116 4090 flags.go:64] FLAG: --reserved-cpus=""
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299127 4090 flags.go:64] FLAG: --reserved-memory=""
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299138 4090 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299148 4090 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299158 4090 flags.go:64] FLAG: --rotate-certificates="false"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299168 4090 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299178 4090 flags.go:64] FLAG: --runonce="false"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299188 4090 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299198 4090 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299208 4090 flags.go:64] FLAG: --seccomp-default="false"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299217 4090 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299227 4090 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299238 4090 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299248 4090 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299261 4090 flags.go:64] FLAG: --storage-driver-password="root"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299296 4090 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299306 4090 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299317 4090 flags.go:64] FLAG: --storage-driver-user="root"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299327 4090 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299338 4090 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 18 17:39:43.307560 master-0 kubenswrapper[4090]: I0318 17:39:43.299348 4090 flags.go:64] FLAG: --system-cgroups=""
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: I0318 17:39:43.299358 4090 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: I0318 17:39:43.299374 4090 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: I0318 17:39:43.299383 4090 flags.go:64] FLAG: --tls-cert-file=""
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: I0318 17:39:43.299393 4090 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: I0318 17:39:43.299405 4090 flags.go:64] FLAG: --tls-min-version=""
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: I0318 17:39:43.299415 4090 flags.go:64] FLAG: --tls-private-key-file=""
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: I0318 17:39:43.299425 4090 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: I0318 17:39:43.299434 4090 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: I0318 17:39:43.299445 4090 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: I0318 17:39:43.299456 4090 flags.go:64] FLAG: --v="2"
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: I0318 17:39:43.299468 4090 flags.go:64] FLAG: --version="false"
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: I0318 17:39:43.299483 4090 flags.go:64] FLAG: --vmodule=""
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: I0318 17:39:43.299495 4090 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: I0318 17:39:43.299506 4090 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: W0318 17:39:43.299731 4090 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: W0318 17:39:43.299742 4090 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: W0318 17:39:43.299751 4090 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: W0318 17:39:43.299761 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: W0318 17:39:43.299773 4090 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: W0318 17:39:43.299785 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: W0318 17:39:43.299795 4090 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 17:39:43.308779 master-0 kubenswrapper[4090]: W0318 17:39:43.299803 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299813 4090 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299822 4090 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299830 4090 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299842 4090 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299851 4090 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299861 4090 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299870 4090 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299878 4090 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299887 4090 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299896 4090 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 
17:39:43.299904 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299913 4090 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299922 4090 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299930 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299939 4090 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299947 4090 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299956 4090 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299965 4090 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299974 4090 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 17:39:43.310090 master-0 kubenswrapper[4090]: W0318 17:39:43.299985 4090 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.299996 4090 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300007 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300016 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300025 4090 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300035 4090 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300054 4090 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300067 4090 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300076 4090 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300085 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300095 4090 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300105 4090 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300114 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300124 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 
17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300133 4090 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300143 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300158 4090 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300168 4090 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300177 4090 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 17:39:43.311416 master-0 kubenswrapper[4090]: W0318 17:39:43.300186 4090 feature_gate.go:330] unrecognized feature gate: Example Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300195 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300204 4090 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300212 4090 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300222 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300230 4090 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300239 4090 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300248 4090 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 
17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300257 4090 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300266 4090 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300301 4090 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300310 4090 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300319 4090 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300328 4090 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300337 4090 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300346 4090 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300354 4090 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300363 4090 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300371 4090 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300381 4090 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 17:39:43.312321 master-0 kubenswrapper[4090]: W0318 17:39:43.300390 4090 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 17:39:43.313320 master-0 kubenswrapper[4090]: W0318 
17:39:43.300399 4090 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 17:39:43.313320 master-0 kubenswrapper[4090]: W0318 17:39:43.300407 4090 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 17:39:43.313320 master-0 kubenswrapper[4090]: W0318 17:39:43.300417 4090 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 17:39:43.313320 master-0 kubenswrapper[4090]: W0318 17:39:43.300426 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 17:39:43.313320 master-0 kubenswrapper[4090]: W0318 17:39:43.300435 4090 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 17:39:43.313320 master-0 kubenswrapper[4090]: I0318 17:39:43.300460 4090 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 18 17:39:43.313320 master-0 kubenswrapper[4090]: I0318 17:39:43.310810 4090 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 18 17:39:43.313320 master-0 kubenswrapper[4090]: I0318 17:39:43.310850 4090 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 18 17:39:43.313320 master-0 kubenswrapper[4090]: W0318 17:39:43.310984 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 17:39:43.313320 master-0 kubenswrapper[4090]: W0318 17:39:43.310998 4090 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 17:39:43.313320 master-0 kubenswrapper[4090]: W0318 17:39:43.311007 
4090 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 17:39:43.313320 master-0 kubenswrapper[4090]: W0318 17:39:43.311016 4090 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 17:39:43.313320 master-0 kubenswrapper[4090]: W0318 17:39:43.311024 4090 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 17:39:43.313320 master-0 kubenswrapper[4090]: W0318 17:39:43.311033 4090 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 17:39:43.313320 master-0 kubenswrapper[4090]: W0318 17:39:43.311043 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311053 4090 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311061 4090 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311069 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311078 4090 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311086 4090 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311094 4090 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311102 4090 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311109 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311117 4090 feature_gate.go:330] unrecognized feature gate: 
NetworkSegmentation Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311125 4090 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311133 4090 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311141 4090 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311148 4090 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311157 4090 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311165 4090 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311173 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311180 4090 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311188 4090 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311198 4090 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 17:39:43.314229 master-0 kubenswrapper[4090]: W0318 17:39:43.311208 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311217 4090 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311225 4090 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311233 4090 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311243 4090 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311251 4090 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311259 4090 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311267 4090 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311303 4090 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311312 4090 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311320 4090 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311329 4090 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311337 4090 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311345 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311352 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311360 4090 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311368 4090 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311375 4090 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311384 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311392 4090 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 17:39:43.315339 master-0 kubenswrapper[4090]: W0318 17:39:43.311400 4090 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311408 4090 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 
17:39:43.311416 4090 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311426 4090 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311438 4090 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311466 4090 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311475 4090 feature_gate.go:330] unrecognized feature gate: Example Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311483 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311491 4090 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311499 4090 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311507 4090 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311514 4090 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311523 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311530 4090 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311538 4090 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: 
W0318 17:39:43.311546 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311554 4090 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311562 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311570 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 17:39:43.316864 master-0 kubenswrapper[4090]: W0318 17:39:43.311577 4090 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 17:39:43.317925 master-0 kubenswrapper[4090]: W0318 17:39:43.311586 4090 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 17:39:43.317925 master-0 kubenswrapper[4090]: W0318 17:39:43.311593 4090 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 17:39:43.317925 master-0 kubenswrapper[4090]: W0318 17:39:43.311602 4090 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 17:39:43.317925 master-0 kubenswrapper[4090]: W0318 17:39:43.311612 4090 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 18 17:39:43.317925 master-0 kubenswrapper[4090]: W0318 17:39:43.311623 4090 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 17:39:43.317925 master-0 kubenswrapper[4090]: W0318 17:39:43.311632 4090 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 17:39:43.317925 master-0 kubenswrapper[4090]: I0318 17:39:43.311645 4090 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 18 17:39:43.317925 master-0 kubenswrapper[4090]: W0318 17:39:43.311860 4090 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 17:39:43.317925 master-0 kubenswrapper[4090]: W0318 17:39:43.311872 4090 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 17:39:43.317925 master-0 kubenswrapper[4090]: W0318 17:39:43.311881 4090 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 17:39:43.317925 master-0 kubenswrapper[4090]: W0318 17:39:43.311889 4090 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 17:39:43.317925 master-0 kubenswrapper[4090]: W0318 17:39:43.311897 4090 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 17:39:43.317925 master-0 kubenswrapper[4090]: W0318 17:39:43.311906 4090 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 17:39:43.317925 master-0 kubenswrapper[4090]: W0318 17:39:43.311914 4090 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 
18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.311924 4090 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.311936 4090 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.311946 4090 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.311956 4090 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.311965 4090 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.311973 4090 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.311981 4090 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.311988 4090 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.311999 4090 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.312009 4090 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.312017 4090 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.312026 4090 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.312034 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.312043 4090 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.312051 4090 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.312060 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.312068 4090 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.312075 4090 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 17:39:43.318697 master-0 kubenswrapper[4090]: W0318 17:39:43.312083 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312092 4090 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312101 4090 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312109 4090 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 
17:39:43.312117 4090 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312125 4090 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312133 4090 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312140 4090 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312152 4090 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312162 4090 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312171 4090 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312180 4090 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312189 4090 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312197 4090 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312206 4090 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312214 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312223 4090 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 
17:39:43.312232 4090 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312241 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312249 4090 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 17:39:43.319668 master-0 kubenswrapper[4090]: W0318 17:39:43.312257 4090 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312264 4090 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312295 4090 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312303 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312311 4090 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312319 4090 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312326 4090 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312334 4090 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312343 4090 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312350 4090 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312358 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312366 4090 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312374 4090 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312381 4090 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312389 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312397 4090 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312404 4090 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312412 4090 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312423 4090 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312432 4090 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 17:39:43.320754 master-0 kubenswrapper[4090]: W0318 17:39:43.312440 4090 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 17:39:43.322001 master-0 kubenswrapper[4090]: W0318 17:39:43.312447 4090 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 17:39:43.322001 master-0 kubenswrapper[4090]: W0318 17:39:43.312455 4090 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 17:39:43.322001 master-0 kubenswrapper[4090]: W0318 17:39:43.312463 4090 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 17:39:43.322001 master-0 kubenswrapper[4090]: W0318 17:39:43.312472 4090 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 17:39:43.322001 master-0 kubenswrapper[4090]: W0318 17:39:43.312479 4090 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 17:39:43.322001 master-0 kubenswrapper[4090]: W0318 17:39:43.312487 4090 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 17:39:43.322001 master-0 kubenswrapper[4090]: I0318 17:39:43.312499 4090 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 17:39:43.322001 master-0 kubenswrapper[4090]: I0318 17:39:43.313749 4090 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 18 17:39:43.322001 master-0 kubenswrapper[4090]: I0318 17:39:43.317704 4090 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 18 17:39:43.322001 master-0 kubenswrapper[4090]: I0318 17:39:43.319223 4090 server.go:997] "Starting client certificate rotation"
Mar 18 17:39:43.322001 master-0 kubenswrapper[4090]: I0318 17:39:43.319254 4090 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 18 17:39:43.322001 master-0 kubenswrapper[4090]: I0318 17:39:43.319466 4090 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 17:39:43.347588 master-0 kubenswrapper[4090]: I0318 17:39:43.347516 4090 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 17:39:43.352579 master-0 kubenswrapper[4090]: E0318 17:39:43.352517 4090 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 17:39:43.353731 master-0 kubenswrapper[4090]: I0318 17:39:43.353653 4090 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 17:39:43.372702 master-0 kubenswrapper[4090]: I0318 17:39:43.372639 4090 log.go:25] "Validated CRI v1 runtime API"
Mar 18 17:39:43.379885 master-0 kubenswrapper[4090]: I0318 17:39:43.379815 4090 log.go:25] "Validated CRI v1 image API"
Mar 18 17:39:43.383932 master-0 kubenswrapper[4090]: I0318 17:39:43.383870 4090 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 18 17:39:43.392142 master-0 kubenswrapper[4090]: I0318 17:39:43.392027 4090 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 fad39e74-417f-48de-99cb-6a377eb68dd8:/dev/vda3]
Mar 18 17:39:43.392142 master-0 kubenswrapper[4090]: I0318 17:39:43.392073 4090 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Mar 18 17:39:43.424984 master-0 kubenswrapper[4090]: I0318 17:39:43.424696 4090 manager.go:217] Machine: {Timestamp:2026-03-18 17:39:43.423663262 +0000 UTC m=+0.615935216 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:6ad73e7bdc944176a9641991d01dd6fa SystemUUID:6ad73e7b-dc94-4176-a964-1991d01dd6fa BootID:00a5b6c0-ddc6-4fc3-aaa2-1f9950d0acc4 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:91:e0:f5 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:ff:27:ac Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:96:16:48:af:1f:d9 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 18 17:39:43.424984 master-0 kubenswrapper[4090]: I0318 17:39:43.424923 4090 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 18 17:39:43.425500 master-0 kubenswrapper[4090]: I0318 17:39:43.425032 4090 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 18 17:39:43.425500 master-0 kubenswrapper[4090]: I0318 17:39:43.425361 4090 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 18 17:39:43.425618 master-0 kubenswrapper[4090]: I0318 17:39:43.425585 4090 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 18 17:39:43.425872 master-0 kubenswrapper[4090]: I0318 17:39:43.425619 4090 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 18 17:39:43.425872 master-0 kubenswrapper[4090]: I0318 17:39:43.425869 4090 topology_manager.go:138] "Creating topology manager with none policy"
Mar 18 17:39:43.426033 master-0 kubenswrapper[4090]: I0318 17:39:43.425880 4090 container_manager_linux.go:303] "Creating device plugin manager"
Mar 18 17:39:43.426033 master-0 kubenswrapper[4090]: I0318 17:39:43.425961 4090 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 17:39:43.426033 master-0 kubenswrapper[4090]: I0318 17:39:43.425975 4090 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 17:39:43.426190 master-0 kubenswrapper[4090]: I0318 17:39:43.426107 4090 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 17:39:43.426190 master-0 kubenswrapper[4090]: I0318 17:39:43.426184 4090 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 18 17:39:43.433264 master-0 kubenswrapper[4090]: I0318 17:39:43.433232 4090 kubelet.go:418] "Attempting to sync node with API server"
Mar 18 17:39:43.433264 master-0 kubenswrapper[4090]: I0318 17:39:43.433256 4090 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 18 17:39:43.433480 master-0 kubenswrapper[4090]: I0318 17:39:43.433326 4090 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 18 17:39:43.433480 master-0 kubenswrapper[4090]: I0318 17:39:43.433339 4090 kubelet.go:324] "Adding apiserver pod source"
Mar 18 17:39:43.433480 master-0 kubenswrapper[4090]: I0318 17:39:43.433357 4090 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 18 17:39:43.441081 master-0 kubenswrapper[4090]: I0318 17:39:43.441053 4090 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 18 17:39:43.445415 master-0 kubenswrapper[4090]: I0318 17:39:43.445365 4090 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 18 17:39:43.445803 master-0 kubenswrapper[4090]: I0318 17:39:43.445567 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 18 17:39:43.445803 master-0 kubenswrapper[4090]: I0318 17:39:43.445605 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 18 17:39:43.445803 master-0 kubenswrapper[4090]: I0318 17:39:43.445616 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 18 17:39:43.445803 master-0 kubenswrapper[4090]: I0318 17:39:43.445625 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 18 17:39:43.445803 master-0 kubenswrapper[4090]: I0318 17:39:43.445634 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 18 17:39:43.445803 master-0 kubenswrapper[4090]: I0318 17:39:43.445642 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 18 17:39:43.445803 master-0 kubenswrapper[4090]: I0318 17:39:43.445650 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 18 17:39:43.445803 master-0 kubenswrapper[4090]: I0318 17:39:43.445664 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 18 17:39:43.445803 master-0 kubenswrapper[4090]: I0318 17:39:43.445673 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 18 17:39:43.445803 master-0 kubenswrapper[4090]: I0318 17:39:43.445682 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 18 17:39:43.445803 master-0 kubenswrapper[4090]: I0318 17:39:43.445699 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 18 17:39:43.446536 master-0 kubenswrapper[4090]: W0318 17:39:43.445913 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 17:39:43.446536 master-0 kubenswrapper[4090]: W0318 17:39:43.445904 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 17:39:43.446536 master-0 kubenswrapper[4090]: E0318 17:39:43.446076 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 17:39:43.446536 master-0 kubenswrapper[4090]: E0318 17:39:43.446082 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 17:39:43.446536 master-0 kubenswrapper[4090]: I0318 17:39:43.446400 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 18 17:39:43.447446 master-0 kubenswrapper[4090]: I0318 17:39:43.447410 4090 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 18 17:39:43.447887 master-0 kubenswrapper[4090]: I0318 17:39:43.447849 4090 server.go:1280] "Started kubelet"
Mar 18 17:39:43.449533 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 18 17:39:43.453762 master-0 kubenswrapper[4090]: I0318 17:39:43.452635 4090 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 18 17:39:43.454764 master-0 kubenswrapper[4090]: I0318 17:39:43.454571 4090 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 18 17:39:43.454877 master-0 kubenswrapper[4090]: I0318 17:39:43.454803 4090 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 18 17:39:43.455547 master-0 kubenswrapper[4090]: I0318 17:39:43.455499 4090 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 18 17:39:43.455729 master-0 kubenswrapper[4090]: I0318 17:39:43.455670 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 17:39:43.457141 master-0 kubenswrapper[4090]: I0318 17:39:43.457059 4090 server.go:449] "Adding debug handlers to kubelet server"
Mar 18 17:39:43.457818 master-0 kubenswrapper[4090]: I0318 17:39:43.457764 4090 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 18 17:39:43.457936 master-0 kubenswrapper[4090]: I0318 17:39:43.457831 4090 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 18 17:39:43.459234 master-0 kubenswrapper[4090]: I0318 17:39:43.459199 4090 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 18 17:39:43.459234 master-0 kubenswrapper[4090]: I0318 17:39:43.459223 4090 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 18 17:39:43.459434 master-0 kubenswrapper[4090]: I0318 17:39:43.459381 4090 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 18 17:39:43.459657 master-0 kubenswrapper[4090]: E0318 17:39:43.459626 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:39:43.462484 master-0 kubenswrapper[4090]: I0318 17:39:43.462441 4090 reconstruct.go:97] "Volume reconstruction finished"
Mar 18 17:39:43.462484 master-0 kubenswrapper[4090]: I0318 17:39:43.462471 4090 reconciler.go:26] "Reconciler: start to sync state"
Mar 18 17:39:43.462807 master-0 kubenswrapper[4090]: I0318 17:39:43.462768 4090 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 18 17:39:43.462807 master-0 kubenswrapper[4090]: I0318 17:39:43.462805 4090 factory.go:55] Registering systemd factory
Mar 18 17:39:43.462934 master-0 kubenswrapper[4090]: I0318 17:39:43.462818 4090 factory.go:221] Registration of the systemd container factory successfully
Mar 18 17:39:43.463943 master-0 kubenswrapper[4090]: I0318 17:39:43.463495 4090 factory.go:153] Registering CRI-O factory
Mar 18 17:39:43.463943 master-0 kubenswrapper[4090]: I0318 17:39:43.463515 4090 factory.go:221] Registration of the crio container factory successfully
Mar 18 17:39:43.463943 master-0 kubenswrapper[4090]: I0318 17:39:43.463554 4090 factory.go:103] Registering Raw factory
Mar 18 17:39:43.463943 master-0 kubenswrapper[4090]: I0318 17:39:43.463569 4090 manager.go:1196] Started watching for new ooms in manager
Mar 18 17:39:43.464742 master-0 kubenswrapper[4090]: I0318 17:39:43.464307 4090 manager.go:319] Starting recovery of all containers
Mar 18 17:39:43.465566 master-0 kubenswrapper[4090]: W0318 17:39:43.465512 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 17:39:43.465791 master-0 kubenswrapper[4090]: E0318 17:39:43.465703 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 17:39:43.465989 master-0 kubenswrapper[4090]: E0318 17:39:43.465956 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 18 17:39:43.471263 master-0 kubenswrapper[4090]: E0318 17:39:43.465250 4090 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189e00413e402932 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.447820594 +0000 UTC m=+0.640092528,LastTimestamp:2026-03-18 17:39:43.447820594 +0000 UTC m=+0.640092528,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:39:43.471706 master-0 kubenswrapper[4090]: E0318 17:39:43.471627 4090 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Mar 18 17:39:43.489149 master-0 kubenswrapper[4090]: I0318 17:39:43.488848 4090 manager.go:324] Recovery completed
Mar 18 17:39:43.511081 master-0 kubenswrapper[4090]: I0318 17:39:43.511023 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:43.513326 master-0 kubenswrapper[4090]: I0318 17:39:43.513283 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:43.513430 master-0 kubenswrapper[4090]: I0318 17:39:43.513417 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:43.513544 master-0 kubenswrapper[4090]: I0318 17:39:43.513531 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:39:43.517280 master-0 kubenswrapper[4090]: I0318 17:39:43.517214 4090 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 18 17:39:43.517280 master-0 kubenswrapper[4090]: I0318 17:39:43.517260 4090 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 18 17:39:43.517408 master-0 kubenswrapper[4090]: I0318 17:39:43.517301 4090 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 17:39:43.521422 master-0 kubenswrapper[4090]: I0318 17:39:43.521405 4090 policy_none.go:49] "None policy: Start"
Mar 18 17:39:43.522505 master-0 kubenswrapper[4090]: I0318 17:39:43.522475 4090 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 18 17:39:43.522572 master-0 kubenswrapper[4090]: I0318 17:39:43.522517 4090 state_mem.go:35] "Initializing new in-memory state store"
Mar 18 17:39:43.559900 master-0 kubenswrapper[4090]: E0318 17:39:43.559798 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:39:43.583650 master-0 kubenswrapper[4090]: I0318 17:39:43.582342 4090 manager.go:334] "Starting Device Plugin manager"
Mar 18 17:39:43.583650 master-0 kubenswrapper[4090]: I0318 17:39:43.583512 4090 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 18 17:39:43.583650 master-0 kubenswrapper[4090]: I0318 17:39:43.583534 4090 server.go:79] "Starting device plugin registration server"
Mar 18 17:39:43.602952 master-0 kubenswrapper[4090]: I0318 17:39:43.585900 4090 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 18 17:39:43.602952 master-0 kubenswrapper[4090]: I0318 17:39:43.585919 4090 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 18 17:39:43.602952 master-0 kubenswrapper[4090]: E0318 17:39:43.587594 4090 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 17:39:43.602952 master-0 kubenswrapper[4090]: I0318 17:39:43.589003 4090 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 18 17:39:43.602952 master-0 kubenswrapper[4090]: I0318 17:39:43.589087 4090 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 18 17:39:43.602952 master-0 kubenswrapper[4090]: I0318 17:39:43.589095 4090 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 18 17:39:43.604040 master-0 kubenswrapper[4090]: I0318 17:39:43.603959 4090 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 18 17:39:43.606260 master-0 kubenswrapper[4090]: I0318 17:39:43.606218 4090 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 18 17:39:43.606376 master-0 kubenswrapper[4090]: I0318 17:39:43.606340 4090 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 18 17:39:43.606452 master-0 kubenswrapper[4090]: I0318 17:39:43.606419 4090 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 18 17:39:43.606561 master-0 kubenswrapper[4090]: E0318 17:39:43.606520 4090 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 18 17:39:43.607451 master-0 kubenswrapper[4090]: W0318 17:39:43.607396 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 17:39:43.607512 master-0 kubenswrapper[4090]: E0318 17:39:43.607474 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 17:39:43.668431 master-0 kubenswrapper[4090]: E0318 17:39:43.668183 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 18 17:39:43.686219 master-0 kubenswrapper[4090]: I0318 17:39:43.686135 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:43.687789 master-0 kubenswrapper[4090]: I0318 17:39:43.687726 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:43.687871 master-0 kubenswrapper[4090]: I0318 17:39:43.687803 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:43.687871 master-0 kubenswrapper[4090]: I0318 17:39:43.687829 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:39:43.687963 master-0 kubenswrapper[4090]: I0318 17:39:43.687881 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 17:39:43.689185 master-0 kubenswrapper[4090]: E0318 17:39:43.689081 4090 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 17:39:43.707350 master-0 kubenswrapper[4090]: I0318 17:39:43.707268 4090 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0"]
Mar 18 17:39:43.707438 master-0 kubenswrapper[4090]: I0318 17:39:43.707400 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:43.708863 master-0 kubenswrapper[4090]: I0318 17:39:43.708823 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:43.708938 master-0 kubenswrapper[4090]: I0318 17:39:43.708874 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:43.708938 master-0 kubenswrapper[4090]: I0318 17:39:43.708885 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:39:43.709051 master-0 kubenswrapper[4090]: I0318 17:39:43.709028 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:43.710027 master-0 kubenswrapper[4090]: I0318 17:39:43.709994 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:43.710124 master-0 kubenswrapper[4090]: I0318 17:39:43.710049 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:43.710124 master-0 kubenswrapper[4090]: I0318 17:39:43.710067 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:39:43.710477 master-0 kubenswrapper[4090]: I0318 17:39:43.710422 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:39:43.710553 master-0 kubenswrapper[4090]: I0318 17:39:43.710492 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:43.710553 master-0 kubenswrapper[4090]: I0318 17:39:43.710505 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:43.710634 master-0 kubenswrapper[4090]: I0318 17:39:43.710611 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:39:43.710675 master-0 kubenswrapper[4090]: I0318 17:39:43.710653 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:43.711617 master-0 kubenswrapper[4090]: I0318 17:39:43.711573 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:43.711617 master-0 kubenswrapper[4090]: I0318 17:39:43.711621 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:43.711743 master-0 kubenswrapper[4090]: I0318 17:39:43.711645 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:39:43.711743 master-0 kubenswrapper[4090]: I0318 17:39:43.711661 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:43.711743 master-0 kubenswrapper[4090]: I0318 17:39:43.711675 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:43.711743 master-0 kubenswrapper[4090]: I0318 17:39:43.711688 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:39:43.711995 master-0 kubenswrapper[4090]: I0318 17:39:43.711762 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:43.711995 master-0 kubenswrapper[4090]: I0318 17:39:43.711779 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:43.711995 master-0 kubenswrapper[4090]: I0318 17:39:43.711805 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:43.711995 master-0
kubenswrapper[4090]: I0318 17:39:43.711827 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:43.711995 master-0 kubenswrapper[4090]: I0318 17:39:43.711821 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 17:39:43.711995 master-0 kubenswrapper[4090]: I0318 17:39:43.711872 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:43.712846 master-0 kubenswrapper[4090]: I0318 17:39:43.712820 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:43.712902 master-0 kubenswrapper[4090]: I0318 17:39:43.712848 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:43.712902 master-0 kubenswrapper[4090]: I0318 17:39:43.712858 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:43.713099 master-0 kubenswrapper[4090]: I0318 17:39:43.713047 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:43.713154 master-0 kubenswrapper[4090]: I0318 17:39:43.713116 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:43.713154 master-0 kubenswrapper[4090]: I0318 17:39:43.713137 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:43.713438 master-0 kubenswrapper[4090]: I0318 17:39:43.713399 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:43.713676 master-0 kubenswrapper[4090]: I0318 17:39:43.713619 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 17:39:43.713754 master-0 kubenswrapper[4090]: I0318 17:39:43.713692 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:43.714604 master-0 kubenswrapper[4090]: I0318 17:39:43.714560 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:43.714670 master-0 kubenswrapper[4090]: I0318 17:39:43.714608 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:43.714670 master-0 kubenswrapper[4090]: I0318 17:39:43.714630 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:43.714847 master-0 kubenswrapper[4090]: I0318 17:39:43.714809 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:39:43.714897 master-0 kubenswrapper[4090]: I0318 17:39:43.714853 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:43.714949 master-0 kubenswrapper[4090]: I0318 17:39:43.714912 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:43.715003 master-0 kubenswrapper[4090]: I0318 17:39:43.714958 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:43.715003 master-0 kubenswrapper[4090]: I0318 17:39:43.714981 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:43.715769 master-0 kubenswrapper[4090]: I0318 17:39:43.715725 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:43.715834 
master-0 kubenswrapper[4090]: I0318 17:39:43.715775 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:43.715834 master-0 kubenswrapper[4090]: I0318 17:39:43.715795 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:43.764153 master-0 kubenswrapper[4090]: I0318 17:39:43.764102 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:39:43.764260 master-0 kubenswrapper[4090]: I0318 17:39:43.764211 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:39:43.764339 master-0 kubenswrapper[4090]: I0318 17:39:43.764316 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 17:39:43.764392 master-0 kubenswrapper[4090]: I0318 17:39:43.764350 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 17:39:43.764436 master-0 kubenswrapper[4090]: I0318 17:39:43.764414 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:39:43.764544 master-0 kubenswrapper[4090]: I0318 17:39:43.764448 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.764983 master-0 kubenswrapper[4090]: I0318 17:39:43.764904 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.765056 master-0 kubenswrapper[4090]: I0318 17:39:43.764993 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.765113 master-0 kubenswrapper[4090]: I0318 17:39:43.765057 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 17:39:43.765172 master-0 kubenswrapper[4090]: I0318 17:39:43.765089 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:39:43.765172 master-0 kubenswrapper[4090]: I0318 17:39:43.765155 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:39:43.765357 master-0 kubenswrapper[4090]: I0318 17:39:43.765264 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:39:43.765357 master-0 kubenswrapper[4090]: I0318 17:39:43.765342 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:39:43.765462 master-0 kubenswrapper[4090]: 
I0318 17:39:43.765369 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.765462 master-0 kubenswrapper[4090]: I0318 17:39:43.765392 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.765462 master-0 kubenswrapper[4090]: I0318 17:39:43.765410 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.765462 master-0 kubenswrapper[4090]: I0318 17:39:43.765431 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 17:39:43.866598 master-0 kubenswrapper[4090]: I0318 17:39:43.866463 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.866598 master-0 kubenswrapper[4090]: I0318 17:39:43.866553 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.867093 master-0 kubenswrapper[4090]: I0318 17:39:43.866640 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.867093 master-0 kubenswrapper[4090]: I0318 17:39:43.866673 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.867093 master-0 kubenswrapper[4090]: I0318 17:39:43.866746 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 17:39:43.867093 master-0 kubenswrapper[4090]: I0318 17:39:43.866805 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " 
pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 17:39:43.867093 master-0 kubenswrapper[4090]: I0318 17:39:43.866867 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.867093 master-0 kubenswrapper[4090]: I0318 17:39:43.866903 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.867093 master-0 kubenswrapper[4090]: I0318 17:39:43.866946 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:39:43.867093 master-0 kubenswrapper[4090]: I0318 17:39:43.867003 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 17:39:43.867093 master-0 kubenswrapper[4090]: I0318 17:39:43.867049 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 17:39:43.867093 master-0 kubenswrapper[4090]: I0318 17:39:43.867081 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 17:39:43.867093 master-0 kubenswrapper[4090]: I0318 17:39:43.867056 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:39:43.867093 master-0 kubenswrapper[4090]: I0318 17:39:43.867103 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867142 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867153 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 
18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867183 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867220 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867261 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867313 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867332 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867366 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867369 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867397 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867394 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867226 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" 
Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867441 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867415 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867566 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867618 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:39:43.867817 master-0 kubenswrapper[4090]: I0318 17:39:43.867737 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod 
\"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:39:43.868759 master-0 kubenswrapper[4090]: I0318 17:39:43.867667 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:39:43.868759 master-0 kubenswrapper[4090]: I0318 17:39:43.867800 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:39:43.868759 master-0 kubenswrapper[4090]: I0318 17:39:43.867813 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:39:43.889661 master-0 kubenswrapper[4090]: I0318 17:39:43.889584 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:43.894492 master-0 kubenswrapper[4090]: I0318 17:39:43.894435 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:43.894618 master-0 kubenswrapper[4090]: I0318 17:39:43.894507 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:43.894618 master-0 
kubenswrapper[4090]: I0318 17:39:43.894529 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:43.894618 master-0 kubenswrapper[4090]: I0318 17:39:43.894600 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 17:39:43.895910 master-0 kubenswrapper[4090]: E0318 17:39:43.895852 4090 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 17:39:44.043245 master-0 kubenswrapper[4090]: I0318 17:39:44.043034 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:39:44.063882 master-0 kubenswrapper[4090]: I0318 17:39:44.063817 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:39:44.069791 master-0 kubenswrapper[4090]: E0318 17:39:44.069711 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 18 17:39:44.074130 master-0 kubenswrapper[4090]: I0318 17:39:44.074035 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 17:39:44.092174 master-0 kubenswrapper[4090]: I0318 17:39:44.092116 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 17:39:44.100442 master-0 kubenswrapper[4090]: I0318 17:39:44.100370 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:39:44.296693 master-0 kubenswrapper[4090]: I0318 17:39:44.296505 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:44.298563 master-0 kubenswrapper[4090]: I0318 17:39:44.298027 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:44.298563 master-0 kubenswrapper[4090]: I0318 17:39:44.298080 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:44.298563 master-0 kubenswrapper[4090]: I0318 17:39:44.298098 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:44.298563 master-0 kubenswrapper[4090]: I0318 17:39:44.298213 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 17:39:44.299337 master-0 kubenswrapper[4090]: E0318 17:39:44.299239 4090 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 17:39:44.429219 master-0 kubenswrapper[4090]: W0318 17:39:44.429114 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:44.429219 master-0 kubenswrapper[4090]: E0318 17:39:44.429216 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" 
logger="UnhandledError" Mar 18 17:39:44.457133 master-0 kubenswrapper[4090]: I0318 17:39:44.457038 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:44.474664 master-0 kubenswrapper[4090]: W0318 17:39:44.474542 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:44.474664 master-0 kubenswrapper[4090]: E0318 17:39:44.474640 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 17:39:44.729778 master-0 kubenswrapper[4090]: W0318 17:39:44.729664 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:44.730013 master-0 kubenswrapper[4090]: E0318 17:39:44.729781 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 17:39:44.871004 master-0 kubenswrapper[4090]: E0318 17:39:44.870910 4090 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 18 17:39:45.036873 master-0 kubenswrapper[4090]: W0318 17:39:45.036674 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:45.036873 master-0 kubenswrapper[4090]: E0318 17:39:45.036819 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 17:39:45.099977 master-0 kubenswrapper[4090]: I0318 17:39:45.099917 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:45.101948 master-0 kubenswrapper[4090]: I0318 17:39:45.101895 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:45.101948 master-0 kubenswrapper[4090]: I0318 17:39:45.101958 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:45.102130 master-0 kubenswrapper[4090]: I0318 17:39:45.101976 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:45.102130 master-0 kubenswrapper[4090]: I0318 17:39:45.102038 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 17:39:45.103158 master-0 kubenswrapper[4090]: E0318 17:39:45.103089 
4090 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 17:39:45.104453 master-0 kubenswrapper[4090]: W0318 17:39:45.104381 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f265536aba6292ead501bc9b49f327.slice/crio-4c05ce2500dc59522a6a15e9d7a181f449cb0590dccc6ae6225c9f2d2a528378 WatchSource:0}: Error finding container 4c05ce2500dc59522a6a15e9d7a181f449cb0590dccc6ae6225c9f2d2a528378: Status 404 returned error can't find the container with id 4c05ce2500dc59522a6a15e9d7a181f449cb0590dccc6ae6225c9f2d2a528378 Mar 18 17:39:45.105017 master-0 kubenswrapper[4090]: W0318 17:39:45.104965 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd664a6d0d2a24360dee10612610f1b59.slice/crio-6a85b3ee12aea7b46bda118fb48d0b8760d887f0c07b29fb0b4386fa0f1ccc35 WatchSource:0}: Error finding container 6a85b3ee12aea7b46bda118fb48d0b8760d887f0c07b29fb0b4386fa0f1ccc35: Status 404 returned error can't find the container with id 6a85b3ee12aea7b46bda118fb48d0b8760d887f0c07b29fb0b4386fa0f1ccc35 Mar 18 17:39:45.105917 master-0 kubenswrapper[4090]: W0318 17:39:45.105877 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc83737980b9ee109184b1d78e942cf36.slice/crio-95d9ed6b43dde926cc6bff2e2109470565941e7ec534301d19b34350a3fd9914 WatchSource:0}: Error finding container 95d9ed6b43dde926cc6bff2e2109470565941e7ec534301d19b34350a3fd9914: Status 404 returned error can't find the container with id 95d9ed6b43dde926cc6bff2e2109470565941e7ec534301d19b34350a3fd9914 Mar 18 17:39:45.110472 master-0 kubenswrapper[4090]: I0318 17:39:45.110386 4090 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Mar 18 17:39:45.219991 master-0 kubenswrapper[4090]: W0318 17:39:45.219943 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1249822f86f23526277d165c0d5d3c19.slice/crio-e03411b7c2bf8367c968ed1adc09d9fecafd420d1122805ea765d466352e23a3 WatchSource:0}: Error finding container e03411b7c2bf8367c968ed1adc09d9fecafd420d1122805ea765d466352e23a3: Status 404 returned error can't find the container with id e03411b7c2bf8367c968ed1adc09d9fecafd420d1122805ea765d466352e23a3 Mar 18 17:39:45.256661 master-0 kubenswrapper[4090]: W0318 17:39:45.256624 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49fac1b46a11e49501805e891baae4a9.slice/crio-6a2b235f936941b3155c2b3d3485ca14ee2d3465fd9996759d1380c11160d84a WatchSource:0}: Error finding container 6a2b235f936941b3155c2b3d3485ca14ee2d3465fd9996759d1380c11160d84a: Status 404 returned error can't find the container with id 6a2b235f936941b3155c2b3d3485ca14ee2d3465fd9996759d1380c11160d84a Mar 18 17:39:45.457445 master-0 kubenswrapper[4090]: I0318 17:39:45.457356 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:45.542994 master-0 kubenswrapper[4090]: I0318 17:39:45.513887 4090 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 18 17:39:45.542994 master-0 kubenswrapper[4090]: E0318 17:39:45.515517 4090 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 17:39:45.614794 master-0 kubenswrapper[4090]: I0318 17:39:45.614544 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"6a2b235f936941b3155c2b3d3485ca14ee2d3465fd9996759d1380c11160d84a"} Mar 18 17:39:45.616845 master-0 kubenswrapper[4090]: I0318 17:39:45.616791 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"e03411b7c2bf8367c968ed1adc09d9fecafd420d1122805ea765d466352e23a3"} Mar 18 17:39:45.618672 master-0 kubenswrapper[4090]: I0318 17:39:45.618603 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"95d9ed6b43dde926cc6bff2e2109470565941e7ec534301d19b34350a3fd9914"} Mar 18 17:39:45.620157 master-0 kubenswrapper[4090]: I0318 17:39:45.620081 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"4c05ce2500dc59522a6a15e9d7a181f449cb0590dccc6ae6225c9f2d2a528378"} Mar 18 17:39:45.621186 master-0 kubenswrapper[4090]: I0318 17:39:45.621117 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"6a85b3ee12aea7b46bda118fb48d0b8760d887f0c07b29fb0b4386fa0f1ccc35"} Mar 18 17:39:46.457617 master-0 kubenswrapper[4090]: I0318 17:39:46.457543 4090 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:46.472426 master-0 kubenswrapper[4090]: E0318 17:39:46.472382 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 18 17:39:46.704181 master-0 kubenswrapper[4090]: I0318 17:39:46.704129 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:46.705198 master-0 kubenswrapper[4090]: I0318 17:39:46.705174 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:46.705247 master-0 kubenswrapper[4090]: I0318 17:39:46.705202 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:46.705247 master-0 kubenswrapper[4090]: I0318 17:39:46.705212 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:46.705332 master-0 kubenswrapper[4090]: I0318 17:39:46.705253 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 17:39:46.705801 master-0 kubenswrapper[4090]: E0318 17:39:46.705773 4090 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 17:39:46.800245 master-0 kubenswrapper[4090]: W0318 17:39:46.800098 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:46.800245 master-0 kubenswrapper[4090]: E0318 17:39:46.800150 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 17:39:47.029140 master-0 kubenswrapper[4090]: W0318 17:39:47.028947 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:47.029226 master-0 kubenswrapper[4090]: E0318 17:39:47.029157 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 17:39:47.184410 master-0 kubenswrapper[4090]: W0318 17:39:47.184331 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:47.184410 master-0 kubenswrapper[4090]: E0318 17:39:47.184392 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 17:39:47.457629 master-0 kubenswrapper[4090]: I0318 17:39:47.457471 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:47.626541 master-0 kubenswrapper[4090]: I0318 17:39:47.626488 4090 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="3803a5540326c74452858cec12bf5343a5ecb670acc1d4e7c87a18dad91b712b" exitCode=0 Mar 18 17:39:47.626541 master-0 kubenswrapper[4090]: I0318 17:39:47.626531 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"3803a5540326c74452858cec12bf5343a5ecb670acc1d4e7c87a18dad91b712b"} Mar 18 17:39:47.627215 master-0 kubenswrapper[4090]: I0318 17:39:47.626571 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:47.627215 master-0 kubenswrapper[4090]: I0318 17:39:47.627200 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:47.627299 master-0 kubenswrapper[4090]: I0318 17:39:47.627231 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:47.627299 master-0 kubenswrapper[4090]: I0318 17:39:47.627244 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:48.183804 master-0 kubenswrapper[4090]: W0318 17:39:48.183722 4090 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:48.183804 master-0 kubenswrapper[4090]: E0318 17:39:48.183792 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 17:39:48.456701 master-0 kubenswrapper[4090]: I0318 17:39:48.456569 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:48.631182 master-0 kubenswrapper[4090]: I0318 17:39:48.631109 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"1d30b6f37f4ad53c3294bea48dd4a0769d42ea2d80a5395f6ef8c16034150f6c"} Mar 18 17:39:48.632034 master-0 kubenswrapper[4090]: I0318 17:39:48.631201 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"99dc9cff4665f248f4ae68c96db3198a4bcd4d7b5dbfb367bdf3864e44ad29fc"} Mar 18 17:39:48.632034 master-0 kubenswrapper[4090]: I0318 17:39:48.631150 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:48.633072 master-0 kubenswrapper[4090]: I0318 17:39:48.633039 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientMemory" Mar 18 17:39:48.633072 master-0 kubenswrapper[4090]: I0318 17:39:48.633077 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:48.633181 master-0 kubenswrapper[4090]: I0318 17:39:48.633088 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:48.634848 master-0 kubenswrapper[4090]: I0318 17:39:48.634809 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log" Mar 18 17:39:48.635483 master-0 kubenswrapper[4090]: I0318 17:39:48.635116 4090 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="ab38890abe77fe8ca49ec5c2e51b884c386846e7033bb1eec66a9126ced4b179" exitCode=1 Mar 18 17:39:48.635483 master-0 kubenswrapper[4090]: I0318 17:39:48.635161 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"ab38890abe77fe8ca49ec5c2e51b884c386846e7033bb1eec66a9126ced4b179"} Mar 18 17:39:48.635578 master-0 kubenswrapper[4090]: I0318 17:39:48.635468 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:48.636559 master-0 kubenswrapper[4090]: I0318 17:39:48.636530 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:48.636619 master-0 kubenswrapper[4090]: I0318 17:39:48.636563 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:48.636619 master-0 kubenswrapper[4090]: I0318 17:39:48.636574 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Mar 18 17:39:48.636862 master-0 kubenswrapper[4090]: I0318 17:39:48.636829 4090 scope.go:117] "RemoveContainer" containerID="ab38890abe77fe8ca49ec5c2e51b884c386846e7033bb1eec66a9126ced4b179" Mar 18 17:39:49.457766 master-0 kubenswrapper[4090]: I0318 17:39:49.457668 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:49.563623 master-0 kubenswrapper[4090]: I0318 17:39:49.563554 4090 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 18 17:39:49.564937 master-0 kubenswrapper[4090]: E0318 17:39:49.564893 4090 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 17:39:49.639678 master-0 kubenswrapper[4090]: I0318 17:39:49.639626 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log" Mar 18 17:39:49.640364 master-0 kubenswrapper[4090]: I0318 17:39:49.640302 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log" Mar 18 17:39:49.640915 master-0 kubenswrapper[4090]: I0318 17:39:49.640867 4090 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="61bd789344076c47f8d7fd3e3af6f341ca32ad16550699ddcda9363e78e1e116" exitCode=1 Mar 18 
17:39:49.640997 master-0 kubenswrapper[4090]: I0318 17:39:49.640966 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"61bd789344076c47f8d7fd3e3af6f341ca32ad16550699ddcda9363e78e1e116"} Mar 18 17:39:49.641058 master-0 kubenswrapper[4090]: I0318 17:39:49.641017 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:49.641122 master-0 kubenswrapper[4090]: I0318 17:39:49.641055 4090 scope.go:117] "RemoveContainer" containerID="ab38890abe77fe8ca49ec5c2e51b884c386846e7033bb1eec66a9126ced4b179" Mar 18 17:39:49.641122 master-0 kubenswrapper[4090]: I0318 17:39:49.641080 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:49.642892 master-0 kubenswrapper[4090]: I0318 17:39:49.642068 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:49.642892 master-0 kubenswrapper[4090]: I0318 17:39:49.642105 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:49.642892 master-0 kubenswrapper[4090]: I0318 17:39:49.642121 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:49.642892 master-0 kubenswrapper[4090]: I0318 17:39:49.642407 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:49.642892 master-0 kubenswrapper[4090]: I0318 17:39:49.642432 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:49.642892 master-0 kubenswrapper[4090]: I0318 17:39:49.642447 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Mar 18 17:39:49.642892 master-0 kubenswrapper[4090]: I0318 17:39:49.642785 4090 scope.go:117] "RemoveContainer" containerID="61bd789344076c47f8d7fd3e3af6f341ca32ad16550699ddcda9363e78e1e116" Mar 18 17:39:49.643962 master-0 kubenswrapper[4090]: E0318 17:39:49.642996 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 18 17:39:49.673477 master-0 kubenswrapper[4090]: E0318 17:39:49.673419 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 18 17:39:49.906987 master-0 kubenswrapper[4090]: I0318 17:39:49.906877 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:49.908734 master-0 kubenswrapper[4090]: I0318 17:39:49.908697 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:49.908797 master-0 kubenswrapper[4090]: I0318 17:39:49.908753 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:49.908797 master-0 kubenswrapper[4090]: I0318 17:39:49.908770 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:49.908879 master-0 kubenswrapper[4090]: I0318 17:39:49.908827 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 17:39:49.909717 master-0 
kubenswrapper[4090]: E0318 17:39:49.909673 4090 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 17:39:50.457586 master-0 kubenswrapper[4090]: I0318 17:39:50.457537 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:50.643798 master-0 kubenswrapper[4090]: I0318 17:39:50.643709 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log" Mar 18 17:39:50.644311 master-0 kubenswrapper[4090]: I0318 17:39:50.644290 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:50.645088 master-0 kubenswrapper[4090]: I0318 17:39:50.644883 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:50.645088 master-0 kubenswrapper[4090]: I0318 17:39:50.644915 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:50.645088 master-0 kubenswrapper[4090]: I0318 17:39:50.644924 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:50.645325 master-0 kubenswrapper[4090]: I0318 17:39:50.645308 4090 scope.go:117] "RemoveContainer" containerID="61bd789344076c47f8d7fd3e3af6f341ca32ad16550699ddcda9363e78e1e116" Mar 18 17:39:50.645475 master-0 kubenswrapper[4090]: E0318 17:39:50.645444 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 18 17:39:51.346822 master-0 kubenswrapper[4090]: E0318 17:39:51.346643 4090 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189e00413e402932 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.447820594 +0000 UTC m=+0.640092528,LastTimestamp:2026-03-18 17:39:43.447820594 +0000 UTC m=+0.640092528,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:39:51.457068 master-0 kubenswrapper[4090]: I0318 17:39:51.457003 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:52.021760 master-0 kubenswrapper[4090]: W0318 17:39:52.021657 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:52.021760 master-0 kubenswrapper[4090]: E0318 17:39:52.021761 4090 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 17:39:52.457442 master-0 kubenswrapper[4090]: I0318 17:39:52.457321 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:52.873090 master-0 kubenswrapper[4090]: W0318 17:39:52.872972 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:52.873330 master-0 kubenswrapper[4090]: E0318 17:39:52.873103 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 17:39:53.360701 master-0 kubenswrapper[4090]: W0318 17:39:53.360552 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:53.360701 master-0 kubenswrapper[4090]: E0318 17:39:53.360657 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 17:39:53.457629 master-0 kubenswrapper[4090]: I0318 17:39:53.457566 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 17:39:53.587826 master-0 kubenswrapper[4090]: E0318 17:39:53.587777 4090 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 17:39:53.651606 master-0 kubenswrapper[4090]: I0318 17:39:53.651044 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"774c63dac090e52a2318d2a44e73b16fc328b4dc2d265dcfd10522ed7532c288"} Mar 18 17:39:53.651606 master-0 kubenswrapper[4090]: I0318 17:39:53.651138 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:39:53.652490 master-0 kubenswrapper[4090]: I0318 17:39:53.652060 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:39:53.652490 master-0 kubenswrapper[4090]: I0318 17:39:53.652078 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:39:53.652490 master-0 kubenswrapper[4090]: I0318 17:39:53.652086 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:39:53.653817 master-0 kubenswrapper[4090]: I0318 17:39:53.653776 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e"}
Mar 18 17:39:53.655885 master-0 kubenswrapper[4090]: I0318 17:39:53.655568 4090 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="9229a0847dcc4bfd99187b8d4d1c4189d57cc38cb01e1689224e1d421ed9426b" exitCode=0
Mar 18 17:39:53.655885 master-0 kubenswrapper[4090]: I0318 17:39:53.655590 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"9229a0847dcc4bfd99187b8d4d1c4189d57cc38cb01e1689224e1d421ed9426b"}
Mar 18 17:39:53.655885 master-0 kubenswrapper[4090]: I0318 17:39:53.655646 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:53.656185 master-0 kubenswrapper[4090]: I0318 17:39:53.656136 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:53.656185 master-0 kubenswrapper[4090]: I0318 17:39:53.656149 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:53.656380 master-0 kubenswrapper[4090]: I0318 17:39:53.656322 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:39:53.660051 master-0 kubenswrapper[4090]: I0318 17:39:53.660020 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:53.661113 master-0 kubenswrapper[4090]: I0318 17:39:53.660889 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:53.661113 master-0 kubenswrapper[4090]: I0318 17:39:53.660909 4090 kubelet_node_status.go:724] "Recording event message for 
node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:53.661113 master-0 kubenswrapper[4090]: I0318 17:39:53.660924 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:39:53.939527 master-0 kubenswrapper[4090]: W0318 17:39:53.939265 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 17:39:53.939527 master-0 kubenswrapper[4090]: E0318 17:39:53.939436 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 17:39:54.662771 master-0 kubenswrapper[4090]: I0318 17:39:54.662716 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"3c6f642b736991fd20242697f9273f8f6a126bc6027f7c5ddd27e70569fd9054"}
Mar 18 17:39:54.664303 master-0 kubenswrapper[4090]: I0318 17:39:54.663443 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:54.664402 master-0 kubenswrapper[4090]: I0318 17:39:54.664358 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:54.664402 master-0 kubenswrapper[4090]: I0318 17:39:54.664392 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:54.664402 master-0 kubenswrapper[4090]: I0318 
17:39:54.664401 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:39:55.349293 master-0 kubenswrapper[4090]: I0318 17:39:55.347427 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:39:55.460199 master-0 kubenswrapper[4090]: I0318 17:39:55.460155 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:39:56.080312 master-0 kubenswrapper[4090]: E0318 17:39:56.080236 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 18 17:39:56.312968 master-0 kubenswrapper[4090]: I0318 17:39:56.312866 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:56.314374 master-0 kubenswrapper[4090]: I0318 17:39:56.314310 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:56.314495 master-0 kubenswrapper[4090]: I0318 17:39:56.314377 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:56.314495 master-0 kubenswrapper[4090]: I0318 17:39:56.314403 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:39:56.314495 master-0 kubenswrapper[4090]: I0318 17:39:56.314485 4090 kubelet_node_status.go:76] "Attempting to register node" 
node="master-0"
Mar 18 17:39:56.322712 master-0 kubenswrapper[4090]: E0318 17:39:56.322639 4090 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 18 17:39:56.475334 master-0 kubenswrapper[4090]: I0318 17:39:56.467545 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:39:57.467065 master-0 kubenswrapper[4090]: I0318 17:39:57.466926 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:39:57.680000 master-0 kubenswrapper[4090]: I0318 17:39:57.679885 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"6a3212eaacddf8a633d9171d89d86f056fc2eaf17af107aa2bced9e6262d3611"}
Mar 18 17:39:57.680000 master-0 kubenswrapper[4090]: I0318 17:39:57.679953 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:57.681094 master-0 kubenswrapper[4090]: I0318 17:39:57.681033 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:57.681094 master-0 kubenswrapper[4090]: I0318 17:39:57.681079 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:57.681094 master-0 kubenswrapper[4090]: I0318 17:39:57.681095 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID"
Mar 18 17:39:57.683632 master-0 kubenswrapper[4090]: I0318 17:39:57.683580 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"b07f4eb106a117d2a3aedb26bb538e640c6545e341eb4a44bae581e10c947c17"}
Mar 18 17:39:57.683879 master-0 kubenswrapper[4090]: I0318 17:39:57.683807 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:57.685707 master-0 kubenswrapper[4090]: I0318 17:39:57.685640 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:57.685707 master-0 kubenswrapper[4090]: I0318 17:39:57.685694 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:57.685886 master-0 kubenswrapper[4090]: I0318 17:39:57.685718 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:39:57.797237 master-0 kubenswrapper[4090]: I0318 17:39:57.797063 4090 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 17:39:57.820369 master-0 kubenswrapper[4090]: I0318 17:39:57.820263 4090 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 18 17:39:58.464744 master-0 kubenswrapper[4090]: I0318 17:39:58.464652 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:39:58.656239 master-0 kubenswrapper[4090]: I0318 17:39:58.656093 4090 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:39:58.661344 master-0 kubenswrapper[4090]: I0318 17:39:58.661294 4090 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:39:58.686341 master-0 kubenswrapper[4090]: I0318 17:39:58.686259 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:58.686510 master-0 kubenswrapper[4090]: I0318 17:39:58.686352 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:39:58.686510 master-0 kubenswrapper[4090]: I0318 17:39:58.686354 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:58.687480 master-0 kubenswrapper[4090]: I0318 17:39:58.687413 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:58.687480 master-0 kubenswrapper[4090]: I0318 17:39:58.687444 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:58.687480 master-0 kubenswrapper[4090]: I0318 17:39:58.687455 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:39:58.688035 master-0 kubenswrapper[4090]: I0318 17:39:58.687997 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:58.688157 master-0 kubenswrapper[4090]: I0318 17:39:58.688045 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:58.688157 master-0 kubenswrapper[4090]: I0318 17:39:58.688059 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:39:59.465017 master-0 
kubenswrapper[4090]: I0318 17:39:59.464930 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:39:59.689101 master-0 kubenswrapper[4090]: I0318 17:39:59.689053 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:39:59.690177 master-0 kubenswrapper[4090]: I0318 17:39:59.690160 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:39:59.690286 master-0 kubenswrapper[4090]: I0318 17:39:59.690264 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:39:59.690356 master-0 kubenswrapper[4090]: I0318 17:39:59.690346 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:39:59.749597 master-0 kubenswrapper[4090]: W0318 17:39:59.749428 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 18 17:39:59.749597 master-0 kubenswrapper[4090]: E0318 17:39:59.749517 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 18 17:40:00.464346 master-0 kubenswrapper[4090]: I0318 17:40:00.464254 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the 
cluster scope
Mar 18 17:40:00.650825 master-0 kubenswrapper[4090]: I0318 17:40:00.650699 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:40:00.651140 master-0 kubenswrapper[4090]: I0318 17:40:00.650953 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:40:00.652905 master-0 kubenswrapper[4090]: I0318 17:40:00.652787 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:40:00.652905 master-0 kubenswrapper[4090]: I0318 17:40:00.652864 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:40:00.652905 master-0 kubenswrapper[4090]: I0318 17:40:00.652889 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:40:01.223775 master-0 kubenswrapper[4090]: I0318 17:40:01.223709 4090 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:40:01.224182 master-0 kubenswrapper[4090]: I0318 17:40:01.223906 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:40:01.227795 master-0 kubenswrapper[4090]: I0318 17:40:01.227729 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:40:01.227795 master-0 kubenswrapper[4090]: I0318 17:40:01.227759 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:40:01.227795 master-0 kubenswrapper[4090]: I0318 17:40:01.227772 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:40:01.230062 master-0 kubenswrapper[4090]: I0318 17:40:01.229999 
4090 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:40:01.355954 master-0 kubenswrapper[4090]: E0318 17:40:01.355693 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e00413e402932 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.447820594 +0000 UTC m=+0.640092528,LastTimestamp:2026-03-18 17:39:43.447820594 +0000 UTC m=+0.640092528,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.364630 master-0 kubenswrapper[4090]: E0318 17:40:01.364415 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e00414228bc01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513394177 +0000 UTC m=+0.705666111,LastTimestamp:2026-03-18 17:39:43.513394177 +0000 UTC m=+0.705666111,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.372405 master-0 
kubenswrapper[4090]: E0318 17:40:01.372247 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e0041422a8495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513511061 +0000 UTC m=+0.705782985,LastTimestamp:2026-03-18 17:39:43.513511061 +0000 UTC m=+0.705782985,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.378678 master-0 kubenswrapper[4090]: E0318 17:40:01.378473 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e0041422bd21c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513596444 +0000 UTC m=+0.705868368,LastTimestamp:2026-03-18 17:39:43.513596444 +0000 UTC m=+0.705868368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.385330 master-0 kubenswrapper[4090]: E0318 17:40:01.385156 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e0041474d93c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.599694792 +0000 UTC m=+0.791966746,LastTimestamp:2026-03-18 17:39:43.599694792 +0000 UTC m=+0.791966746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.392968 master-0 kubenswrapper[4090]: E0318 17:40:01.392783 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e00414228bc01\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e00414228bc01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513394177 +0000 UTC m=+0.705666111,LastTimestamp:2026-03-18 17:39:43.687782696 +0000 UTC m=+0.880054640,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.401479 master-0 kubenswrapper[4090]: E0318 17:40:01.401247 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e0041422a8495\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group 
\"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e0041422a8495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513511061 +0000 UTC m=+0.705782985,LastTimestamp:2026-03-18 17:39:43.687819227 +0000 UTC m=+0.880091181,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.411828 master-0 kubenswrapper[4090]: E0318 17:40:01.410961 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e0041422bd21c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e0041422bd21c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513596444 +0000 UTC m=+0.705868368,LastTimestamp:2026-03-18 17:39:43.687840998 +0000 UTC m=+0.880112952,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.419081 master-0 kubenswrapper[4090]: E0318 17:40:01.418838 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e00414228bc01\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{master-0.189e00414228bc01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513394177 +0000 UTC m=+0.705666111,LastTimestamp:2026-03-18 17:39:43.708853403 +0000 UTC m=+0.901125317,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.427088 master-0 kubenswrapper[4090]: E0318 17:40:01.426922 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e0041422a8495\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e0041422a8495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513511061 +0000 UTC m=+0.705782985,LastTimestamp:2026-03-18 17:39:43.708881764 +0000 UTC m=+0.901153678,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.435551 master-0 kubenswrapper[4090]: E0318 17:40:01.435369 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e0041422bd21c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e0041422bd21c default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513596444 +0000 UTC m=+0.705868368,LastTimestamp:2026-03-18 17:39:43.708891595 +0000 UTC m=+0.901163499,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.443653 master-0 kubenswrapper[4090]: E0318 17:40:01.443411 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e00414228bc01\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e00414228bc01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513394177 +0000 UTC m=+0.705666111,LastTimestamp:2026-03-18 17:39:43.710028098 +0000 UTC m=+0.902300012,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.452617 master-0 kubenswrapper[4090]: E0318 17:40:01.452406 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e0041422a8495\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e0041422a8495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513511061 +0000 UTC m=+0.705782985,LastTimestamp:2026-03-18 17:39:43.710061269 +0000 UTC m=+0.902333183,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.460512 master-0 kubenswrapper[4090]: I0318 17:40:01.460451 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:40:01.460655 master-0 kubenswrapper[4090]: E0318 17:40:01.460430 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e0041422bd21c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e0041422bd21c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513596444 +0000 UTC m=+0.705868368,LastTimestamp:2026-03-18 17:39:43.710073699 +0000 UTC m=+0.902345613,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.465493 master-0 kubenswrapper[4090]: E0318 17:40:01.464965 4090 event.go:359] "Server rejected event (will not retry!)" err="events 
\"master-0.189e00414228bc01\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e00414228bc01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513394177 +0000 UTC m=+0.705666111,LastTimestamp:2026-03-18 17:39:43.711607076 +0000 UTC m=+0.903879020,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.470397 master-0 kubenswrapper[4090]: E0318 17:40:01.469864 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e0041422a8495\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e0041422a8495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513511061 +0000 UTC m=+0.705782985,LastTimestamp:2026-03-18 17:39:43.711635777 +0000 UTC m=+0.903907721,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.475389 master-0 kubenswrapper[4090]: E0318 17:40:01.475164 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e0041422bd21c\" is forbidden: User \"system:anonymous\" 
cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e0041422bd21c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513596444 +0000 UTC m=+0.705868368,LastTimestamp:2026-03-18 17:39:43.711660608 +0000 UTC m=+0.903932562,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.482872 master-0 kubenswrapper[4090]: E0318 17:40:01.482710 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e00414228bc01\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e00414228bc01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513394177 +0000 UTC m=+0.705666111,LastTimestamp:2026-03-18 17:39:43.711671919 +0000 UTC m=+0.903943833,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:01.485877 master-0 kubenswrapper[4090]: E0318 17:40:01.485702 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e0041422a8495\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{master-0.189e0041422a8495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513511061 +0000 UTC m=+0.705782985,LastTimestamp:2026-03-18 17:39:43.711682709 +0000 UTC m=+0.903954623,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.490868 master-0 kubenswrapper[4090]: E0318 17:40:01.490697 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e0041422bd21c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e0041422bd21c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513596444 +0000 UTC m=+0.705868368,LastTimestamp:2026-03-18 17:39:43.711693529 +0000 UTC m=+0.903965443,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.494891 master-0 kubenswrapper[4090]: E0318 17:40:01.494712 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e00414228bc01\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e00414228bc01 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513394177 +0000 UTC m=+0.705666111,LastTimestamp:2026-03-18 17:39:43.711789313 +0000 UTC m=+0.904061267,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.501631 master-0 kubenswrapper[4090]: E0318 17:40:01.501430 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e0041422a8495\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e0041422a8495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513511061 +0000 UTC m=+0.705782985,LastTimestamp:2026-03-18 17:39:43.711819095 +0000 UTC m=+0.904091059,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.504361 master-0 kubenswrapper[4090]: E0318 17:40:01.503829 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e0041422bd21c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e0041422bd21c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513596444 +0000 UTC m=+0.705868368,LastTimestamp:2026-03-18 17:39:43.711853046 +0000 UTC m=+0.904125010,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.512671 master-0 kubenswrapper[4090]: E0318 17:40:01.512358 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e00414228bc01\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e00414228bc01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513394177 +0000 UTC m=+0.705666111,LastTimestamp:2026-03-18 17:39:43.712834763 +0000 UTC m=+0.905106677,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.522068 master-0 kubenswrapper[4090]: E0318 17:40:01.521832 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e0041422a8495\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e0041422a8495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:43.513511061 +0000 UTC m=+0.705782985,LastTimestamp:2026-03-18 17:39:43.712854133 +0000 UTC m=+0.905126047,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.532334 master-0 kubenswrapper[4090]: E0318 17:40:01.532161 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e0041a1581c24 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:45.1103345 +0000 UTC m=+2.302606444,LastTimestamp:2026-03-18 17:39:45.1103345 +0000 UTC m=+2.302606444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.540739 master-0 kubenswrapper[4090]: E0318 17:40:01.540542 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-master-0-master-0.189e0041a15995bd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:45.110431165 +0000 UTC m=+2.302703109,LastTimestamp:2026-03-18 17:39:45.110431165 +0000 UTC m=+2.302703109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.548156 master-0 kubenswrapper[4090]: E0318 17:40:01.548008 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e0041a16c0431 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:45.111639089 +0000 UTC m=+2.303911043,LastTimestamp:2026-03-18 17:39:45.111639089 +0000 UTC m=+2.303911043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.554525 master-0 
kubenswrapper[4090]: E0318 17:40:01.554383 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e0041a80a2311 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:45.222664977 +0000 UTC m=+2.414936911,LastTimestamp:2026-03-18 17:39:45.222664977 +0000 UTC m=+2.414936911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.562199 master-0 kubenswrapper[4090]: E0318 17:40:01.561961 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e0041aa3b1854 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:45.259427924 +0000 UTC m=+2.451699848,LastTimestamp:2026-03-18 17:39:45.259427924 +0000 UTC m=+2.451699848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.571155 master-0 kubenswrapper[4090]: E0318 17:40:01.570617 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e0042086b5824 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" in 1.616s (1.616s including waiting). 
Image size: 465090934 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:46.839648292 +0000 UTC m=+4.031920206,LastTimestamp:2026-03-18 17:39:46.839648292 +0000 UTC m=+4.031920206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.581817 master-0 kubenswrapper[4090]: E0318 17:40:01.581651 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e004214f157bf openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:47.049756607 +0000 UTC m=+4.242028521,LastTimestamp:2026-03-18 17:39:47.049756607 +0000 UTC m=+4.242028521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.589590 master-0 kubenswrapper[4090]: E0318 17:40:01.589392 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e004215916296 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:47.060245142 +0000 UTC m=+4.252517056,LastTimestamp:2026-03-18 17:39:47.060245142 +0000 UTC m=+4.252517056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.597631 master-0 kubenswrapper[4090]: E0318 17:40:01.597498 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e00424460a926 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:47.845581094 +0000 UTC m=+5.037853008,LastTimestamp:2026-03-18 17:39:47.845581094 +0000 UTC m=+5.037853008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.604093 master-0 kubenswrapper[4090]: E0318 17:40:01.603962 4090 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e004247c4481a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" in 2.791s (2.791s including waiting). Image size: 529326739 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:47.902441498 +0000 UTC m=+5.094713412,LastTimestamp:2026-03-18 17:39:47.902441498 +0000 UTC m=+5.094713412,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.611108 master-0 kubenswrapper[4090]: E0318 17:40:01.610966 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e00425034414a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:48.043997514 +0000 UTC m=+5.236269428,LastTimestamp:2026-03-18 17:39:48.043997514 +0000 UTC 
m=+5.236269428,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.618311 master-0 kubenswrapper[4090]: E0318 17:40:01.618158 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e0042516c9576 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:48.064466294 +0000 UTC m=+5.256738208,LastTimestamp:2026-03-18 17:39:48.064466294 +0000 UTC m=+5.256738208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.624699 master-0 kubenswrapper[4090]: E0318 17:40:01.624562 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e004252269223 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 
17:39:48.076655139 +0000 UTC m=+5.268927053,LastTimestamp:2026-03-18 17:39:48.076655139 +0000 UTC m=+5.268927053,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.631554 master-0 kubenswrapper[4090]: E0318 17:40:01.631424 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e004252f9ca61 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:48.090497633 +0000 UTC m=+5.282769547,LastTimestamp:2026-03-18 17:39:48.090497633 +0000 UTC m=+5.282769547,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.639473 master-0 kubenswrapper[4090]: E0318 17:40:01.638544 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e00425328fab0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:48.093590192 +0000 UTC m=+5.285862116,LastTimestamp:2026-03-18 17:39:48.093590192 +0000 UTC m=+5.285862116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.646163 master-0 kubenswrapper[4090]: E0318 17:40:01.646023 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e00425ed34f3d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:48.289302333 +0000 UTC m=+5.481574247,LastTimestamp:2026-03-18 17:39:48.289302333 +0000 UTC m=+5.481574247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.647757 master-0 kubenswrapper[4090]: I0318 17:40:01.647705 4090 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:40:01.647940 master-0 kubenswrapper[4090]: I0318 17:40:01.647896 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:40:01.649592 master-0 kubenswrapper[4090]: I0318 17:40:01.649524 4090 kubelet_node_status.go:724] "Recording 
event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:40:01.649740 master-0 kubenswrapper[4090]: I0318 17:40:01.649598 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:40:01.649740 master-0 kubenswrapper[4090]: I0318 17:40:01.649620 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:40:01.653704 master-0 kubenswrapper[4090]: E0318 17:40:01.653524 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e00425fb20c30 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:48.303899696 +0000 UTC m=+5.496171610,LastTimestamp:2026-03-18 17:39:48.303899696 +0000 UTC m=+5.496171610,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.657119 master-0 kubenswrapper[4090]: I0318 17:40:01.657036 4090 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:40:01.662008 master-0 kubenswrapper[4090]: E0318 17:40:01.661756 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e00424460a926\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e00424460a926 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:47.845581094 +0000 UTC m=+5.037853008,LastTimestamp:2026-03-18 17:39:48.639456776 +0000 UTC m=+5.831728690,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.670944 master-0 kubenswrapper[4090]: E0318 17:40:01.670551 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e00425034414a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e00425034414a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:48.043997514 +0000 UTC m=+5.236269428,LastTimestamp:2026-03-18 17:39:48.833163022 +0000 UTC 
m=+6.025434936,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.680341 master-0 kubenswrapper[4090]: E0318 17:40:01.680090 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e0042516c9576\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e0042516c9576 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:48.064466294 +0000 UTC m=+5.256738208,LastTimestamp:2026-03-18 17:39:48.843764349 +0000 UTC m=+6.036036263,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.685725 master-0 kubenswrapper[4090]: E0318 17:40:01.685476 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e0042af8223a4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:49.642937252 +0000 UTC m=+6.835209176,LastTimestamp:2026-03-18 17:39:49.642937252 +0000 UTC m=+6.835209176,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.696467 master-0 kubenswrapper[4090]: I0318 17:40:01.696386 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:40:01.696467 master-0 kubenswrapper[4090]: I0318 17:40:01.696511 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:40:01.697140 master-0 kubenswrapper[4090]: E0318 17:40:01.696786 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e0042af8223a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e0042af8223a4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod 
kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:49.642937252 +0000 UTC m=+6.835209176,LastTimestamp:2026-03-18 17:39:50.645416033 +0000 UTC m=+7.837687947,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.697267 master-0 kubenswrapper[4090]: I0318 17:40:01.697149 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:40:01.697665 master-0 kubenswrapper[4090]: I0318 17:40:01.697592 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:40:01.697665 master-0 kubenswrapper[4090]: I0318 17:40:01.697651 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:40:01.697665 master-0 kubenswrapper[4090]: I0318 17:40:01.697672 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:40:01.698565 master-0 kubenswrapper[4090]: I0318 17:40:01.698520 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:40:01.698882 master-0 kubenswrapper[4090]: I0318 17:40:01.698571 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:40:01.698882 master-0 kubenswrapper[4090]: I0318 17:40:01.698590 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:40:01.701109 master-0 kubenswrapper[4090]: I0318 17:40:01.701039 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:40:01.703546 master-0 
kubenswrapper[4090]: E0318 17:40:01.703240 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e004366c44eb3 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 7.605s (7.605s including waiting). Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:52.717504179 +0000 UTC m=+9.909776123,LastTimestamp:2026-03-18 17:39:52.717504179 +0000 UTC m=+9.909776123,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.709654 master-0 kubenswrapper[4090]: E0318 17:40:01.709464 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e00436a9c1685 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 7.522s (7.522s including waiting). Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:52.781977221 +0000 UTC m=+9.974249175,LastTimestamp:2026-03-18 17:39:52.781977221 +0000 UTC m=+9.974249175,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.716680 master-0 kubenswrapper[4090]: E0318 17:40:01.716481 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e00436b0fd9a5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 7.679s (7.679s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:52.789563813 +0000 UTC m=+9.981835767,LastTimestamp:2026-03-18 17:39:52.789563813 +0000 UTC m=+9.981835767,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.720745 master-0 kubenswrapper[4090]: E0318 17:40:01.720591 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e00437654c2f7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:52.978629367 +0000 UTC m=+10.170901281,LastTimestamp:2026-03-18 17:39:52.978629367 +0000 UTC m=+10.170901281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.727225 master-0 kubenswrapper[4090]: E0318 17:40:01.727023 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e004377342427 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:52.993268775 +0000 UTC m=+10.185540729,LastTimestamp:2026-03-18 17:39:52.993268775 +0000 UTC m=+10.185540729,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.733264 master-0 kubenswrapper[4090]: E0318 17:40:01.733044 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e0043799d1e39 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:53.033702969 +0000 UTC m=+10.225974923,LastTimestamp:2026-03-18 17:39:53.033702969 +0000 UTC m=+10.225974923,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.738540 master-0 kubenswrapper[4090]: E0318 17:40:01.738417 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e004379b9e1bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:53.035588029 +0000 UTC m=+10.227859973,LastTimestamp:2026-03-18 17:39:53.035588029 +0000 UTC m=+10.227859973,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.745468 master-0 kubenswrapper[4090]: E0318 17:40:01.745226 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e00437a563f56 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:53.045835606 +0000 UTC m=+10.238107520,LastTimestamp:2026-03-18 17:39:53.045835606 +0000 UTC m=+10.238107520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.752183 master-0 kubenswrapper[4090]: E0318 17:40:01.752022 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e00437a6bdd08 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:53.047252232 +0000 UTC m=+10.239524146,LastTimestamp:2026-03-18 17:39:53.047252232 +0000 UTC m=+10.239524146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.759689 master-0 kubenswrapper[4090]: E0318 17:40:01.759477 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e00437a7e8763 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:53.048475491 +0000 UTC m=+10.240747435,LastTimestamp:2026-03-18 17:39:53.048475491 +0000 UTC m=+10.240747435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.765851 master-0 kubenswrapper[4090]: E0318 17:40:01.765761 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e00439eefdc48 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:53.659882568 +0000 UTC m=+10.852154522,LastTimestamp:2026-03-18 17:39:53.659882568 +0000 UTC m=+10.852154522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.774082 master-0 kubenswrapper[4090]: E0318 17:40:01.773935 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e0043ac907f74 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created 
container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:53.888513908 +0000 UTC m=+11.080785862,LastTimestamp:2026-03-18 17:39:53.888513908 +0000 UTC m=+11.080785862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.779186 master-0 kubenswrapper[4090]: E0318 17:40:01.778989 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e0043ad83909b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:53.904443547 +0000 UTC m=+11.096715481,LastTimestamp:2026-03-18 17:39:53.904443547 +0000 UTC m=+11.096715481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.784258 master-0 kubenswrapper[4090]: E0318 17:40:01.784150 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e0043ad966990 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:53.905678736 +0000 UTC m=+11.097950680,LastTimestamp:2026-03-18 17:39:53.905678736 +0000 UTC m=+11.097950680,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.789372 master-0 kubenswrapper[4090]: E0318 17:40:01.789214 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e0044667d1718 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\" in 3.959s (3.959s including waiting). 
Image size: 505246690 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:57.007804184 +0000 UTC m=+14.200076098,LastTimestamp:2026-03-18 17:39:57.007804184 +0000 UTC m=+14.200076098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.794855 master-0 kubenswrapper[4090]: E0318 17:40:01.794663 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e004467589753 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" in 3.116s (3.116s including waiting). 
Image size: 514984269 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:57.022189395 +0000 UTC m=+14.214461309,LastTimestamp:2026-03-18 17:39:57.022189395 +0000 UTC m=+14.214461309,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.800075 master-0 kubenswrapper[4090]: E0318 17:40:01.799961 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e0044720ba310 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:57.201695504 +0000 UTC m=+14.393967458,LastTimestamp:2026-03-18 17:39:57.201695504 +0000 UTC m=+14.393967458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.808460 master-0 kubenswrapper[4090]: E0318 17:40:01.808320 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e004472b14fa1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:57.212553121 +0000 UTC m=+14.404825045,LastTimestamp:2026-03-18 17:39:57.212553121 +0000 UTC m=+14.404825045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.814328 master-0 kubenswrapper[4090]: E0318 17:40:01.814140 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e004472ca2c2a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:57.214182442 +0000 UTC m=+14.406454356,LastTimestamp:2026-03-18 17:39:57.214182442 +0000 UTC m=+14.406454356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:01.820222 master-0 kubenswrapper[4090]: E0318 17:40:01.820091 4090 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e00447369c3ed openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:57.224641517 +0000 UTC m=+14.416913421,LastTimestamp:2026-03-18 17:39:57.224641517 +0000 UTC m=+14.416913421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:40:02.464150 master-0 kubenswrapper[4090]: I0318 17:40:02.464065 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 17:40:02.700043 master-0 kubenswrapper[4090]: I0318 17:40:02.699707 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:40:02.700043 master-0 kubenswrapper[4090]: I0318 17:40:02.699792 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:40:02.700969 master-0 kubenswrapper[4090]: I0318 17:40:02.700906 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:40:02.700969 master-0 kubenswrapper[4090]: I0318 17:40:02.700920 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:40:02.700969 master-0 kubenswrapper[4090]: I0318 17:40:02.700950 4090 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:40:02.700969 master-0 kubenswrapper[4090]: I0318 17:40:02.700957 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:40:02.700969 master-0 kubenswrapper[4090]: I0318 17:40:02.700976 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:40:02.701246 master-0 kubenswrapper[4090]: I0318 17:40:02.700963 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:40:03.092914 master-0 kubenswrapper[4090]: E0318 17:40:03.092829 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 18 17:40:03.323450 master-0 kubenswrapper[4090]: I0318 17:40:03.323365 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:40:03.324905 master-0 kubenswrapper[4090]: I0318 17:40:03.324854 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:40:03.324905 master-0 kubenswrapper[4090]: I0318 17:40:03.324909 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:40:03.325059 master-0 kubenswrapper[4090]: I0318 17:40:03.324932 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:40:03.325059 master-0 kubenswrapper[4090]: I0318 17:40:03.324999 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 17:40:03.332409 master-0 kubenswrapper[4090]: E0318 17:40:03.332056 4090 
kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 18 17:40:03.461467 master-0 kubenswrapper[4090]: I0318 17:40:03.461009 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 17:40:03.588070 master-0 kubenswrapper[4090]: E0318 17:40:03.587956 4090 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 17:40:03.696318 master-0 kubenswrapper[4090]: I0318 17:40:03.696053 4090 csr.go:261] certificate signing request csr-chjtp is approved, waiting to be issued Mar 18 17:40:04.465017 master-0 kubenswrapper[4090]: I0318 17:40:04.464946 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 17:40:04.607421 master-0 kubenswrapper[4090]: I0318 17:40:04.606738 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 17:40:04.608810 master-0 kubenswrapper[4090]: I0318 17:40:04.608730 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 17:40:04.608810 master-0 kubenswrapper[4090]: I0318 17:40:04.608811 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 17:40:04.609027 master-0 kubenswrapper[4090]: I0318 17:40:04.608830 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 17:40:04.609490 
master-0 kubenswrapper[4090]: I0318 17:40:04.609437 4090 scope.go:117] "RemoveContainer" containerID="61bd789344076c47f8d7fd3e3af6f341ca32ad16550699ddcda9363e78e1e116"
Mar 18 17:40:04.624660 master-0 kubenswrapper[4090]: E0318 17:40:04.624337 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e00424460a926\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e00424460a926 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:47.845581094 +0000 UTC m=+5.037853008,LastTimestamp:2026-03-18 17:40:04.614254691 +0000 UTC m=+21.806526645,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:04.909040 master-0 kubenswrapper[4090]: E0318 17:40:04.908797 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e00425034414a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e00425034414a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:48.043997514 +0000 UTC m=+5.236269428,LastTimestamp:2026-03-18 17:40:04.899941045 +0000 UTC m=+22.092212999,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:04.938175 master-0 kubenswrapper[4090]: E0318 17:40:04.937936 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e0042516c9576\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e0042516c9576 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:48.064466294 +0000 UTC m=+5.256738208,LastTimestamp:2026-03-18 17:40:04.926536105 +0000 UTC m=+22.118808059,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:05.100978 master-0 kubenswrapper[4090]: W0318 17:40:05.100701 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:40:05.100978 master-0 kubenswrapper[4090]: E0318 17:40:05.100809 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 18 17:40:05.110868 master-0 kubenswrapper[4090]: W0318 17:40:05.110771 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Mar 18 17:40:05.110868 master-0 kubenswrapper[4090]: E0318 17:40:05.110856 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 18 17:40:05.426578 master-0 kubenswrapper[4090]: W0318 17:40:05.426407 4090 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Mar 18 17:40:05.426578 master-0 kubenswrapper[4090]: E0318 17:40:05.426485 4090 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 18 17:40:05.475574 master-0 kubenswrapper[4090]: I0318 17:40:05.475481 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:40:05.711560 master-0 kubenswrapper[4090]: I0318 17:40:05.711371 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 17:40:05.712455 master-0 kubenswrapper[4090]: I0318 17:40:05.712330 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log"
Mar 18 17:40:05.713192 master-0 kubenswrapper[4090]: I0318 17:40:05.713126 4090 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="7b9bddc18b37dc8b661d9d8aa6fa9e351c2bd5fe2e18de32692f6b5ce1bb25c8" exitCode=1
Mar 18 17:40:05.713321 master-0 kubenswrapper[4090]: I0318 17:40:05.713205 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"7b9bddc18b37dc8b661d9d8aa6fa9e351c2bd5fe2e18de32692f6b5ce1bb25c8"}
Mar 18 17:40:05.713321 master-0 kubenswrapper[4090]: I0318 17:40:05.713306 4090 scope.go:117] "RemoveContainer" containerID="61bd789344076c47f8d7fd3e3af6f341ca32ad16550699ddcda9363e78e1e116"
Mar 18 17:40:05.713490 master-0 kubenswrapper[4090]: I0318 17:40:05.713436 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:40:05.715240 master-0 kubenswrapper[4090]: I0318 17:40:05.715187 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:40:05.715358 master-0 kubenswrapper[4090]: I0318 17:40:05.715248 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:40:05.715358 master-0 kubenswrapper[4090]: I0318 17:40:05.715267 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:40:05.716084 master-0 kubenswrapper[4090]: I0318 17:40:05.715797 4090 scope.go:117] "RemoveContainer" containerID="7b9bddc18b37dc8b661d9d8aa6fa9e351c2bd5fe2e18de32692f6b5ce1bb25c8"
Mar 18 17:40:05.716084 master-0 kubenswrapper[4090]: E0318 17:40:05.716026 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 18 17:40:05.724471 master-0 kubenswrapper[4090]: E0318 17:40:05.724256 4090 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e0042af8223a4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e0042af8223a4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:39:49.642937252 +0000 UTC m=+6.835209176,LastTimestamp:2026-03-18 17:40:05.715979265 +0000 UTC m=+22.908251219,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:40:06.463842 master-0 kubenswrapper[4090]: I0318 17:40:06.463730 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:40:06.719662 master-0 kubenswrapper[4090]: I0318 17:40:06.719471 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 17:40:07.211647 master-0 kubenswrapper[4090]: I0318 17:40:07.211541 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:40:07.212251 master-0 kubenswrapper[4090]: I0318 17:40:07.211843 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:40:07.213666 master-0 kubenswrapper[4090]: I0318 17:40:07.213611 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:40:07.213666 master-0 kubenswrapper[4090]: I0318 17:40:07.213666 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:40:07.213814 master-0 kubenswrapper[4090]: I0318 17:40:07.213686 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:40:07.219152 master-0 kubenswrapper[4090]: I0318 17:40:07.219090 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:40:07.464640 master-0 kubenswrapper[4090]: I0318 17:40:07.464444 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:40:07.725498 master-0 kubenswrapper[4090]: I0318 17:40:07.724688 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:40:07.726374 master-0 kubenswrapper[4090]: I0318 17:40:07.726054 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:40:07.726374 master-0 kubenswrapper[4090]: I0318 17:40:07.726152 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:40:07.726374 master-0 kubenswrapper[4090]: I0318 17:40:07.726172 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:40:08.465306 master-0 kubenswrapper[4090]: I0318 17:40:08.465177 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:40:09.465130 master-0 kubenswrapper[4090]: I0318 17:40:09.465043 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:40:10.101824 master-0 kubenswrapper[4090]: E0318 17:40:10.101743 4090 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 18 17:40:10.332981 master-0 kubenswrapper[4090]: I0318 17:40:10.332897 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:40:10.334542 master-0 kubenswrapper[4090]: I0318 17:40:10.334490 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:40:10.334647 master-0 kubenswrapper[4090]: I0318 17:40:10.334547 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:40:10.334647 master-0 kubenswrapper[4090]: I0318 17:40:10.334564 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:40:10.334647 master-0 kubenswrapper[4090]: I0318 17:40:10.334635 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 17:40:10.340528 master-0 kubenswrapper[4090]: E0318 17:40:10.340469 4090 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 18 17:40:10.463411 master-0 kubenswrapper[4090]: I0318 17:40:10.463152 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:40:11.464102 master-0 kubenswrapper[4090]: I0318 17:40:11.464022 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:40:12.466762 master-0 kubenswrapper[4090]: I0318 17:40:12.466690 4090 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 17:40:12.554750 master-0 kubenswrapper[4090]: I0318 17:40:12.554699 4090 csr.go:257] certificate signing request csr-chjtp is issued
Mar 18 17:40:13.318532 master-0 kubenswrapper[4090]: I0318 17:40:13.318471 4090 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 18 17:40:13.473325 master-0 kubenswrapper[4090]: I0318 17:40:13.473238 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:13.491888 master-0 kubenswrapper[4090]: I0318 17:40:13.491838 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:13.550406 master-0 kubenswrapper[4090]: I0318 17:40:13.550353 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:13.556647 master-0 kubenswrapper[4090]: I0318 17:40:13.556590 4090 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 17:31:47 +0000 UTC, rotation deadline is 2026-03-19 14:24:58.514904094 +0000 UTC
Mar 18 17:40:13.556849 master-0 kubenswrapper[4090]: I0318 17:40:13.556821 4090 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h44m44.958095698s for next certificate rotation
Mar 18 17:40:13.589350 master-0 kubenswrapper[4090]: E0318 17:40:13.589202 4090 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 17:40:13.806952 master-0 kubenswrapper[4090]: I0318 17:40:13.806887 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:13.806952 master-0 kubenswrapper[4090]: E0318 17:40:13.806920 4090 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 18 17:40:13.827910 master-0 kubenswrapper[4090]: I0318 17:40:13.827856 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:13.843726 master-0 kubenswrapper[4090]: I0318 17:40:13.843597 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:13.901083 master-0 kubenswrapper[4090]: I0318 17:40:13.901034 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:14.167622 master-0 kubenswrapper[4090]: I0318 17:40:14.167444 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:14.167622 master-0 kubenswrapper[4090]: E0318 17:40:14.167490 4090 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 18 17:40:14.270797 master-0 kubenswrapper[4090]: I0318 17:40:14.270697 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:14.286752 master-0 kubenswrapper[4090]: I0318 17:40:14.286685 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:14.342461 master-0 kubenswrapper[4090]: I0318 17:40:14.342386 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:14.600195 master-0 kubenswrapper[4090]: I0318 17:40:14.600111 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:14.600195 master-0 kubenswrapper[4090]: E0318 17:40:14.600168 4090 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 18 17:40:15.159269 master-0 kubenswrapper[4090]: I0318 17:40:15.159195 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:15.175488 master-0 kubenswrapper[4090]: I0318 17:40:15.175388 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:15.232897 master-0 kubenswrapper[4090]: I0318 17:40:15.232822 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:15.508064 master-0 kubenswrapper[4090]: I0318 17:40:15.507934 4090 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 18 17:40:15.508431 master-0 kubenswrapper[4090]: E0318 17:40:15.508398 4090 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 18 17:40:17.108663 master-0 kubenswrapper[4090]: E0318 17:40:17.108594 4090 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Mar 18 17:40:17.341740 master-0 kubenswrapper[4090]: I0318 17:40:17.341605 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:40:17.344509 master-0 kubenswrapper[4090]: I0318 17:40:17.344419 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:40:17.344509 master-0 kubenswrapper[4090]: I0318 17:40:17.344500 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:40:17.344779 master-0 kubenswrapper[4090]: I0318 17:40:17.344525 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:40:17.344779 master-0 kubenswrapper[4090]: I0318 17:40:17.344607 4090 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 17:40:17.359182 master-0 kubenswrapper[4090]: I0318 17:40:17.359041 4090 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 18 17:40:17.359182 master-0 kubenswrapper[4090]: E0318 17:40:17.359085 4090 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 18 17:40:17.375037 master-0 kubenswrapper[4090]: E0318 17:40:17.374979 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:17.475848 master-0 kubenswrapper[4090]: E0318 17:40:17.475722 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:17.479101 master-0 kubenswrapper[4090]: I0318 17:40:17.479018 4090 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Mar 18 17:40:17.495337 master-0 kubenswrapper[4090]: I0318 17:40:17.495217 4090 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 18 17:40:17.576021 master-0 kubenswrapper[4090]: E0318 17:40:17.575902 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:17.677175 master-0 kubenswrapper[4090]: E0318 17:40:17.676979 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:17.778058 master-0 kubenswrapper[4090]: E0318 17:40:17.777937 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:17.878526 master-0 kubenswrapper[4090]: E0318 17:40:17.878422 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:17.976338 master-0 kubenswrapper[4090]: I0318 17:40:17.976074 4090 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 18 17:40:17.978901 master-0 kubenswrapper[4090]: E0318 17:40:17.978832 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:18.079845 master-0 kubenswrapper[4090]: E0318 17:40:18.079676 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:18.180576 master-0 kubenswrapper[4090]: E0318 17:40:18.180476 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:18.281439 master-0 kubenswrapper[4090]: E0318 17:40:18.281258 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:18.381914 master-0 kubenswrapper[4090]: E0318 17:40:18.381790 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:18.482813 master-0 kubenswrapper[4090]: E0318 17:40:18.482739 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:18.584414 master-0 kubenswrapper[4090]: E0318 17:40:18.584320 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:18.606973 master-0 kubenswrapper[4090]: I0318 17:40:18.606864 4090 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:40:18.608588 master-0 kubenswrapper[4090]: I0318 17:40:18.608551 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:40:18.608672 master-0 kubenswrapper[4090]: I0318 17:40:18.608621 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:40:18.608672 master-0 kubenswrapper[4090]: I0318 17:40:18.608646 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:40:18.609192 master-0 kubenswrapper[4090]: I0318 17:40:18.609159 4090 scope.go:117] "RemoveContainer" containerID="7b9bddc18b37dc8b661d9d8aa6fa9e351c2bd5fe2e18de32692f6b5ce1bb25c8"
Mar 18 17:40:18.609512 master-0 kubenswrapper[4090]: E0318 17:40:18.609469 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 18 17:40:18.684558 master-0 kubenswrapper[4090]: E0318 17:40:18.684463 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:18.785201 master-0 kubenswrapper[4090]: E0318 17:40:18.785121 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:18.885680 master-0 kubenswrapper[4090]: E0318 17:40:18.885487 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:18.986644 master-0 kubenswrapper[4090]: E0318 17:40:18.986574 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:19.087524 master-0 kubenswrapper[4090]: E0318 17:40:19.087462 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:19.188349 master-0 kubenswrapper[4090]: E0318 17:40:19.188155 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:19.225809 master-0 kubenswrapper[4090]: I0318 17:40:19.225739 4090 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 18 17:40:19.288650 master-0 kubenswrapper[4090]: E0318 17:40:19.288561 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:19.388868 master-0 kubenswrapper[4090]: E0318 17:40:19.388723 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:19.489069 master-0 kubenswrapper[4090]: E0318 17:40:19.488908 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:19.589893 master-0 kubenswrapper[4090]: E0318 17:40:19.589816 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:19.690668 master-0 kubenswrapper[4090]: E0318 17:40:19.690575 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:19.791438 master-0 kubenswrapper[4090]: E0318 17:40:19.791236 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:19.892108 master-0 kubenswrapper[4090]: E0318 17:40:19.892011 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:19.910177 master-0 kubenswrapper[4090]: I0318 17:40:19.910110 4090 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 18 17:40:19.992696 master-0 kubenswrapper[4090]: E0318 17:40:19.992613 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:20.093559 master-0 kubenswrapper[4090]: E0318 17:40:20.093456 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:20.194645 master-0 kubenswrapper[4090]: E0318 17:40:20.194500 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:20.295489 master-0 kubenswrapper[4090]: E0318 17:40:20.295376 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:20.396651 master-0 kubenswrapper[4090]: E0318 17:40:20.396492 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:20.497026 master-0 kubenswrapper[4090]: E0318 17:40:20.496940 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:20.597909 master-0 kubenswrapper[4090]: E0318 17:40:20.597761 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:20.698959 master-0 kubenswrapper[4090]: E0318 17:40:20.698744 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:20.799003 master-0 kubenswrapper[4090]: E0318 17:40:20.798898 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:20.899775 master-0 kubenswrapper[4090]: E0318 17:40:20.899695 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:21.000976 master-0 kubenswrapper[4090]: E0318 17:40:21.000812 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:21.101757 master-0 kubenswrapper[4090]: E0318 17:40:21.101645 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:21.202637 master-0 kubenswrapper[4090]: E0318 17:40:21.202468 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:21.303735 master-0 kubenswrapper[4090]: E0318 17:40:21.303629 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:21.404940 master-0 kubenswrapper[4090]: E0318 17:40:21.404840 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:21.505554 master-0 kubenswrapper[4090]: E0318 17:40:21.505462 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:21.606496 master-0 kubenswrapper[4090]: E0318 17:40:21.606268 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:21.707047 master-0 kubenswrapper[4090]: E0318 17:40:21.706980 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:21.807818 master-0 kubenswrapper[4090]: E0318 17:40:21.807730 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:21.907977 master-0 kubenswrapper[4090]: E0318 17:40:21.907831 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:22.009155 master-0 kubenswrapper[4090]: E0318 17:40:22.009035 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:22.110220 master-0 kubenswrapper[4090]: E0318 17:40:22.110135 4090 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 17:40:22.161384 master-0 kubenswrapper[4090]: I0318 17:40:22.161154 4090 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 18 17:40:22.459490 master-0 kubenswrapper[4090]: I0318 17:40:22.459312 4090 apiserver.go:52] "Watching apiserver"
Mar 18 17:40:22.481949 master-0 kubenswrapper[4090]: I0318 17:40:22.481853 4090 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 18 17:40:22.482189 master-0 kubenswrapper[4090]: I0318 17:40:22.482128 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-trlzv","openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj","openshift-network-operator/network-operator-7bd846bfc4-dxxbl"]
Mar 18 17:40:22.482826 master-0 kubenswrapper[4090]: I0318 17:40:22.482780 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-trlzv"
Mar 18 17:40:22.482826 master-0 kubenswrapper[4090]: I0318 17:40:22.482805 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"
Mar 18 17:40:22.482961 master-0 kubenswrapper[4090]: I0318 17:40:22.482896 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl"
Mar 18 17:40:22.486421 master-0 kubenswrapper[4090]: I0318 17:40:22.486362 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 18 17:40:22.486732 master-0 kubenswrapper[4090]: I0318 17:40:22.486684 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 18 17:40:22.486931 master-0 kubenswrapper[4090]: I0318 17:40:22.486886 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 17:40:22.487122 master-0 kubenswrapper[4090]: I0318 17:40:22.487074 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 17:40:22.489023 master-0 kubenswrapper[4090]: I0318 17:40:22.488964 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 18 17:40:22.489491 master-0 kubenswrapper[4090]: I0318 17:40:22.489439 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Mar 18 17:40:22.489828 master-0 kubenswrapper[4090]: I0318 17:40:22.489773 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 18 17:40:22.489940 master-0 kubenswrapper[4090]: I0318 17:40:22.489828 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Mar 18 17:40:22.490135 master-0 kubenswrapper[4090]: I0318 17:40:22.490086 4090 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Mar 18 17:40:22.490318 master-0 kubenswrapper[4090]: I0318 17:40:22.490240 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Mar 18 17:40:22.560807 master-0 kubenswrapper[4090]: I0318 17:40:22.560713 4090 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 18 17:40:22.601882 master-0 kubenswrapper[4090]: I0318 17:40:22.601790 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-resolv-conf\") pod \"assisted-installer-controller-trlzv\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " pod="assisted-installer/assisted-installer-controller-trlzv"
Mar 18 17:40:22.601882 master-0 kubenswrapper[4090]: I0318 17:40:22.601861 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2w69\" (UniqueName: \"kubernetes.io/projected/be6633f4-7370-49b8-a607-6a3fa52a098e-kube-api-access-c2w69\") pod \"assisted-installer-controller-trlzv\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " pod="assisted-installer/assisted-installer-controller-trlzv"
Mar 18 17:40:22.602256 master-0 kubenswrapper[4090]: I0318 17:40:22.601905 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"
Mar 18 17:40:22.602256 master-0 kubenswrapper[4090]: I0318 17:40:22.601945 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"
Mar 18 17:40:22.602256 master-0 kubenswrapper[4090]: I0318 17:40:22.601977 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a02399de-859b-45b1-9b00-18a08f285f39-service-ca\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"
Mar 18 17:40:22.602256 master-0 kubenswrapper[4090]: I0318 17:40:22.602008 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02399de-859b-45b1-9b00-18a08f285f39-kube-api-access\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"
Mar 18 17:40:22.602256 master-0 kubenswrapper[4090]: I0318 17:40:22.602040 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-ca-bundle\") pod \"assisted-installer-controller-trlzv\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " pod="assisted-installer/assisted-installer-controller-trlzv"
Mar 18 17:40:22.602256 master-0 kubenswrapper[4090]: I0318 17:40:22.602072 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"
Mar 18 17:40:22.602256 master-0 kubenswrapper[4090]: I0318 17:40:22.602104 4090
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-metrics-tls\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:40:22.602256 master-0 kubenswrapper[4090]: I0318 17:40:22.602138 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pqww\" (UniqueName: \"kubernetes.io/projected/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-kube-api-access-2pqww\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:40:22.602256 master-0 kubenswrapper[4090]: I0318 17:40:22.602169 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-sno-bootstrap-files\") pod \"assisted-installer-controller-trlzv\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " pod="assisted-installer/assisted-installer-controller-trlzv" Mar 18 17:40:22.602256 master-0 kubenswrapper[4090]: I0318 17:40:22.602202 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-var-run-resolv-conf\") pod \"assisted-installer-controller-trlzv\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " pod="assisted-installer/assisted-installer-controller-trlzv" Mar 18 17:40:22.602256 master-0 kubenswrapper[4090]: I0318 17:40:22.602233 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-host-etc-kube\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.703191 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pqww\" (UniqueName: \"kubernetes.io/projected/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-kube-api-access-2pqww\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.703558 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-sno-bootstrap-files\") pod \"assisted-installer-controller-trlzv\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " pod="assisted-installer/assisted-installer-controller-trlzv" Mar 18 17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.703676 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-sno-bootstrap-files\") pod \"assisted-installer-controller-trlzv\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " pod="assisted-installer/assisted-installer-controller-trlzv" Mar 18 17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.703747 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-var-run-resolv-conf\") pod \"assisted-installer-controller-trlzv\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " pod="assisted-installer/assisted-installer-controller-trlzv" Mar 18 
17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.703786 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-host-etc-kube\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.703818 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-resolv-conf\") pod \"assisted-installer-controller-trlzv\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " pod="assisted-installer/assisted-installer-controller-trlzv" Mar 18 17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.703834 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-var-run-resolv-conf\") pod \"assisted-installer-controller-trlzv\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " pod="assisted-installer/assisted-installer-controller-trlzv" Mar 18 17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.703851 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2w69\" (UniqueName: \"kubernetes.io/projected/be6633f4-7370-49b8-a607-6a3fa52a098e-kube-api-access-c2w69\") pod \"assisted-installer-controller-trlzv\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " pod="assisted-installer/assisted-installer-controller-trlzv" Mar 18 17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.703955 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-resolv-conf\") pod 
\"assisted-installer-controller-trlzv\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " pod="assisted-installer/assisted-installer-controller-trlzv" Mar 18 17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.703997 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.704030 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.704089 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.704082 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-host-etc-kube\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.704137 4090 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a02399de-859b-45b1-9b00-18a08f285f39-service-ca\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.704172 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02399de-859b-45b1-9b00-18a08f285f39-kube-api-access\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:40:22.704889 master-0 kubenswrapper[4090]: I0318 17:40:22.704208 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-ca-bundle\") pod \"assisted-installer-controller-trlzv\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " pod="assisted-installer/assisted-installer-controller-trlzv" Mar 18 17:40:22.709947 master-0 kubenswrapper[4090]: I0318 17:40:22.704250 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:40:22.709947 master-0 kubenswrapper[4090]: E0318 17:40:22.704173 4090 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 17:40:22.709947 master-0 kubenswrapper[4090]: I0318 17:40:22.704338 4090 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-metrics-tls\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:40:22.709947 master-0 kubenswrapper[4090]: E0318 17:40:22.704415 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert podName:a02399de-859b-45b1-9b00-18a08f285f39 nodeName:}" failed. No retries permitted until 2026-03-18 17:40:23.204384521 +0000 UTC m=+40.396656475 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert") pod "cluster-version-operator-56d8475767-lqvvj" (UID: "a02399de-859b-45b1-9b00-18a08f285f39") : secret "cluster-version-operator-serving-cert" not found Mar 18 17:40:22.709947 master-0 kubenswrapper[4090]: I0318 17:40:22.704498 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-ca-bundle\") pod \"assisted-installer-controller-trlzv\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " pod="assisted-installer/assisted-installer-controller-trlzv" Mar 18 17:40:22.709947 master-0 kubenswrapper[4090]: I0318 17:40:22.704670 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:40:22.709947 master-0 kubenswrapper[4090]: I0318 17:40:22.705835 4090 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a02399de-859b-45b1-9b00-18a08f285f39-service-ca\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:40:22.709947 master-0 kubenswrapper[4090]: I0318 17:40:22.706186 4090 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 18 17:40:22.714331 master-0 kubenswrapper[4090]: I0318 17:40:22.713963 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-metrics-tls\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:40:22.981466 master-0 kubenswrapper[4090]: I0318 17:40:22.980133 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02399de-859b-45b1-9b00-18a08f285f39-kube-api-access\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:40:22.988908 master-0 kubenswrapper[4090]: I0318 17:40:22.988844 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pqww\" (UniqueName: \"kubernetes.io/projected/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-kube-api-access-2pqww\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:40:22.995226 master-0 kubenswrapper[4090]: I0318 17:40:22.995156 4090 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-c2w69\" (UniqueName: \"kubernetes.io/projected/be6633f4-7370-49b8-a607-6a3fa52a098e-kube-api-access-c2w69\") pod \"assisted-installer-controller-trlzv\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " pod="assisted-installer/assisted-installer-controller-trlzv" Mar 18 17:40:23.133567 master-0 kubenswrapper[4090]: I0318 17:40:23.133411 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-trlzv" Mar 18 17:40:23.145016 master-0 kubenswrapper[4090]: I0318 17:40:23.144954 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:40:23.164846 master-0 kubenswrapper[4090]: W0318 17:40:23.164793 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14a0661b_7bde_4e22_a9a9_5e3fb24df77f.slice/crio-db5aec9e61cde7bec4f42fa13ed7d132af8e2baca532c12d1638296fcf06dd34 WatchSource:0}: Error finding container db5aec9e61cde7bec4f42fa13ed7d132af8e2baca532c12d1638296fcf06dd34: Status 404 returned error can't find the container with id db5aec9e61cde7bec4f42fa13ed7d132af8e2baca532c12d1638296fcf06dd34 Mar 18 17:40:23.208145 master-0 kubenswrapper[4090]: I0318 17:40:23.208072 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:40:23.208345 master-0 kubenswrapper[4090]: E0318 17:40:23.208308 4090 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 17:40:23.208421 master-0 kubenswrapper[4090]: 
E0318 17:40:23.208385 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert podName:a02399de-859b-45b1-9b00-18a08f285f39 nodeName:}" failed. No retries permitted until 2026-03-18 17:40:24.208360575 +0000 UTC m=+41.400632489 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert") pod "cluster-version-operator-56d8475767-lqvvj" (UID: "a02399de-859b-45b1-9b00-18a08f285f39") : secret "cluster-version-operator-serving-cert" not found Mar 18 17:40:23.773092 master-0 kubenswrapper[4090]: I0318 17:40:23.772959 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" event={"ID":"14a0661b-7bde-4e22-a9a9-5e3fb24df77f","Type":"ContainerStarted","Data":"db5aec9e61cde7bec4f42fa13ed7d132af8e2baca532c12d1638296fcf06dd34"} Mar 18 17:40:23.774808 master-0 kubenswrapper[4090]: I0318 17:40:23.774753 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-trlzv" event={"ID":"be6633f4-7370-49b8-a607-6a3fa52a098e","Type":"ContainerStarted","Data":"1d9e36c9c12a1291e1dc0d36bf35c4d9718af9aa6ca59ee2ad69bf2e6669af26"} Mar 18 17:40:24.215446 master-0 kubenswrapper[4090]: I0318 17:40:24.215387 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:40:24.215677 master-0 kubenswrapper[4090]: E0318 17:40:24.215547 4090 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 
18 17:40:24.215677 master-0 kubenswrapper[4090]: E0318 17:40:24.215608 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert podName:a02399de-859b-45b1-9b00-18a08f285f39 nodeName:}" failed. No retries permitted until 2026-03-18 17:40:26.215589137 +0000 UTC m=+43.407861061 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert") pod "cluster-version-operator-56d8475767-lqvvj" (UID: "a02399de-859b-45b1-9b00-18a08f285f39") : secret "cluster-version-operator-serving-cert" not found Mar 18 17:40:24.551305 master-0 kubenswrapper[4090]: I0318 17:40:24.551237 4090 csr.go:261] certificate signing request csr-fmw4s is approved, waiting to be issued Mar 18 17:40:24.557266 master-0 kubenswrapper[4090]: I0318 17:40:24.557197 4090 csr.go:257] certificate signing request csr-fmw4s is issued Mar 18 17:40:25.559501 master-0 kubenswrapper[4090]: I0318 17:40:25.559426 4090 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 17:31:47 +0000 UTC, rotation deadline is 2026-03-19 12:33:41.685778089 +0000 UTC Mar 18 17:40:25.559501 master-0 kubenswrapper[4090]: I0318 17:40:25.559463 4090 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h53m16.126317537s for next certificate rotation Mar 18 17:40:26.229850 master-0 kubenswrapper[4090]: I0318 17:40:26.229776 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:40:26.230169 master-0 kubenswrapper[4090]: E0318 17:40:26.230004 4090 secret.go:189] Couldn't get secret 
openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 17:40:26.230169 master-0 kubenswrapper[4090]: E0318 17:40:26.230059 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert podName:a02399de-859b-45b1-9b00-18a08f285f39 nodeName:}" failed. No retries permitted until 2026-03-18 17:40:30.230044372 +0000 UTC m=+47.422316286 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert") pod "cluster-version-operator-56d8475767-lqvvj" (UID: "a02399de-859b-45b1-9b00-18a08f285f39") : secret "cluster-version-operator-serving-cert" not found Mar 18 17:40:26.560091 master-0 kubenswrapper[4090]: I0318 17:40:26.560047 4090 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 17:31:47 +0000 UTC, rotation deadline is 2026-03-19 13:16:29.641772323 +0000 UTC Mar 18 17:40:26.560091 master-0 kubenswrapper[4090]: I0318 17:40:26.560085 4090 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h36m3.081689969s for next certificate rotation Mar 18 17:40:30.257695 master-0 kubenswrapper[4090]: I0318 17:40:30.257631 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:40:30.258143 master-0 kubenswrapper[4090]: E0318 17:40:30.257969 4090 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 17:40:30.258194 master-0 kubenswrapper[4090]: E0318 
17:40:30.258153 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert podName:a02399de-859b-45b1-9b00-18a08f285f39 nodeName:}" failed. No retries permitted until 2026-03-18 17:40:38.258086264 +0000 UTC m=+55.450358208 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert") pod "cluster-version-operator-56d8475767-lqvvj" (UID: "a02399de-859b-45b1-9b00-18a08f285f39") : secret "cluster-version-operator-serving-cert" not found Mar 18 17:40:30.793914 master-0 kubenswrapper[4090]: I0318 17:40:30.793745 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" event={"ID":"14a0661b-7bde-4e22-a9a9-5e3fb24df77f","Type":"ContainerStarted","Data":"2832f3b1f879c3c460b4f098e3d00d5c7b56e3d42bebbd0ce9bb5a5d15d6f5ef"} Mar 18 17:40:30.797583 master-0 kubenswrapper[4090]: I0318 17:40:30.797476 4090 generic.go:334] "Generic (PLEG): container finished" podID="be6633f4-7370-49b8-a607-6a3fa52a098e" containerID="5a8c8b2dda583c7f8335b717181054066b935f797ea92e14efe72d4f776836d4" exitCode=0 Mar 18 17:40:30.797744 master-0 kubenswrapper[4090]: I0318 17:40:30.797604 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-trlzv" event={"ID":"be6633f4-7370-49b8-a607-6a3fa52a098e","Type":"ContainerDied","Data":"5a8c8b2dda583c7f8335b717181054066b935f797ea92e14efe72d4f776836d4"} Mar 18 17:40:30.814762 master-0 kubenswrapper[4090]: I0318 17:40:30.814645 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" podStartSLOduration=5.814112251 podStartE2EDuration="12.814616156s" podCreationTimestamp="2026-03-18 17:40:18 +0000 UTC" firstStartedPulling="2026-03-18 17:40:23.16817121 +0000 UTC m=+40.360443114" 
lastFinishedPulling="2026-03-18 17:40:30.168675065 +0000 UTC m=+47.360947019" observedRunningTime="2026-03-18 17:40:30.814164932 +0000 UTC m=+48.006436886" watchObservedRunningTime="2026-03-18 17:40:30.814616156 +0000 UTC m=+48.006888120" Mar 18 17:40:31.623692 master-0 kubenswrapper[4090]: I0318 17:40:31.623634 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Mar 18 17:40:31.625245 master-0 kubenswrapper[4090]: I0318 17:40:31.625191 4090 scope.go:117] "RemoveContainer" containerID="7b9bddc18b37dc8b661d9d8aa6fa9e351c2bd5fe2e18de32692f6b5ce1bb25c8" Mar 18 17:40:31.823089 master-0 kubenswrapper[4090]: I0318 17:40:31.823034 4090 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-trlzv" Mar 18 17:40:31.968977 master-0 kubenswrapper[4090]: I0318 17:40:31.968872 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2w69\" (UniqueName: \"kubernetes.io/projected/be6633f4-7370-49b8-a607-6a3fa52a098e-kube-api-access-c2w69\") pod \"be6633f4-7370-49b8-a607-6a3fa52a098e\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " Mar 18 17:40:31.968977 master-0 kubenswrapper[4090]: I0318 17:40:31.968934 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-sno-bootstrap-files\") pod \"be6633f4-7370-49b8-a607-6a3fa52a098e\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " Mar 18 17:40:31.968977 master-0 kubenswrapper[4090]: I0318 17:40:31.968962 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-ca-bundle\") pod \"be6633f4-7370-49b8-a607-6a3fa52a098e\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") " Mar 18 17:40:31.968977 master-0 
kubenswrapper[4090]: I0318 17:40:31.968990 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-var-run-resolv-conf\") pod \"be6633f4-7370-49b8-a607-6a3fa52a098e\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") "
Mar 18 17:40:31.969525 master-0 kubenswrapper[4090]: I0318 17:40:31.969023 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-resolv-conf\") pod \"be6633f4-7370-49b8-a607-6a3fa52a098e\" (UID: \"be6633f4-7370-49b8-a607-6a3fa52a098e\") "
Mar 18 17:40:31.969525 master-0 kubenswrapper[4090]: I0318 17:40:31.969145 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "be6633f4-7370-49b8-a607-6a3fa52a098e" (UID: "be6633f4-7370-49b8-a607-6a3fa52a098e"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 17:40:31.969525 master-0 kubenswrapper[4090]: I0318 17:40:31.969197 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "be6633f4-7370-49b8-a607-6a3fa52a098e" (UID: "be6633f4-7370-49b8-a607-6a3fa52a098e"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 17:40:31.969525 master-0 kubenswrapper[4090]: I0318 17:40:31.969225 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "be6633f4-7370-49b8-a607-6a3fa52a098e" (UID: "be6633f4-7370-49b8-a607-6a3fa52a098e"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 17:40:31.969525 master-0 kubenswrapper[4090]: I0318 17:40:31.969250 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "be6633f4-7370-49b8-a607-6a3fa52a098e" (UID: "be6633f4-7370-49b8-a607-6a3fa52a098e"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 17:40:31.975753 master-0 kubenswrapper[4090]: I0318 17:40:31.975660 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be6633f4-7370-49b8-a607-6a3fa52a098e-kube-api-access-c2w69" (OuterVolumeSpecName: "kube-api-access-c2w69") pod "be6633f4-7370-49b8-a607-6a3fa52a098e" (UID: "be6633f4-7370-49b8-a607-6a3fa52a098e"). InnerVolumeSpecName "kube-api-access-c2w69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 17:40:32.069821 master-0 kubenswrapper[4090]: I0318 17:40:32.069747 4090 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\""
Mar 18 17:40:32.069821 master-0 kubenswrapper[4090]: I0318 17:40:32.069793 4090 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-resolv-conf\") on node \"master-0\" DevicePath \"\""
Mar 18 17:40:32.069821 master-0 kubenswrapper[4090]: I0318 17:40:32.069812 4090 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2w69\" (UniqueName: \"kubernetes.io/projected/be6633f4-7370-49b8-a607-6a3fa52a098e-kube-api-access-c2w69\") on node \"master-0\" DevicePath \"\""
Mar 18 17:40:32.069821 master-0 kubenswrapper[4090]: I0318 17:40:32.069830 4090 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\""
Mar 18 17:40:32.070259 master-0 kubenswrapper[4090]: I0318 17:40:32.069849 4090 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/be6633f4-7370-49b8-a607-6a3fa52a098e-host-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 17:40:32.538408 master-0 kubenswrapper[4090]: I0318 17:40:32.538070 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-m7wng"]
Mar 18 17:40:32.538672 master-0 kubenswrapper[4090]: E0318 17:40:32.538446 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6633f4-7370-49b8-a607-6a3fa52a098e" containerName="assisted-installer-controller"
Mar 18 17:40:32.538672 master-0 kubenswrapper[4090]: I0318 17:40:32.538460 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6633f4-7370-49b8-a607-6a3fa52a098e" containerName="assisted-installer-controller"
Mar 18 17:40:32.538672 master-0 kubenswrapper[4090]: I0318 17:40:32.538487 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="be6633f4-7370-49b8-a607-6a3fa52a098e" containerName="assisted-installer-controller"
Mar 18 17:40:32.538865 master-0 kubenswrapper[4090]: I0318 17:40:32.538686 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-m7wng"
Mar 18 17:40:32.677080 master-0 kubenswrapper[4090]: I0318 17:40:32.676991 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spd4d\" (UniqueName: \"kubernetes.io/projected/f2eeb961-15e7-4c19-8f37-659cc2cb6539-kube-api-access-spd4d\") pod \"mtu-prober-m7wng\" (UID: \"f2eeb961-15e7-4c19-8f37-659cc2cb6539\") " pod="openshift-network-operator/mtu-prober-m7wng"
Mar 18 17:40:32.777590 master-0 kubenswrapper[4090]: I0318 17:40:32.777540 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spd4d\" (UniqueName: \"kubernetes.io/projected/f2eeb961-15e7-4c19-8f37-659cc2cb6539-kube-api-access-spd4d\") pod \"mtu-prober-m7wng\" (UID: \"f2eeb961-15e7-4c19-8f37-659cc2cb6539\") " pod="openshift-network-operator/mtu-prober-m7wng"
Mar 18 17:40:32.805384 master-0 kubenswrapper[4090]: I0318 17:40:32.805349 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 17:40:32.806102 master-0 kubenswrapper[4090]: I0318 17:40:32.806051 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"43d0194c7af8a79987b694f6624dcbd9737a923184624c98fa52f07e27abb8b3"}
Mar 18 17:40:32.807989 master-0 kubenswrapper[4090]: I0318 17:40:32.807952 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spd4d\" (UniqueName: \"kubernetes.io/projected/f2eeb961-15e7-4c19-8f37-659cc2cb6539-kube-api-access-spd4d\") pod \"mtu-prober-m7wng\" (UID: \"f2eeb961-15e7-4c19-8f37-659cc2cb6539\") " pod="openshift-network-operator/mtu-prober-m7wng"
Mar 18 17:40:32.808262 master-0 kubenswrapper[4090]: I0318 17:40:32.808186 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-trlzv" event={"ID":"be6633f4-7370-49b8-a607-6a3fa52a098e","Type":"ContainerDied","Data":"1d9e36c9c12a1291e1dc0d36bf35c4d9718af9aa6ca59ee2ad69bf2e6669af26"}
Mar 18 17:40:32.808262 master-0 kubenswrapper[4090]: I0318 17:40:32.808244 4090 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-trlzv"
Mar 18 17:40:32.808480 master-0 kubenswrapper[4090]: I0318 17:40:32.808266 4090 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d9e36c9c12a1291e1dc0d36bf35c4d9718af9aa6ca59ee2ad69bf2e6669af26"
Mar 18 17:40:32.827994 master-0 kubenswrapper[4090]: I0318 17:40:32.827893 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=1.827867002 podStartE2EDuration="1.827867002s" podCreationTimestamp="2026-03-18 17:40:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:40:32.827737298 +0000 UTC m=+50.020009242" watchObservedRunningTime="2026-03-18 17:40:32.827867002 +0000 UTC m=+50.020138946"
Mar 18 17:40:32.854441 master-0 kubenswrapper[4090]: I0318 17:40:32.854409 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-m7wng"
Mar 18 17:40:33.813223 master-0 kubenswrapper[4090]: I0318 17:40:33.813117 4090 generic.go:334] "Generic (PLEG): container finished" podID="f2eeb961-15e7-4c19-8f37-659cc2cb6539" containerID="c94a2985fe4117cc55a54b6163c21e92395f0ed45215b4c6fffd52daf31ec16f" exitCode=0
Mar 18 17:40:33.814065 master-0 kubenswrapper[4090]: I0318 17:40:33.813634 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-m7wng" event={"ID":"f2eeb961-15e7-4c19-8f37-659cc2cb6539","Type":"ContainerDied","Data":"c94a2985fe4117cc55a54b6163c21e92395f0ed45215b4c6fffd52daf31ec16f"}
Mar 18 17:40:33.814065 master-0 kubenswrapper[4090]: I0318 17:40:33.813704 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-m7wng" event={"ID":"f2eeb961-15e7-4c19-8f37-659cc2cb6539","Type":"ContainerStarted","Data":"44b61e136de21d6c51f86eb4424513da867694db0dfb6fc4c6a30b8dc6efbae6"}
Mar 18 17:40:34.849950 master-0 kubenswrapper[4090]: I0318 17:40:34.849882 4090 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-m7wng"
Mar 18 17:40:34.996077 master-0 kubenswrapper[4090]: I0318 17:40:34.995920 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spd4d\" (UniqueName: \"kubernetes.io/projected/f2eeb961-15e7-4c19-8f37-659cc2cb6539-kube-api-access-spd4d\") pod \"f2eeb961-15e7-4c19-8f37-659cc2cb6539\" (UID: \"f2eeb961-15e7-4c19-8f37-659cc2cb6539\") "
Mar 18 17:40:35.001732 master-0 kubenswrapper[4090]: I0318 17:40:35.001613 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2eeb961-15e7-4c19-8f37-659cc2cb6539-kube-api-access-spd4d" (OuterVolumeSpecName: "kube-api-access-spd4d") pod "f2eeb961-15e7-4c19-8f37-659cc2cb6539" (UID: "f2eeb961-15e7-4c19-8f37-659cc2cb6539"). InnerVolumeSpecName "kube-api-access-spd4d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 17:40:35.096566 master-0 kubenswrapper[4090]: I0318 17:40:35.096503 4090 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spd4d\" (UniqueName: \"kubernetes.io/projected/f2eeb961-15e7-4c19-8f37-659cc2cb6539-kube-api-access-spd4d\") on node \"master-0\" DevicePath \"\""
Mar 18 17:40:35.822202 master-0 kubenswrapper[4090]: I0318 17:40:35.822157 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-m7wng" event={"ID":"f2eeb961-15e7-4c19-8f37-659cc2cb6539","Type":"ContainerDied","Data":"44b61e136de21d6c51f86eb4424513da867694db0dfb6fc4c6a30b8dc6efbae6"}
Mar 18 17:40:35.822482 master-0 kubenswrapper[4090]: I0318 17:40:35.822469 4090 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44b61e136de21d6c51f86eb4424513da867694db0dfb6fc4c6a30b8dc6efbae6"
Mar 18 17:40:35.822573 master-0 kubenswrapper[4090]: I0318 17:40:35.822311 4090 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-m7wng"
Mar 18 17:40:37.536319 master-0 kubenswrapper[4090]: I0318 17:40:37.535619 4090 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-m7wng"]
Mar 18 17:40:37.542016 master-0 kubenswrapper[4090]: I0318 17:40:37.541963 4090 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-m7wng"]
Mar 18 17:40:37.614003 master-0 kubenswrapper[4090]: I0318 17:40:37.613907 4090 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2eeb961-15e7-4c19-8f37-659cc2cb6539" path="/var/lib/kubelet/pods/f2eeb961-15e7-4c19-8f37-659cc2cb6539/volumes"
Mar 18 17:40:38.322626 master-0 kubenswrapper[4090]: I0318 17:40:38.322422 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"
Mar 18 17:40:38.324219 master-0 kubenswrapper[4090]: E0318 17:40:38.323105 4090 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 17:40:38.324219 master-0 kubenswrapper[4090]: E0318 17:40:38.323202 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert podName:a02399de-859b-45b1-9b00-18a08f285f39 nodeName:}" failed. No retries permitted until 2026-03-18 17:40:54.323176125 +0000 UTC m=+71.515448079 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert") pod "cluster-version-operator-56d8475767-lqvvj" (UID: "a02399de-859b-45b1-9b00-18a08f285f39") : secret "cluster-version-operator-serving-cert" not found
Mar 18 17:40:42.686086 master-0 kubenswrapper[4090]: I0318 17:40:42.685882 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-64tx9"]
Mar 18 17:40:42.686086 master-0 kubenswrapper[4090]: E0318 17:40:42.686025 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2eeb961-15e7-4c19-8f37-659cc2cb6539" containerName="prober"
Mar 18 17:40:42.686086 master-0 kubenswrapper[4090]: I0318 17:40:42.686044 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2eeb961-15e7-4c19-8f37-659cc2cb6539" containerName="prober"
Mar 18 17:40:42.686086 master-0 kubenswrapper[4090]: I0318 17:40:42.686076 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2eeb961-15e7-4c19-8f37-659cc2cb6539" containerName="prober"
Mar 18 17:40:42.686876 master-0 kubenswrapper[4090]: I0318 17:40:42.686411 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.688969 master-0 kubenswrapper[4090]: I0318 17:40:42.688532 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 18 17:40:42.688969 master-0 kubenswrapper[4090]: I0318 17:40:42.688818 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 18 17:40:42.695258 master-0 kubenswrapper[4090]: I0318 17:40:42.693329 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 18 17:40:42.696114 master-0 kubenswrapper[4090]: I0318 17:40:42.696042 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 18 17:40:42.811727 master-0 kubenswrapper[4090]: I0318 17:40:42.811644 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ttbr5"]
Mar 18 17:40:42.812879 master-0 kubenswrapper[4090]: I0318 17:40:42.812844 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 17:40:42.815828 master-0 kubenswrapper[4090]: I0318 17:40:42.815556 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config"
Mar 18 17:40:42.815828 master-0 kubenswrapper[4090]: I0318 17:40:42.815563 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 18 17:40:42.877890 master-0 kubenswrapper[4090]: I0318 17:40:42.877720 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-k8s-cni-cncf-io\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.877890 master-0 kubenswrapper[4090]: I0318 17:40:42.877788 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grnqn\" (UniqueName: \"kubernetes.io/projected/5b0e38f3-3ab5-4519-86a6-68003deb94da-kube-api-access-grnqn\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.877890 master-0 kubenswrapper[4090]: I0318 17:40:42.877833 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-netns\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.878245 master-0 kubenswrapper[4090]: I0318 17:40:42.877944 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-multus\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.878245 master-0 kubenswrapper[4090]: I0318 17:40:42.878048 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-cnibin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.878245 master-0 kubenswrapper[4090]: I0318 17:40:42.878100 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-multus-certs\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.878245 master-0 kubenswrapper[4090]: I0318 17:40:42.878157 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-cni-binary-copy\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.878245 master-0 kubenswrapper[4090]: I0318 17:40:42.878188 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-socket-dir-parent\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.878245 master-0 kubenswrapper[4090]: I0318 17:40:42.878240 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-etc-kubernetes\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.878453 master-0 kubenswrapper[4090]: I0318 17:40:42.878310 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-bin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.878453 master-0 kubenswrapper[4090]: I0318 17:40:42.878347 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-os-release\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.878453 master-0 kubenswrapper[4090]: I0318 17:40:42.878381 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-conf-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.878453 master-0 kubenswrapper[4090]: I0318 17:40:42.878414 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-daemon-config\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.878453 master-0 kubenswrapper[4090]: I0318 17:40:42.878447 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-system-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.878588 master-0 kubenswrapper[4090]: I0318 17:40:42.878486 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.878588 master-0 kubenswrapper[4090]: I0318 17:40:42.878524 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-kubelet\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.878588 master-0 kubenswrapper[4090]: I0318 17:40:42.878579 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-hostroot\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.979289 master-0 kubenswrapper[4090]: I0318 17:40:42.979059 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-cnibin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.979289 master-0 kubenswrapper[4090]: I0318 17:40:42.979155 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-multus-certs\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.979522 master-0 kubenswrapper[4090]: I0318 17:40:42.979394 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-cnibin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.979522 master-0 kubenswrapper[4090]: I0318 17:40:42.979500 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-multus-certs\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.979637 master-0 kubenswrapper[4090]: I0318 17:40:42.979390 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts9b9\" (UniqueName: \"kubernetes.io/projected/fea7b899-fde4-4463-9520-4d433a8ebe21-kube-api-access-ts9b9\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 17:40:42.979739 master-0 kubenswrapper[4090]: I0318 17:40:42.979691 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-socket-dir-parent\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.979804 master-0 kubenswrapper[4090]: I0318 17:40:42.979755 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-etc-kubernetes\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.979859 master-0 kubenswrapper[4090]: I0318 17:40:42.979804 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-os-release\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 17:40:42.979904 master-0 kubenswrapper[4090]: I0318 17:40:42.979851 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-cni-binary-copy\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.979946 master-0 kubenswrapper[4090]: I0318 17:40:42.979897 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-bin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.979992 master-0 kubenswrapper[4090]: I0318 17:40:42.979952 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 17:40:42.980099 master-0 kubenswrapper[4090]: I0318 17:40:42.980060 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-os-release\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.980156 master-0 kubenswrapper[4090]: I0318 17:40:42.980117 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-conf-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.980199 master-0 kubenswrapper[4090]: I0318 17:40:42.980161 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-system-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.980241 master-0 kubenswrapper[4090]: I0318 17:40:42.980211 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.980327 master-0 kubenswrapper[4090]: I0318 17:40:42.980259 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-kubelet\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.980375 master-0 kubenswrapper[4090]: I0318 17:40:42.980332 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-daemon-config\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.980417 master-0 kubenswrapper[4090]: I0318 17:40:42.980386 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 17:40:42.980460 master-0 kubenswrapper[4090]: I0318 17:40:42.980437 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-hostroot\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.980559 master-0 kubenswrapper[4090]: I0318 17:40:42.980512 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-k8s-cni-cncf-io\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.980619 master-0 kubenswrapper[4090]: I0318 17:40:42.980576 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grnqn\" (UniqueName: \"kubernetes.io/projected/5b0e38f3-3ab5-4519-86a6-68003deb94da-kube-api-access-grnqn\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.980663 master-0 kubenswrapper[4090]: I0318 17:40:42.980634 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-cnibin\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 17:40:42.980706 master-0 kubenswrapper[4090]: I0318 17:40:42.980685 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 17:40:42.980762 master-0 kubenswrapper[4090]: I0318 17:40:42.980730 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 17:40:42.980813 master-0 kubenswrapper[4090]: I0318 17:40:42.980779 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-multus\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.980868 master-0 kubenswrapper[4090]: I0318 17:40:42.980842 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-system-cni-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 17:40:42.980915 master-0 kubenswrapper[4090]: I0318 17:40:42.980893 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-netns\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.981072 master-0 kubenswrapper[4090]: I0318 17:40:42.981032 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-netns\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.981214 master-0 kubenswrapper[4090]: I0318 17:40:42.981171 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-socket-dir-parent\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.981324 master-0 kubenswrapper[4090]: I0318 17:40:42.981256 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-etc-kubernetes\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.981570 master-0 kubenswrapper[4090]: I0318 17:40:42.981524 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-conf-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.981829 master-0 kubenswrapper[4090]: I0318 17:40:42.981780 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-k8s-cni-cncf-io\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.982040 master-0 kubenswrapper[4090]: I0318 17:40:42.981981 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-hostroot\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.982103 master-0 kubenswrapper[4090]: I0318 17:40:42.982012 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.982185 master-0 kubenswrapper[4090]: I0318 17:40:42.982165 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-kubelet\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.982477 master-0 kubenswrapper[4090]: I0318 17:40:42.982430 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-system-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.982624 master-0 kubenswrapper[4090]: I0318 17:40:42.982586 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-os-release\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.982719 master-0 kubenswrapper[4090]: I0318 17:40:42.982677 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-multus\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.982949 master-0 kubenswrapper[4090]: I0318 17:40:42.982905 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-bin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.983141 master-0 kubenswrapper[4090]: I0318 17:40:42.983081 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-cni-binary-copy\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:42.983881 master-0 kubenswrapper[4090]: I0318 17:40:42.983827 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-daemon-config\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:43.013378 master-0 kubenswrapper[4090]: I0318 17:40:43.013083 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grnqn\" (UniqueName: \"kubernetes.io/projected/5b0e38f3-3ab5-4519-86a6-68003deb94da-kube-api-access-grnqn\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:40:43.081491 master-0 kubenswrapper[4090]: I0318 17:40:43.081373 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName:
\"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-cnibin\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.081802 master-0 kubenswrapper[4090]: I0318 17:40:43.081520 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-cnibin\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.081802 master-0 kubenswrapper[4090]: I0318 17:40:43.081667 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.081802 master-0 kubenswrapper[4090]: I0318 17:40:43.081751 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.082336 master-0 kubenswrapper[4090]: I0318 17:40:43.082219 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.083026 master-0 kubenswrapper[4090]: I0318 17:40:43.082974 4090 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-system-cni-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.083122 master-0 kubenswrapper[4090]: I0318 17:40:43.083034 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts9b9\" (UniqueName: \"kubernetes.io/projected/fea7b899-fde4-4463-9520-4d433a8ebe21-kube-api-access-ts9b9\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.083360 master-0 kubenswrapper[4090]: I0318 17:40:43.083256 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.083496 master-0 kubenswrapper[4090]: I0318 17:40:43.083368 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-system-cni-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.083496 master-0 kubenswrapper[4090]: I0318 17:40:43.083360 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-os-release\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 
18 17:40:43.083679 master-0 kubenswrapper[4090]: I0318 17:40:43.083507 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-os-release\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.083679 master-0 kubenswrapper[4090]: I0318 17:40:43.083505 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.083679 master-0 kubenswrapper[4090]: I0318 17:40:43.083579 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.084670 master-0 kubenswrapper[4090]: I0318 17:40:43.084596 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.084895 master-0 kubenswrapper[4090]: I0318 17:40:43.084833 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.144421 master-0 kubenswrapper[4090]: I0318 17:40:43.144337 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts9b9\" (UniqueName: \"kubernetes.io/projected/fea7b899-fde4-4463-9520-4d433a8ebe21-kube-api-access-ts9b9\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.299735 master-0 kubenswrapper[4090]: I0318 17:40:43.299554 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-64tx9" Mar 18 17:40:43.318551 master-0 kubenswrapper[4090]: W0318 17:40:43.318475 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b0e38f3_3ab5_4519_86a6_68003deb94da.slice/crio-a94ad7630ca05ccfa8a345aad202de39848166c994875cfad1d5875137f9cf66 WatchSource:0}: Error finding container a94ad7630ca05ccfa8a345aad202de39848166c994875cfad1d5875137f9cf66: Status 404 returned error can't find the container with id a94ad7630ca05ccfa8a345aad202de39848166c994875cfad1d5875137f9cf66 Mar 18 17:40:43.406427 master-0 kubenswrapper[4090]: I0318 17:40:43.406319 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-mfn52"] Mar 18 17:40:43.406834 master-0 kubenswrapper[4090]: I0318 17:40:43.406789 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:43.406977 master-0 kubenswrapper[4090]: E0318 17:40:43.406920 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:40:43.424646 master-0 kubenswrapper[4090]: I0318 17:40:43.424572 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:40:43.486304 master-0 kubenswrapper[4090]: I0318 17:40:43.486154 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:43.486304 master-0 kubenswrapper[4090]: I0318 17:40:43.486211 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgnz6\" (UniqueName: \"kubernetes.io/projected/5a4f94f3-d63a-4869-b723-ae9637610b4b-kube-api-access-hgnz6\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:43.587221 master-0 kubenswrapper[4090]: I0318 17:40:43.587154 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " 
pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:43.587221 master-0 kubenswrapper[4090]: I0318 17:40:43.587205 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgnz6\" (UniqueName: \"kubernetes.io/projected/5a4f94f3-d63a-4869-b723-ae9637610b4b-kube-api-access-hgnz6\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:43.587554 master-0 kubenswrapper[4090]: E0318 17:40:43.587419 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 17:40:43.587554 master-0 kubenswrapper[4090]: E0318 17:40:43.587468 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs podName:5a4f94f3-d63a-4869-b723-ae9637610b4b nodeName:}" failed. No retries permitted until 2026-03-18 17:40:44.08745333 +0000 UTC m=+61.279725244 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs") pod "network-metrics-daemon-mfn52" (UID: "5a4f94f3-d63a-4869-b723-ae9637610b4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 17:40:43.615845 master-0 kubenswrapper[4090]: I0318 17:40:43.615767 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgnz6\" (UniqueName: \"kubernetes.io/projected/5a4f94f3-d63a-4869-b723-ae9637610b4b-kube-api-access-hgnz6\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:43.843066 master-0 kubenswrapper[4090]: I0318 17:40:43.842923 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttbr5" event={"ID":"fea7b899-fde4-4463-9520-4d433a8ebe21","Type":"ContainerStarted","Data":"b4db07afd1a03d8c1456d9bd3e2fc4e66947bcaa942aef9864e3ed3e54889795"} Mar 18 17:40:43.844468 master-0 kubenswrapper[4090]: I0318 17:40:43.844404 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-64tx9" event={"ID":"5b0e38f3-3ab5-4519-86a6-68003deb94da","Type":"ContainerStarted","Data":"a94ad7630ca05ccfa8a345aad202de39848166c994875cfad1d5875137f9cf66"} Mar 18 17:40:44.089944 master-0 kubenswrapper[4090]: I0318 17:40:44.089839 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:44.090468 master-0 kubenswrapper[4090]: E0318 17:40:44.090067 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 
17:40:44.090468 master-0 kubenswrapper[4090]: E0318 17:40:44.090167 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs podName:5a4f94f3-d63a-4869-b723-ae9637610b4b nodeName:}" failed. No retries permitted until 2026-03-18 17:40:45.090139871 +0000 UTC m=+62.282411825 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs") pod "network-metrics-daemon-mfn52" (UID: "5a4f94f3-d63a-4869-b723-ae9637610b4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 17:40:45.097643 master-0 kubenswrapper[4090]: I0318 17:40:45.097581 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:45.098108 master-0 kubenswrapper[4090]: E0318 17:40:45.097768 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 17:40:45.098108 master-0 kubenswrapper[4090]: E0318 17:40:45.097839 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs podName:5a4f94f3-d63a-4869-b723-ae9637610b4b nodeName:}" failed. No retries permitted until 2026-03-18 17:40:47.097818508 +0000 UTC m=+64.290090462 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs") pod "network-metrics-daemon-mfn52" (UID: "5a4f94f3-d63a-4869-b723-ae9637610b4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 17:40:45.607158 master-0 kubenswrapper[4090]: I0318 17:40:45.607078 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:45.607412 master-0 kubenswrapper[4090]: E0318 17:40:45.607353 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:40:46.853683 master-0 kubenswrapper[4090]: I0318 17:40:46.853261 4090 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="88001466f79b98c5070d70264ed313350538e29ea013a0dee819ce0396f0e3a4" exitCode=0 Mar 18 17:40:46.853683 master-0 kubenswrapper[4090]: I0318 17:40:46.853410 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttbr5" event={"ID":"fea7b899-fde4-4463-9520-4d433a8ebe21","Type":"ContainerDied","Data":"88001466f79b98c5070d70264ed313350538e29ea013a0dee819ce0396f0e3a4"} Mar 18 17:40:47.115321 master-0 kubenswrapper[4090]: I0318 17:40:47.115213 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 
17:40:47.115594 master-0 kubenswrapper[4090]: E0318 17:40:47.115510 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 17:40:47.115670 master-0 kubenswrapper[4090]: E0318 17:40:47.115632 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs podName:5a4f94f3-d63a-4869-b723-ae9637610b4b nodeName:}" failed. No retries permitted until 2026-03-18 17:40:51.115599809 +0000 UTC m=+68.307871763 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs") pod "network-metrics-daemon-mfn52" (UID: "5a4f94f3-d63a-4869-b723-ae9637610b4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 17:40:47.607067 master-0 kubenswrapper[4090]: I0318 17:40:47.607008 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:47.607382 master-0 kubenswrapper[4090]: E0318 17:40:47.607127 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:40:49.607144 master-0 kubenswrapper[4090]: I0318 17:40:49.607073 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:49.607872 master-0 kubenswrapper[4090]: E0318 17:40:49.607329 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:40:51.146535 master-0 kubenswrapper[4090]: I0318 17:40:51.146462 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:51.147306 master-0 kubenswrapper[4090]: E0318 17:40:51.146576 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 17:40:51.147306 master-0 kubenswrapper[4090]: E0318 17:40:51.146641 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs podName:5a4f94f3-d63a-4869-b723-ae9637610b4b nodeName:}" failed. No retries permitted until 2026-03-18 17:40:59.146625205 +0000 UTC m=+76.338897119 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs") pod "network-metrics-daemon-mfn52" (UID: "5a4f94f3-d63a-4869-b723-ae9637610b4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 17:40:51.607371 master-0 kubenswrapper[4090]: I0318 17:40:51.607315 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:51.607568 master-0 kubenswrapper[4090]: E0318 17:40:51.607459 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:40:53.607682 master-0 kubenswrapper[4090]: I0318 17:40:53.607636 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:53.608710 master-0 kubenswrapper[4090]: E0318 17:40:53.608667 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:40:54.370232 master-0 kubenswrapper[4090]: I0318 17:40:54.370184 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:40:54.370510 master-0 kubenswrapper[4090]: E0318 17:40:54.370463 4090 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 17:40:54.370654 master-0 kubenswrapper[4090]: E0318 17:40:54.370614 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert podName:a02399de-859b-45b1-9b00-18a08f285f39 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:26.370574314 +0000 UTC m=+103.562846278 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert") pod "cluster-version-operator-56d8475767-lqvvj" (UID: "a02399de-859b-45b1-9b00-18a08f285f39") : secret "cluster-version-operator-serving-cert" not found Mar 18 17:40:54.818506 master-0 kubenswrapper[4090]: I0318 17:40:54.818383 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx"] Mar 18 17:40:54.820706 master-0 kubenswrapper[4090]: I0318 17:40:54.820646 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:40:54.822978 master-0 kubenswrapper[4090]: I0318 17:40:54.822259 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 17:40:54.822978 master-0 kubenswrapper[4090]: I0318 17:40:54.822569 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 17:40:54.823680 master-0 kubenswrapper[4090]: I0318 17:40:54.823259 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 17:40:54.823680 master-0 kubenswrapper[4090]: I0318 17:40:54.823417 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 17:40:54.823858 master-0 kubenswrapper[4090]: I0318 17:40:54.823834 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 17:40:54.875042 master-0 kubenswrapper[4090]: I0318 17:40:54.874982 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:40:54.875042 master-0 kubenswrapper[4090]: I0318 17:40:54.875040 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-env-overrides\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:40:54.875412 
master-0 kubenswrapper[4090]: I0318 17:40:54.875129 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4zcv\" (UniqueName: \"kubernetes.io/projected/7b94e08c-7944-445e-bfb7-6c7c14880c65-kube-api-access-g4zcv\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:40:54.875412 master-0 kubenswrapper[4090]: I0318 17:40:54.875181 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:40:54.975788 master-0 kubenswrapper[4090]: I0318 17:40:54.975711 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:40:54.975788 master-0 kubenswrapper[4090]: I0318 17:40:54.975768 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-env-overrides\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:40:54.976350 master-0 kubenswrapper[4090]: I0318 17:40:54.976006 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4zcv\" (UniqueName: 
\"kubernetes.io/projected/7b94e08c-7944-445e-bfb7-6c7c14880c65-kube-api-access-g4zcv\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:40:54.976350 master-0 kubenswrapper[4090]: I0318 17:40:54.976174 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:40:54.977176 master-0 kubenswrapper[4090]: I0318 17:40:54.977018 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-env-overrides\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:40:54.977616 master-0 kubenswrapper[4090]: I0318 17:40:54.977516 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:40:54.981926 master-0 kubenswrapper[4090]: I0318 17:40:54.981872 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:40:55.009232 master-0 kubenswrapper[4090]: I0318 17:40:55.009165 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4zcv\" (UniqueName: \"kubernetes.io/projected/7b94e08c-7944-445e-bfb7-6c7c14880c65-kube-api-access-g4zcv\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:40:55.023184 master-0 kubenswrapper[4090]: I0318 17:40:55.023146 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w28hf"] Mar 18 17:40:55.023782 master-0 kubenswrapper[4090]: I0318 17:40:55.023752 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.025631 master-0 kubenswrapper[4090]: I0318 17:40:55.025605 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 17:40:55.025796 master-0 kubenswrapper[4090]: I0318 17:40:55.025781 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 17:40:55.076824 master-0 kubenswrapper[4090]: I0318 17:40:55.076678 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-run-netns\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.076824 master-0 kubenswrapper[4090]: I0318 17:40:55.076734 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-var-lib-openvswitch\") pod 
\"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.076824 master-0 kubenswrapper[4090]: I0318 17:40:55.076757 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-log-socket\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.076824 master-0 kubenswrapper[4090]: I0318 17:40:55.076772 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-ovn\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.076824 master-0 kubenswrapper[4090]: I0318 17:40:55.076792 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovn-node-metrics-cert\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.076824 master-0 kubenswrapper[4090]: I0318 17:40:55.076814 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-env-overrides\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.076824 master-0 kubenswrapper[4090]: I0318 17:40:55.076830 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-etc-openvswitch\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.076824 master-0 kubenswrapper[4090]: I0318 17:40:55.076845 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-node-log\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.077444 master-0 kubenswrapper[4090]: I0318 17:40:55.076863 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-kubelet\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.077444 master-0 kubenswrapper[4090]: I0318 17:40:55.076879 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-slash\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.077444 master-0 kubenswrapper[4090]: I0318 17:40:55.076896 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr594\" (UniqueName: \"kubernetes.io/projected/eda1dca7-9f5f-4955-8522-345e4f6e82a2-kube-api-access-fr594\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.077444 master-0 kubenswrapper[4090]: I0318 17:40:55.076925 4090 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-systemd-units\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.077444 master-0 kubenswrapper[4090]: I0318 17:40:55.076965 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovnkube-config\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.077444 master-0 kubenswrapper[4090]: I0318 17:40:55.076986 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-cni-bin\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.077444 master-0 kubenswrapper[4090]: I0318 17:40:55.077004 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovnkube-script-lib\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.077444 master-0 kubenswrapper[4090]: I0318 17:40:55.077023 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.077444 master-0 kubenswrapper[4090]: I0318 17:40:55.077044 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-cni-netd\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.077444 master-0 kubenswrapper[4090]: I0318 17:40:55.077065 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-systemd\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.077444 master-0 kubenswrapper[4090]: I0318 17:40:55.077084 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-openvswitch\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.077444 master-0 kubenswrapper[4090]: I0318 17:40:55.077104 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-run-ovn-kubernetes\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.135170 master-0 kubenswrapper[4090]: I0318 17:40:55.135118 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.179966 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-cni-bin\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.180330 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovnkube-script-lib\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.181218 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.180257 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-cni-bin\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.181259 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-cni-netd\") pod 
\"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.181430 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-cni-netd\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.181582 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.181631 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-systemd\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.181659 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-openvswitch\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.181702 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-openvswitch\") 
pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.181717 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-systemd\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.181760 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-run-ovn-kubernetes\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.181843 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-run-netns\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.181873 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-run-ovn-kubernetes\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.181879 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-var-lib-openvswitch\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.181921 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-run-netns\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183036 master-0 kubenswrapper[4090]: I0318 17:40:55.181920 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-var-lib-openvswitch\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182104 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-log-socket\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182155 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-log-socket\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182201 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-ovn\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182234 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovn-node-metrics-cert\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182267 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-env-overrides\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182322 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-etc-openvswitch\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182349 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-node-log\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182381 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-kubelet\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182413 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-slash\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182478 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-slash\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182493 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-etc-openvswitch\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182497 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-kubelet\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182514 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr594\" (UniqueName: 
\"kubernetes.io/projected/eda1dca7-9f5f-4955-8522-345e4f6e82a2-kube-api-access-fr594\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182519 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-ovn\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182603 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-systemd-units\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182639 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovnkube-config\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182737 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-systemd-units\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.183889 master-0 kubenswrapper[4090]: I0318 17:40:55.182872 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-node-log\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.184586 master-0 kubenswrapper[4090]: I0318 17:40:55.183794 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovnkube-config\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.184586 master-0 kubenswrapper[4090]: I0318 17:40:55.183883 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovnkube-script-lib\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.184586 master-0 kubenswrapper[4090]: I0318 17:40:55.184358 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-env-overrides\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.195139 master-0 kubenswrapper[4090]: I0318 17:40:55.195086 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovn-node-metrics-cert\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.207033 master-0 kubenswrapper[4090]: I0318 17:40:55.206976 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr594\" (UniqueName: 
\"kubernetes.io/projected/eda1dca7-9f5f-4955-8522-345e4f6e82a2-kube-api-access-fr594\") pod \"ovnkube-node-w28hf\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.347561 master-0 kubenswrapper[4090]: I0318 17:40:55.347428 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" Mar 18 17:40:55.607785 master-0 kubenswrapper[4090]: I0318 17:40:55.607688 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:55.607963 master-0 kubenswrapper[4090]: E0318 17:40:55.607821 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:40:56.630795 master-0 kubenswrapper[4090]: W0318 17:40:56.630708 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b94e08c_7944_445e_bfb7_6c7c14880c65.slice/crio-506ad6c80a1b7c5f38a2826db8ee7d4115a8417001018ebd69e1309cf067fc6a WatchSource:0}: Error finding container 506ad6c80a1b7c5f38a2826db8ee7d4115a8417001018ebd69e1309cf067fc6a: Status 404 returned error can't find the container with id 506ad6c80a1b7c5f38a2826db8ee7d4115a8417001018ebd69e1309cf067fc6a Mar 18 17:40:56.877113 master-0 kubenswrapper[4090]: I0318 17:40:56.877009 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-64tx9" event={"ID":"5b0e38f3-3ab5-4519-86a6-68003deb94da","Type":"ContainerStarted","Data":"ae0a1707611e9351aaa40ed742fb913fcc467808ed79a67ebcb8858f6ae2c49a"} Mar 18 17:40:56.880362 master-0 
kubenswrapper[4090]: I0318 17:40:56.878910 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" event={"ID":"7b94e08c-7944-445e-bfb7-6c7c14880c65","Type":"ContainerStarted","Data":"0dba0e7c8f1ce99ade190b2f470fce7c5b787893fde9bf4c21d9c8f36bc07646"} Mar 18 17:40:56.880362 master-0 kubenswrapper[4090]: I0318 17:40:56.878933 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" event={"ID":"7b94e08c-7944-445e-bfb7-6c7c14880c65","Type":"ContainerStarted","Data":"506ad6c80a1b7c5f38a2826db8ee7d4115a8417001018ebd69e1309cf067fc6a"} Mar 18 17:40:56.880362 master-0 kubenswrapper[4090]: I0318 17:40:56.879759 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerStarted","Data":"de191ef380880e41074c916544a090af370497a2183310a181d94c72cfa6a53a"} Mar 18 17:40:56.881933 master-0 kubenswrapper[4090]: I0318 17:40:56.881845 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttbr5" event={"ID":"fea7b899-fde4-4463-9520-4d433a8ebe21","Type":"ContainerStarted","Data":"de9eecaae100670e0a012da69d0c99fbaef83817e585514383e37a63852714c7"} Mar 18 17:40:56.894194 master-0 kubenswrapper[4090]: I0318 17:40:56.894128 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-64tx9" podStartSLOduration=1.474344558 podStartE2EDuration="14.894109233s" podCreationTimestamp="2026-03-18 17:40:42 +0000 UTC" firstStartedPulling="2026-03-18 17:40:43.321181868 +0000 UTC m=+60.513453812" lastFinishedPulling="2026-03-18 17:40:56.740946573 +0000 UTC m=+73.933218487" observedRunningTime="2026-03-18 17:40:56.89396758 +0000 UTC m=+74.086239504" watchObservedRunningTime="2026-03-18 17:40:56.894109233 +0000 UTC m=+74.086381147" Mar 18 17:40:57.607737 master-0 
kubenswrapper[4090]: I0318 17:40:57.607620 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:57.607737 master-0 kubenswrapper[4090]: E0318 17:40:57.607753 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:40:57.887913 master-0 kubenswrapper[4090]: I0318 17:40:57.887226 4090 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="de9eecaae100670e0a012da69d0c99fbaef83817e585514383e37a63852714c7" exitCode=0 Mar 18 17:40:57.887913 master-0 kubenswrapper[4090]: I0318 17:40:57.887329 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttbr5" event={"ID":"fea7b899-fde4-4463-9520-4d433a8ebe21","Type":"ContainerDied","Data":"de9eecaae100670e0a012da69d0c99fbaef83817e585514383e37a63852714c7"} Mar 18 17:40:58.116297 master-0 kubenswrapper[4090]: I0318 17:40:58.110159 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-ctd49"] Mar 18 17:40:58.116297 master-0 kubenswrapper[4090]: I0318 17:40:58.110483 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:40:58.116297 master-0 kubenswrapper[4090]: E0318 17:40:58.110532 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa" Mar 18 17:40:58.215625 master-0 kubenswrapper[4090]: I0318 17:40:58.215494 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s6f5\" (UniqueName: \"kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5\") pod \"network-check-target-ctd49\" (UID: \"978dcca6-b396-463f-9614-9e24194a1aaa\") " pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:40:58.316995 master-0 kubenswrapper[4090]: I0318 17:40:58.316933 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s6f5\" (UniqueName: \"kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5\") pod \"network-check-target-ctd49\" (UID: \"978dcca6-b396-463f-9614-9e24194a1aaa\") " pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:40:58.371040 master-0 kubenswrapper[4090]: E0318 17:40:58.370668 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 17:40:58.371040 master-0 kubenswrapper[4090]: E0318 17:40:58.370707 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 17:40:58.371040 master-0 kubenswrapper[4090]: E0318 17:40:58.370724 4090 projected.go:194] Error preparing data for projected volume kube-api-access-5s6f5 for pod openshift-network-diagnostics/network-check-target-ctd49: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 17:40:58.371040 master-0 kubenswrapper[4090]: E0318 17:40:58.370836 4090 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5 podName:978dcca6-b396-463f-9614-9e24194a1aaa nodeName:}" failed. No retries permitted until 2026-03-18 17:40:58.870815405 +0000 UTC m=+76.063087339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5s6f5" (UniqueName: "kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5") pod "network-check-target-ctd49" (UID: "978dcca6-b396-463f-9614-9e24194a1aaa") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 17:40:58.922023 master-0 kubenswrapper[4090]: I0318 17:40:58.921909 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s6f5\" (UniqueName: \"kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5\") pod \"network-check-target-ctd49\" (UID: \"978dcca6-b396-463f-9614-9e24194a1aaa\") " pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:40:58.922841 master-0 kubenswrapper[4090]: E0318 17:40:58.922172 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 17:40:58.922841 master-0 kubenswrapper[4090]: E0318 17:40:58.922227 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 17:40:58.922841 master-0 kubenswrapper[4090]: E0318 17:40:58.922242 4090 projected.go:194] Error preparing data for projected volume kube-api-access-5s6f5 for pod openshift-network-diagnostics/network-check-target-ctd49: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 17:40:58.922841 master-0 kubenswrapper[4090]: E0318 17:40:58.922403 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5 podName:978dcca6-b396-463f-9614-9e24194a1aaa nodeName:}" failed. No retries permitted until 2026-03-18 17:40:59.92236811 +0000 UTC m=+77.114640024 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5s6f5" (UniqueName: "kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5") pod "network-check-target-ctd49" (UID: "978dcca6-b396-463f-9614-9e24194a1aaa") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 17:40:59.224900 master-0 kubenswrapper[4090]: I0318 17:40:59.224389 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:59.224900 master-0 kubenswrapper[4090]: E0318 17:40:59.224515 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 17:40:59.224900 master-0 kubenswrapper[4090]: E0318 17:40:59.224563 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs podName:5a4f94f3-d63a-4869-b723-ae9637610b4b nodeName:}" failed. No retries permitted until 2026-03-18 17:41:15.224548324 +0000 UTC m=+92.416820238 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs") pod "network-metrics-daemon-mfn52" (UID: "5a4f94f3-d63a-4869-b723-ae9637610b4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 17:40:59.607596 master-0 kubenswrapper[4090]: I0318 17:40:59.607072 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:40:59.607596 master-0 kubenswrapper[4090]: E0318 17:40:59.607196 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa" Mar 18 17:40:59.607596 master-0 kubenswrapper[4090]: I0318 17:40:59.607051 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:40:59.607596 master-0 kubenswrapper[4090]: E0318 17:40:59.607502 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:40:59.948799 master-0 kubenswrapper[4090]: I0318 17:40:59.948619 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s6f5\" (UniqueName: \"kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5\") pod \"network-check-target-ctd49\" (UID: \"978dcca6-b396-463f-9614-9e24194a1aaa\") " pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:40:59.949574 master-0 kubenswrapper[4090]: E0318 17:40:59.948864 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 17:40:59.949574 master-0 kubenswrapper[4090]: E0318 17:40:59.948900 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 17:40:59.949574 master-0 kubenswrapper[4090]: E0318 17:40:59.948916 4090 projected.go:194] Error preparing data for projected volume kube-api-access-5s6f5 for pod openshift-network-diagnostics/network-check-target-ctd49: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 17:40:59.949574 master-0 kubenswrapper[4090]: E0318 17:40:59.949025 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5 podName:978dcca6-b396-463f-9614-9e24194a1aaa nodeName:}" failed. No retries permitted until 2026-03-18 17:41:01.949000517 +0000 UTC m=+79.141272621 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5s6f5" (UniqueName: "kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5") pod "network-check-target-ctd49" (UID: "978dcca6-b396-463f-9614-9e24194a1aaa") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 17:41:00.704986 master-0 kubenswrapper[4090]: I0318 17:41:00.701964 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-7s68k"] Mar 18 17:41:00.704986 master-0 kubenswrapper[4090]: I0318 17:41:00.702886 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:00.706588 master-0 kubenswrapper[4090]: I0318 17:41:00.706466 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 17:41:00.706813 master-0 kubenswrapper[4090]: I0318 17:41:00.706717 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 17:41:00.707855 master-0 kubenswrapper[4090]: I0318 17:41:00.706885 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 17:41:00.708377 master-0 kubenswrapper[4090]: I0318 17:41:00.708240 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 17:41:00.708627 master-0 kubenswrapper[4090]: I0318 17:41:00.708511 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 17:41:00.873929 master-0 kubenswrapper[4090]: I0318 17:41:00.873710 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9875ed82-813c-483d-8471-8f9b74b774ee-webhook-cert\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:00.873929 master-0 kubenswrapper[4090]: I0318 17:41:00.873802 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-ovnkube-identity-cm\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:00.873929 master-0 kubenswrapper[4090]: I0318 17:41:00.873902 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-env-overrides\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:00.873929 master-0 kubenswrapper[4090]: I0318 17:41:00.873928 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76j8w\" (UniqueName: \"kubernetes.io/projected/9875ed82-813c-483d-8471-8f9b74b774ee-kube-api-access-76j8w\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:00.974323 master-0 kubenswrapper[4090]: I0318 17:41:00.974271 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-env-overrides\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " 
pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:00.974775 master-0 kubenswrapper[4090]: I0318 17:41:00.974714 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76j8w\" (UniqueName: \"kubernetes.io/projected/9875ed82-813c-483d-8471-8f9b74b774ee-kube-api-access-76j8w\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:00.975005 master-0 kubenswrapper[4090]: I0318 17:41:00.974808 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9875ed82-813c-483d-8471-8f9b74b774ee-webhook-cert\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:00.975005 master-0 kubenswrapper[4090]: I0318 17:41:00.974857 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-ovnkube-identity-cm\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:00.975194 master-0 kubenswrapper[4090]: I0318 17:41:00.975171 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-env-overrides\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:00.977647 master-0 kubenswrapper[4090]: I0318 17:41:00.975827 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-ovnkube-identity-cm\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:00.986018 master-0 kubenswrapper[4090]: I0318 17:41:00.985895 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9875ed82-813c-483d-8471-8f9b74b774ee-webhook-cert\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:01.091169 master-0 kubenswrapper[4090]: I0318 17:41:01.091105 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76j8w\" (UniqueName: \"kubernetes.io/projected/9875ed82-813c-483d-8471-8f9b74b774ee-kube-api-access-76j8w\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:01.329064 master-0 kubenswrapper[4090]: I0318 17:41:01.328996 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:01.342171 master-0 kubenswrapper[4090]: W0318 17:41:01.342120 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9875ed82_813c_483d_8471_8f9b74b774ee.slice/crio-3d376cebaacd129d547a2b1f5d7c73be7200c80d9de53fd252db3ff4f06f931e WatchSource:0}: Error finding container 3d376cebaacd129d547a2b1f5d7c73be7200c80d9de53fd252db3ff4f06f931e: Status 404 returned error can't find the container with id 3d376cebaacd129d547a2b1f5d7c73be7200c80d9de53fd252db3ff4f06f931e Mar 18 17:41:01.607347 master-0 kubenswrapper[4090]: I0318 17:41:01.607082 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:01.607347 master-0 kubenswrapper[4090]: I0318 17:41:01.607160 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:01.607347 master-0 kubenswrapper[4090]: E0318 17:41:01.607276 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:41:01.607826 master-0 kubenswrapper[4090]: E0318 17:41:01.607398 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa" Mar 18 17:41:01.907400 master-0 kubenswrapper[4090]: I0318 17:41:01.907165 4090 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="3d5985c493f4dbc8ecc65a775668e215bdb1fee71a640074b8e4b3117da777c6" exitCode=0 Mar 18 17:41:01.907400 master-0 kubenswrapper[4090]: I0318 17:41:01.907264 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttbr5" event={"ID":"fea7b899-fde4-4463-9520-4d433a8ebe21","Type":"ContainerDied","Data":"3d5985c493f4dbc8ecc65a775668e215bdb1fee71a640074b8e4b3117da777c6"} Mar 18 17:41:01.908380 master-0 kubenswrapper[4090]: I0318 17:41:01.908348 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7s68k" event={"ID":"9875ed82-813c-483d-8471-8f9b74b774ee","Type":"ContainerStarted","Data":"3d376cebaacd129d547a2b1f5d7c73be7200c80d9de53fd252db3ff4f06f931e"} Mar 18 17:41:01.988603 master-0 kubenswrapper[4090]: I0318 17:41:01.988562 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s6f5\" (UniqueName: \"kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5\") pod \"network-check-target-ctd49\" (UID: \"978dcca6-b396-463f-9614-9e24194a1aaa\") " pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:01.989063 master-0 kubenswrapper[4090]: E0318 17:41:01.988748 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 17:41:01.989063 master-0 kubenswrapper[4090]: E0318 17:41:01.988766 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 
18 17:41:01.989063 master-0 kubenswrapper[4090]: E0318 17:41:01.988777 4090 projected.go:194] Error preparing data for projected volume kube-api-access-5s6f5 for pod openshift-network-diagnostics/network-check-target-ctd49: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 17:41:01.989063 master-0 kubenswrapper[4090]: E0318 17:41:01.988851 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5 podName:978dcca6-b396-463f-9614-9e24194a1aaa nodeName:}" failed. No retries permitted until 2026-03-18 17:41:05.988835474 +0000 UTC m=+83.181107388 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5s6f5" (UniqueName: "kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5") pod "network-check-target-ctd49" (UID: "978dcca6-b396-463f-9614-9e24194a1aaa") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 17:41:03.609878 master-0 kubenswrapper[4090]: I0318 17:41:03.609605 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:03.609878 master-0 kubenswrapper[4090]: I0318 17:41:03.609662 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:03.609878 master-0 kubenswrapper[4090]: E0318 17:41:03.609769 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:41:03.609878 master-0 kubenswrapper[4090]: E0318 17:41:03.609847 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa" Mar 18 17:41:05.610461 master-0 kubenswrapper[4090]: I0318 17:41:05.610388 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:05.610461 master-0 kubenswrapper[4090]: I0318 17:41:05.610439 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:05.611245 master-0 kubenswrapper[4090]: E0318 17:41:05.610518 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:41:05.611245 master-0 kubenswrapper[4090]: E0318 17:41:05.610593 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa" Mar 18 17:41:06.029400 master-0 kubenswrapper[4090]: I0318 17:41:06.029229 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s6f5\" (UniqueName: \"kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5\") pod \"network-check-target-ctd49\" (UID: \"978dcca6-b396-463f-9614-9e24194a1aaa\") " pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:06.029666 master-0 kubenswrapper[4090]: E0318 17:41:06.029490 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 17:41:06.029666 master-0 kubenswrapper[4090]: E0318 17:41:06.029566 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 17:41:06.029666 master-0 kubenswrapper[4090]: E0318 17:41:06.029582 4090 projected.go:194] Error preparing data for projected volume kube-api-access-5s6f5 for pod openshift-network-diagnostics/network-check-target-ctd49: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 17:41:06.029805 master-0 kubenswrapper[4090]: E0318 17:41:06.029669 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5 podName:978dcca6-b396-463f-9614-9e24194a1aaa nodeName:}" failed. No retries permitted until 2026-03-18 17:41:14.029649272 +0000 UTC m=+91.221921186 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5s6f5" (UniqueName: "kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5") pod "network-check-target-ctd49" (UID: "978dcca6-b396-463f-9614-9e24194a1aaa") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 17:41:07.091118 master-0 kubenswrapper[4090]: I0318 17:41:07.080965 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 17:41:07.607487 master-0 kubenswrapper[4090]: I0318 17:41:07.607401 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:07.607487 master-0 kubenswrapper[4090]: I0318 17:41:07.607493 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:07.607838 master-0 kubenswrapper[4090]: E0318 17:41:07.607618 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:41:07.607838 master-0 kubenswrapper[4090]: E0318 17:41:07.607673 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa" Mar 18 17:41:08.978897 master-0 kubenswrapper[4090]: W0318 17:41:08.978801 4090 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 18 17:41:08.981152 master-0 kubenswrapper[4090]: I0318 17:41:08.981063 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 18 17:41:09.607509 master-0 kubenswrapper[4090]: I0318 17:41:09.607439 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:09.607837 master-0 kubenswrapper[4090]: I0318 17:41:09.607519 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:09.607837 master-0 kubenswrapper[4090]: E0318 17:41:09.607736 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:41:09.607837 master-0 kubenswrapper[4090]: E0318 17:41:09.607562 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa" Mar 18 17:41:10.622763 master-0 kubenswrapper[4090]: I0318 17:41:10.622701 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 18 17:41:11.607564 master-0 kubenswrapper[4090]: I0318 17:41:11.607494 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:11.607907 master-0 kubenswrapper[4090]: I0318 17:41:11.607601 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:11.607907 master-0 kubenswrapper[4090]: E0318 17:41:11.607643 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:41:11.607907 master-0 kubenswrapper[4090]: E0318 17:41:11.607787 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa" Mar 18 17:41:13.606781 master-0 kubenswrapper[4090]: I0318 17:41:13.606694 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:13.606781 master-0 kubenswrapper[4090]: I0318 17:41:13.606758 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:13.608128 master-0 kubenswrapper[4090]: E0318 17:41:13.607383 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa" Mar 18 17:41:13.608128 master-0 kubenswrapper[4090]: E0318 17:41:13.607639 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:41:14.030121 master-0 kubenswrapper[4090]: I0318 17:41:14.029944 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s6f5\" (UniqueName: \"kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5\") pod \"network-check-target-ctd49\" (UID: \"978dcca6-b396-463f-9614-9e24194a1aaa\") " pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:14.030457 master-0 kubenswrapper[4090]: E0318 17:41:14.030180 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 17:41:14.030457 master-0 kubenswrapper[4090]: E0318 17:41:14.030209 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 17:41:14.030457 master-0 kubenswrapper[4090]: E0318 17:41:14.030229 4090 projected.go:194] Error preparing data for projected volume kube-api-access-5s6f5 for pod openshift-network-diagnostics/network-check-target-ctd49: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 17:41:14.030457 master-0 kubenswrapper[4090]: E0318 17:41:14.030348 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5 podName:978dcca6-b396-463f-9614-9e24194a1aaa nodeName:}" failed. No retries permitted until 2026-03-18 17:41:30.030325217 +0000 UTC m=+107.222597161 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5s6f5" (UniqueName: "kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5") pod "network-check-target-ctd49" (UID: "978dcca6-b396-463f-9614-9e24194a1aaa") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 17:41:14.917847 master-0 kubenswrapper[4090]: I0318 17:41:14.917748 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=8.917718143 podStartE2EDuration="8.917718143s" podCreationTimestamp="2026-03-18 17:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:41:14.917372266 +0000 UTC m=+92.109644200" watchObservedRunningTime="2026-03-18 17:41:14.917718143 +0000 UTC m=+92.109990057"
Mar 18 17:41:14.918673 master-0 kubenswrapper[4090]: I0318 17:41:14.917880 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=4.917872956 podStartE2EDuration="4.917872956s" podCreationTimestamp="2026-03-18 17:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:41:14.63832898 +0000 UTC m=+91.830600904" watchObservedRunningTime="2026-03-18 17:41:14.917872956 +0000 UTC m=+92.110144870"
Mar 18 17:41:14.934538 master-0 kubenswrapper[4090]: I0318 17:41:14.934396 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=7.934326367 podStartE2EDuration="7.934326367s" podCreationTimestamp="2026-03-18 17:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000
UTC" observedRunningTime="2026-03-18 17:41:14.931059268 +0000 UTC m=+92.123331182" watchObservedRunningTime="2026-03-18 17:41:14.934326367 +0000 UTC m=+92.126598281"
Mar 18 17:41:15.242208 master-0 kubenswrapper[4090]: I0318 17:41:15.242038 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52"
Mar 18 17:41:15.242208 master-0 kubenswrapper[4090]: E0318 17:41:15.242188 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 17:41:15.242555 master-0 kubenswrapper[4090]: E0318 17:41:15.242247 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs podName:5a4f94f3-d63a-4869-b723-ae9637610b4b nodeName:}" failed. No retries permitted until 2026-03-18 17:41:47.242230959 +0000 UTC m=+124.434502863 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs") pod "network-metrics-daemon-mfn52" (UID: "5a4f94f3-d63a-4869-b723-ae9637610b4b") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 18 17:41:15.607944 master-0 kubenswrapper[4090]: I0318 17:41:15.607805 4090 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52"
Mar 18 17:41:15.608125 master-0 kubenswrapper[4090]: E0318 17:41:15.607992 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b"
Mar 18 17:41:15.608699 master-0 kubenswrapper[4090]: I0318 17:41:15.608434 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49"
Mar 18 17:41:15.610838 master-0 kubenswrapper[4090]: E0318 17:41:15.610789 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa"
Mar 18 17:41:16.043010 master-0 kubenswrapper[4090]: I0318 17:41:16.042918 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7s68k" event={"ID":"9875ed82-813c-483d-8471-8f9b74b774ee","Type":"ContainerStarted","Data":"e68d50794bc18082c3da1be336c93731deac7bad0cc308995bf349c65577d305"}
Mar 18 17:41:16.043010 master-0 kubenswrapper[4090]: I0318 17:41:16.042997 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7s68k" event={"ID":"9875ed82-813c-483d-8471-8f9b74b774ee","Type":"ContainerStarted","Data":"8f717b6ed059618cb85325de3ace977a636bd1e6836d5a76c011b0e857bb327e"}
Mar 18 17:41:16.047651 master-0 kubenswrapper[4090]: I0318 17:41:16.047576 4090 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="f487efac96ddc2a1600d3e4cc87d8a45b4d735699e028d3a82f0ba6a3bf9f4b3" exitCode=0
Mar 18 17:41:16.047735 master-0 kubenswrapper[4090]: I0318 17:41:16.047639 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttbr5" event={"ID":"fea7b899-fde4-4463-9520-4d433a8ebe21","Type":"ContainerDied","Data":"f487efac96ddc2a1600d3e4cc87d8a45b4d735699e028d3a82f0ba6a3bf9f4b3"}
Mar 18 17:41:16.050623 master-0 kubenswrapper[4090]: I0318 17:41:16.050571 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" event={"ID":"7b94e08c-7944-445e-bfb7-6c7c14880c65","Type":"ContainerStarted","Data":"10ef0540ad110067bbacf0ae0c51fcdf81ed8a0e014b67c2675d03499d28dfab"}
Mar 18 17:41:16.055711 master-0 kubenswrapper[4090]: I0318 17:41:16.055648 4090 generic.go:334] "Generic (PLEG): container finished" podID="eda1dca7-9f5f-4955-8522-345e4f6e82a2"
containerID="f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55" exitCode=0
Mar 18 17:41:16.055820 master-0 kubenswrapper[4090]: I0318 17:41:16.055712 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerDied","Data":"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55"}
Mar 18 17:41:16.099973 master-0 kubenswrapper[4090]: I0318 17:41:16.098916 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-7s68k" podStartSLOduration=1.743652511 podStartE2EDuration="16.098882168s" podCreationTimestamp="2026-03-18 17:41:00 +0000 UTC" firstStartedPulling="2026-03-18 17:41:01.345190564 +0000 UTC m=+78.537462478" lastFinishedPulling="2026-03-18 17:41:15.700420221 +0000 UTC m=+92.892692135" observedRunningTime="2026-03-18 17:41:16.06805919 +0000 UTC m=+93.260331104" watchObservedRunningTime="2026-03-18 17:41:16.098882168 +0000 UTC m=+93.291154082"
Mar 18 17:41:16.113160 master-0 kubenswrapper[4090]: I0318 17:41:16.113072 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" podStartSLOduration=3.327511252 podStartE2EDuration="22.113046131s" podCreationTimestamp="2026-03-18 17:40:54 +0000 UTC" firstStartedPulling="2026-03-18 17:40:56.840497463 +0000 UTC m=+74.032769377" lastFinishedPulling="2026-03-18 17:41:15.626032342 +0000 UTC m=+92.818304256" observedRunningTime="2026-03-18 17:41:16.11301593 +0000 UTC m=+93.305287844" watchObservedRunningTime="2026-03-18 17:41:16.113046131 +0000 UTC m=+93.305318045"
Mar 18 17:41:17.064395 master-0 kubenswrapper[4090]: I0318 17:41:17.063779 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf"
event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerStarted","Data":"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70"}
Mar 18 17:41:17.064395 master-0 kubenswrapper[4090]: I0318 17:41:17.064302 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerStarted","Data":"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2"}
Mar 18 17:41:17.064395 master-0 kubenswrapper[4090]: I0318 17:41:17.064320 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerStarted","Data":"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5"}
Mar 18 17:41:17.064395 master-0 kubenswrapper[4090]: I0318 17:41:17.064335 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerStarted","Data":"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058"}
Mar 18 17:41:17.064395 master-0 kubenswrapper[4090]: I0318 17:41:17.064344 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerStarted","Data":"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac"}
Mar 18 17:41:17.064395 master-0 kubenswrapper[4090]: I0318 17:41:17.064354 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerStarted","Data":"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c"}
Mar 18 17:41:17.607438 master-0 kubenswrapper[4090]: I0318 17:41:17.606894 4090 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52"
Mar 18 17:41:17.607438 master-0 kubenswrapper[4090]: I0318 17:41:17.606932 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49"
Mar 18 17:41:17.607438 master-0 kubenswrapper[4090]: E0318 17:41:17.607098 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b"
Mar 18 17:41:17.607438 master-0 kubenswrapper[4090]: E0318 17:41:17.607163 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa"
Mar 18 17:41:19.074813 master-0 kubenswrapper[4090]: I0318 17:41:19.074635 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerStarted","Data":"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9"}
Mar 18 17:41:19.608871 master-0 kubenswrapper[4090]: I0318 17:41:19.607595 4090 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49"
Mar 18 17:41:19.608871 master-0 kubenswrapper[4090]: E0318 17:41:19.607727 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa"
Mar 18 17:41:19.608871 master-0 kubenswrapper[4090]: I0318 17:41:19.607620 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52"
Mar 18 17:41:19.608871 master-0 kubenswrapper[4090]: E0318 17:41:19.607908 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b"
Mar 18 17:41:21.607227 master-0 kubenswrapper[4090]: I0318 17:41:21.607110 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52"
Mar 18 17:41:21.608353 master-0 kubenswrapper[4090]: I0318 17:41:21.607126 4090 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49"
Mar 18 17:41:21.608353 master-0 kubenswrapper[4090]: E0318 17:41:21.607372 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b"
Mar 18 17:41:21.608353 master-0 kubenswrapper[4090]: E0318 17:41:21.607418 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa"
Mar 18 17:41:22.088126 master-0 kubenswrapper[4090]: I0318 17:41:22.087542 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerStarted","Data":"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249"}
Mar 18 17:41:23.323562 master-0 kubenswrapper[4090]: I0318 17:41:23.323499 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 18 17:41:23.606781 master-0 kubenswrapper[4090]: I0318 17:41:23.606682 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49"
Mar 18 17:41:23.606781 master-0 kubenswrapper[4090]: I0318 17:41:23.606706 4090 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52"
Mar 18 17:41:23.606955 master-0 kubenswrapper[4090]: E0318 17:41:23.606831 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa"
Mar 18 17:41:23.607032 master-0 kubenswrapper[4090]: E0318 17:41:23.606989 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b"
Mar 18 17:41:24.050035 master-0 kubenswrapper[4090]: I0318 17:41:24.049904 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" podStartSLOduration=10.089186751 podStartE2EDuration="29.049882565s" podCreationTimestamp="2026-03-18 17:40:55 +0000 UTC" firstStartedPulling="2026-03-18 17:40:56.63352942 +0000 UTC m=+73.825801334" lastFinishedPulling="2026-03-18 17:41:15.594225234 +0000 UTC m=+92.786497148" observedRunningTime="2026-03-18 17:41:24.04972651 +0000 UTC m=+101.241998464" watchObservedRunningTime="2026-03-18 17:41:24.049882565 +0000 UTC m=+101.242154479"
Mar 18 17:41:25.264574 master-0 kubenswrapper[4090]: I0318 17:41:25.264483 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=3.264451871 podStartE2EDuration="3.264451871s" podCreationTimestamp="2026-03-18 17:41:22 +0000 UTC"
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:41:24.096114181 +0000 UTC m=+101.288386095" watchObservedRunningTime="2026-03-18 17:41:25.264451871 +0000 UTC m=+102.456723785"
Mar 18 17:41:25.265150 master-0 kubenswrapper[4090]: I0318 17:41:25.264692 4090 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w28hf"]
Mar 18 17:41:25.265150 master-0 kubenswrapper[4090]: I0318 17:41:25.265098 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="ovn-controller" containerID="cri-o://e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c" gracePeriod=30
Mar 18 17:41:25.265436 master-0 kubenswrapper[4090]: I0318 17:41:25.265413 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf"
Mar 18 17:41:25.265436 master-0 kubenswrapper[4090]: I0318 17:41:25.265444 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf"
Mar 18 17:41:25.265550 master-0 kubenswrapper[4090]: I0318 17:41:25.265490 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf"
Mar 18 17:41:25.265750 master-0 kubenswrapper[4090]: I0318 17:41:25.265723 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="sbdb" containerID="cri-o://e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9" gracePeriod=30
Mar 18 17:41:25.265813 master-0 kubenswrapper[4090]: I0318 17:41:25.265780 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf"
podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="nbdb" containerID="cri-o://5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70" gracePeriod=30
Mar 18 17:41:25.265847 master-0 kubenswrapper[4090]: I0318 17:41:25.265823 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="northd" containerID="cri-o://76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2" gracePeriod=30
Mar 18 17:41:25.265877 master-0 kubenswrapper[4090]: I0318 17:41:25.265861 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5" gracePeriod=30
Mar 18 17:41:25.265909 master-0 kubenswrapper[4090]: I0318 17:41:25.265899 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="kube-rbac-proxy-node" containerID="cri-o://dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058" gracePeriod=30
Mar 18 17:41:25.265959 master-0 kubenswrapper[4090]: I0318 17:41:25.265935 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="ovn-acl-logging" containerID="cri-o://6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac" gracePeriod=30
Mar 18 17:41:25.271558 master-0 kubenswrapper[4090]: E0318 17:41:25.271079 4090 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1"
containerID="e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 18 17:41:25.274657 master-0 kubenswrapper[4090]: E0318 17:41:25.274551 4090 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 18 17:41:25.284151 master-0 kubenswrapper[4090]: E0318 17:41:25.284058 4090 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 18 17:41:25.285417 master-0 kubenswrapper[4090]: E0318 17:41:25.284649 4090 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9" cmd=["/bin/bash","-c","set -xeo pipefail\n.
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 18 17:41:25.285417 master-0 kubenswrapper[4090]: E0318 17:41:25.284770 4090 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="sbdb"
Mar 18 17:41:25.302323 master-0 kubenswrapper[4090]: I0318 17:41:25.302075 4090 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="ovnkube-controller" containerID="cri-o://78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249" gracePeriod=30
Mar 18 17:41:25.302922 master-0 kubenswrapper[4090]: E0318 17:41:25.302832 4090 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 18 17:41:25.305622 master-0 kubenswrapper[4090]: E0318 17:41:25.305544 4090 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70" cmd=["/bin/bash","-c","set -xeo pipefail\n.
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 18 17:41:25.305622 master-0 kubenswrapper[4090]: E0318 17:41:25.305606 4090 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="nbdb"
Mar 18 17:41:25.352448 master-0 kubenswrapper[4090]: E0318 17:41:25.352366 4090 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 18 17:41:25.352931 master-0 kubenswrapper[4090]: E0318 17:41:25.352896 4090 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70 is running failed: container process not found" containerID="5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 18 17:41:25.353445 master-0 kubenswrapper[4090]: E0318 17:41:25.353358 4090 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70 is running failed: container process not found" containerID="5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70" cmd=["/bin/bash","-c","set -xeo pipefail\n.
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 18 17:41:25.353542 master-0 kubenswrapper[4090]: E0318 17:41:25.353480 4090 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="nbdb"
Mar 18 17:41:25.353987 master-0 kubenswrapper[4090]: E0318 17:41:25.353811 4090 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 18 17:41:25.359815 master-0 kubenswrapper[4090]: E0318 17:41:25.359655 4090 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 18 17:41:25.363249 master-0 kubenswrapper[4090]: E0318 17:41:25.363170 4090 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9" cmd=["/bin/bash","-c","set -xeo pipefail\n.
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 18 17:41:25.363364 master-0 kubenswrapper[4090]: E0318 17:41:25.363280 4090 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="sbdb"
Mar 18 17:41:25.608096 master-0 kubenswrapper[4090]: I0318 17:41:25.607685 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52"
Mar 18 17:41:25.608243 master-0 kubenswrapper[4090]: E0318 17:41:25.608216 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b"
Mar 18 17:41:25.608910 master-0 kubenswrapper[4090]: I0318 17:41:25.607738 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49"
Mar 18 17:41:25.608910 master-0 kubenswrapper[4090]: E0318 17:41:25.608462 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa"
Mar 18 17:41:25.699058 master-0 kubenswrapper[4090]: I0318 17:41:25.697837 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28hf_eda1dca7-9f5f-4955-8522-345e4f6e82a2/ovnkube-controller/0.log"
Mar 18 17:41:25.702449 master-0 kubenswrapper[4090]: I0318 17:41:25.702401 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28hf_eda1dca7-9f5f-4955-8522-345e4f6e82a2/kube-rbac-proxy-ovn-metrics/0.log"
Mar 18 17:41:25.704342 master-0 kubenswrapper[4090]: I0318 17:41:25.704246 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28hf_eda1dca7-9f5f-4955-8522-345e4f6e82a2/kube-rbac-proxy-node/0.log"
Mar 18 17:41:25.705530 master-0 kubenswrapper[4090]: I0318 17:41:25.705487 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28hf_eda1dca7-9f5f-4955-8522-345e4f6e82a2/ovn-acl-logging/0.log"
Mar 18 17:41:25.706334 master-0 kubenswrapper[4090]: I0318 17:41:25.706220 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28hf_eda1dca7-9f5f-4955-8522-345e4f6e82a2/ovn-controller/0.log"
Mar 18 17:41:25.707535 master-0 kubenswrapper[4090]: I0318 17:41:25.707103 4090 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf"
Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751555 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-cni-bin\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") "
Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751610 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-etc-openvswitch\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") "
Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751678 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-ovn\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") "
Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751702 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-run-ovn-kubernetes\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") "
Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751722 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-run-netns\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") "
Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751739 4090
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-openvswitch\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751756 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-var-lib-openvswitch\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751770 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-slash\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751786 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-log-socket\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751800 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-kubelet\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751828 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr594\" (UniqueName: 
\"kubernetes.io/projected/eda1dca7-9f5f-4955-8522-345e4f6e82a2-kube-api-access-fr594\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751843 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-node-log\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751868 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovn-node-metrics-cert\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751882 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-systemd\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751900 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-cni-netd\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751918 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-systemd-units\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") 
" Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751937 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-env-overrides\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751953 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovnkube-script-lib\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " Mar 18 17:41:25.753541 master-0 kubenswrapper[4090]: I0318 17:41:25.751969 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " Mar 18 17:41:25.754448 master-0 kubenswrapper[4090]: I0318 17:41:25.751987 4090 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovnkube-config\") pod \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\" (UID: \"eda1dca7-9f5f-4955-8522-345e4f6e82a2\") " Mar 18 17:41:25.754448 master-0 kubenswrapper[4090]: I0318 17:41:25.751991 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:41:25.754448 master-0 kubenswrapper[4090]: I0318 17:41:25.752032 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:41:25.754448 master-0 kubenswrapper[4090]: I0318 17:41:25.752086 4090 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.754448 master-0 kubenswrapper[4090]: I0318 17:41:25.752099 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:41:25.754448 master-0 kubenswrapper[4090]: I0318 17:41:25.752062 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-log-socket" (OuterVolumeSpecName: "log-socket") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:41:25.754448 master-0 kubenswrapper[4090]: I0318 17:41:25.752169 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:41:25.754448 master-0 kubenswrapper[4090]: I0318 17:41:25.752179 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:41:25.754448 master-0 kubenswrapper[4090]: I0318 17:41:25.752201 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-slash" (OuterVolumeSpecName: "host-slash") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:41:25.754448 master-0 kubenswrapper[4090]: I0318 17:41:25.752319 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:41:25.754448 master-0 kubenswrapper[4090]: I0318 17:41:25.752372 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:41:25.754448 master-0 kubenswrapper[4090]: I0318 17:41:25.752641 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:41:25.754448 master-0 kubenswrapper[4090]: I0318 17:41:25.752228 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:41:25.754448 master-0 kubenswrapper[4090]: I0318 17:41:25.752306 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:41:25.754448 master-0 kubenswrapper[4090]: I0318 17:41:25.753496 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-node-log" (OuterVolumeSpecName: "node-log") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:41:25.755035 master-0 kubenswrapper[4090]: I0318 17:41:25.753536 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:41:25.755035 master-0 kubenswrapper[4090]: I0318 17:41:25.752203 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:41:25.755035 master-0 kubenswrapper[4090]: I0318 17:41:25.754015 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:41:25.756743 master-0 kubenswrapper[4090]: I0318 17:41:25.756689 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:41:25.758792 master-0 kubenswrapper[4090]: I0318 17:41:25.758740 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 17:41:25.760571 master-0 kubenswrapper[4090]: I0318 17:41:25.760519 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda1dca7-9f5f-4955-8522-345e4f6e82a2-kube-api-access-fr594" (OuterVolumeSpecName: "kube-api-access-fr594") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "kube-api-access-fr594". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:41:25.760983 master-0 kubenswrapper[4090]: I0318 17:41:25.760928 4090 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "eda1dca7-9f5f-4955-8522-345e4f6e82a2" (UID: "eda1dca7-9f5f-4955-8522-345e4f6e82a2"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:41:25.852895 master-0 kubenswrapper[4090]: I0318 17:41:25.852824 4090 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.852895 master-0 kubenswrapper[4090]: I0318 17:41:25.852864 4090 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.852895 master-0 kubenswrapper[4090]: I0318 17:41:25.852879 4090 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.852895 master-0 kubenswrapper[4090]: I0318 17:41:25.852895 4090 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.852895 master-0 kubenswrapper[4090]: I0318 17:41:25.852907 4090 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.852895 master-0 kubenswrapper[4090]: I0318 17:41:25.852921 4090 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.852895 master-0 kubenswrapper[4090]: I0318 17:41:25.852933 4090 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.852895 master-0 kubenswrapper[4090]: I0318 17:41:25.852947 4090 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-slash\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.853469 master-0 kubenswrapper[4090]: I0318 17:41:25.852961 4090 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr594\" (UniqueName: \"kubernetes.io/projected/eda1dca7-9f5f-4955-8522-345e4f6e82a2-kube-api-access-fr594\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.853469 master-0 kubenswrapper[4090]: I0318 17:41:25.852975 4090 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-node-log\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.853469 master-0 kubenswrapper[4090]: I0318 17:41:25.852987 4090 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-run-systemd\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.853469 master-0 kubenswrapper[4090]: I0318 17:41:25.852999 4090 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.853469 master-0 kubenswrapper[4090]: I0318 17:41:25.853011 4090 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.853469 master-0 kubenswrapper[4090]: I0318 17:41:25.853023 4090 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.853469 master-0 kubenswrapper[4090]: I0318 17:41:25.853036 4090 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.853469 master-0 kubenswrapper[4090]: I0318 17:41:25.853050 4090 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-env-overrides\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.853469 master-0 kubenswrapper[4090]: I0318 17:41:25.853062 4090 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.853469 master-0 kubenswrapper[4090]: I0318 17:41:25.853075 4090 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eda1dca7-9f5f-4955-8522-345e4f6e82a2-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.853469 master-0 kubenswrapper[4090]: I0318 17:41:25.853087 4090 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/eda1dca7-9f5f-4955-8522-345e4f6e82a2-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:25.915497 master-0 kubenswrapper[4090]: I0318 17:41:25.915440 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5l4qp"] Mar 18 17:41:25.915941 master-0 kubenswrapper[4090]: E0318 17:41:25.915560 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="ovn-controller" Mar 18 
17:41:25.915941 master-0 kubenswrapper[4090]: I0318 17:41:25.915770 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="ovn-controller" Mar 18 17:41:25.915941 master-0 kubenswrapper[4090]: E0318 17:41:25.915783 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="ovn-acl-logging" Mar 18 17:41:25.915941 master-0 kubenswrapper[4090]: I0318 17:41:25.915791 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="ovn-acl-logging" Mar 18 17:41:25.915941 master-0 kubenswrapper[4090]: E0318 17:41:25.915800 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="kubecfg-setup" Mar 18 17:41:25.915941 master-0 kubenswrapper[4090]: I0318 17:41:25.915948 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="kubecfg-setup" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: E0318 17:41:25.915960 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="kube-rbac-proxy-ovn-metrics" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: I0318 17:41:25.915970 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="kube-rbac-proxy-ovn-metrics" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: E0318 17:41:25.915979 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="nbdb" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: I0318 17:41:25.915987 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="nbdb" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: E0318 17:41:25.915996 4090 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="sbdb" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: I0318 17:41:25.916004 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="sbdb" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: E0318 17:41:25.916013 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="ovnkube-controller" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: I0318 17:41:25.916021 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="ovnkube-controller" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: E0318 17:41:25.916030 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="northd" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: I0318 17:41:25.916040 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="northd" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: E0318 17:41:25.916049 4090 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="kube-rbac-proxy-node" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: I0318 17:41:25.916058 4090 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="kube-rbac-proxy-node" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: I0318 17:41:25.916102 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="kube-rbac-proxy-ovn-metrics" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: I0318 17:41:25.916115 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="kube-rbac-proxy-node" Mar 18 
17:41:25.916217 master-0 kubenswrapper[4090]: I0318 17:41:25.916123 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="nbdb" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: I0318 17:41:25.916131 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="ovn-controller" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: I0318 17:41:25.916141 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="northd" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: I0318 17:41:25.916149 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="ovn-acl-logging" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: I0318 17:41:25.916156 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="sbdb" Mar 18 17:41:25.916217 master-0 kubenswrapper[4090]: I0318 17:41:25.916164 4090 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerName="ovnkube-controller" Mar 18 17:41:25.917707 master-0 kubenswrapper[4090]: I0318 17:41:25.916960 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:25.955986 master-0 kubenswrapper[4090]: I0318 17:41:25.955883 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-slash\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:25.955986 master-0 kubenswrapper[4090]: I0318 17:41:25.955953 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:25.956543 master-0 kubenswrapper[4090]: I0318 17:41:25.956042 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-systemd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:25.956543 master-0 kubenswrapper[4090]: I0318 17:41:25.956088 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-netns\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:25.956543 master-0 kubenswrapper[4090]: I0318 17:41:25.956170 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-bin\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:25.956543 master-0 kubenswrapper[4090]: I0318 17:41:25.956206 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-etc-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:25.956543 master-0 kubenswrapper[4090]: I0318 17:41:25.956239 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-netd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:25.956543 master-0 kubenswrapper[4090]: I0318 17:41:25.956343 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-var-lib-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:25.956543 master-0 kubenswrapper[4090]: I0318 17:41:25.956399 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-systemd-units\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:25.956543 master-0 kubenswrapper[4090]: I0318 17:41:25.956457 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-ovn\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:25.956543 master-0 kubenswrapper[4090]: I0318 17:41:25.956480 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovn-node-metrics-cert\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:25.956543 master-0 kubenswrapper[4090]: I0318 17:41:25.956552 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-node-log\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:25.957423 master-0 kubenswrapper[4090]: I0318 17:41:25.956608 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-config\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:25.957423 master-0 kubenswrapper[4090]: I0318 17:41:25.956680 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-kubelet\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:25.957423 master-0 kubenswrapper[4090]: I0318 17:41:25.956716 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-script-lib\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:25.957423 master-0 kubenswrapper[4090]: I0318 17:41:25.956751 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lwsm\" (UniqueName: \"kubernetes.io/projected/994fff04-c1d7-4f10-8d4b-6b49a6934829-kube-api-access-9lwsm\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:25.957423 master-0 kubenswrapper[4090]: I0318 17:41:25.956803 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:25.957423 master-0 kubenswrapper[4090]: I0318 17:41:25.956835 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-env-overrides\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:25.957423 master-0 kubenswrapper[4090]: I0318 17:41:25.956880 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-log-socket\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:25.957423 master-0 kubenswrapper[4090]: I0318 17:41:25.956907 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.058209 master-0 kubenswrapper[4090]: I0318 17:41:26.058115 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-var-lib-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.058574 master-0 kubenswrapper[4090]: I0318 17:41:26.058236 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-var-lib-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.058574 master-0 kubenswrapper[4090]: I0318 17:41:26.058470 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-systemd-units\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.058574 master-0 kubenswrapper[4090]: I0318 17:41:26.058546 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-systemd-units\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.058754 master-0 kubenswrapper[4090]: I0318 17:41:26.058624 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-ovn\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.058754 master-0 kubenswrapper[4090]: I0318 17:41:26.058672 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovn-node-metrics-cert\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.058754 master-0 kubenswrapper[4090]: I0318 17:41:26.058726 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-ovn\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.058863 master-0 kubenswrapper[4090]: I0318 17:41:26.058835 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-node-log\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.058863 master-0 kubenswrapper[4090]: I0318 17:41:26.058729 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-node-log\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.058973 master-0 kubenswrapper[4090]: I0318 17:41:26.058921 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-config\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.059032 master-0 kubenswrapper[4090]: I0318 17:41:26.058998 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-kubelet\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.059082 master-0 kubenswrapper[4090]: I0318 17:41:26.059042 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-script-lib\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.059120 master-0 kubenswrapper[4090]: I0318 17:41:26.059087 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lwsm\" (UniqueName: \"kubernetes.io/projected/994fff04-c1d7-4f10-8d4b-6b49a6934829-kube-api-access-9lwsm\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.059168 master-0 kubenswrapper[4090]: I0318 17:41:26.059130 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.059215 master-0 kubenswrapper[4090]: I0318 17:41:26.059172 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-env-overrides\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.059685 master-0 kubenswrapper[4090]: I0318 17:41:26.059095 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-kubelet\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.059685 master-0 kubenswrapper[4090]: I0318 17:41:26.059241 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-log-socket\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.059685 master-0 kubenswrapper[4090]: I0318 17:41:26.059397 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.059685 master-0 kubenswrapper[4090]: I0318 17:41:26.059483 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-slash\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.059685 master-0 kubenswrapper[4090]: I0318 17:41:26.059608 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-slash\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.059685 master-0 kubenswrapper[4090]: I0318 17:41:26.059605 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.059685 master-0 kubenswrapper[4090]: I0318 17:41:26.059691 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.059621 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-log-socket\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.059773 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-systemd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.059776 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.059820 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-netns\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.059659 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.060088 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-netns\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.060090 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-systemd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.060150 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-bin\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.060213 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-etc-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.060251 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-netd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.060303 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-bin\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.060408 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-etc-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.060467 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-netd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.060592 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-script-lib\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.060638 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-config\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.060943 master-0 kubenswrapper[4090]: I0318 17:41:26.060682 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-env-overrides\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.063820 master-0 kubenswrapper[4090]: I0318 17:41:26.063750 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovn-node-metrics-cert\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.090342 master-0 kubenswrapper[4090]: I0318 17:41:26.087343 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lwsm\" (UniqueName: \"kubernetes.io/projected/994fff04-c1d7-4f10-8d4b-6b49a6934829-kube-api-access-9lwsm\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.106336 master-0 kubenswrapper[4090]: I0318 17:41:26.104356 4090 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="49a577ee2ac2a159de0067da85450704e2357b11d86f52af06168530d5d8c67c" exitCode=0
Mar 18 17:41:26.106336 master-0 kubenswrapper[4090]: I0318 17:41:26.104501 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttbr5" event={"ID":"fea7b899-fde4-4463-9520-4d433a8ebe21","Type":"ContainerDied","Data":"49a577ee2ac2a159de0067da85450704e2357b11d86f52af06168530d5d8c67c"}
Mar 18 17:41:26.108649 master-0 kubenswrapper[4090]: I0318 17:41:26.108345 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28hf_eda1dca7-9f5f-4955-8522-345e4f6e82a2/ovnkube-controller/0.log"
Mar 18 17:41:26.111551 master-0 kubenswrapper[4090]: I0318 17:41:26.111489 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28hf_eda1dca7-9f5f-4955-8522-345e4f6e82a2/kube-rbac-proxy-ovn-metrics/0.log"
Mar 18 17:41:26.113649 master-0 kubenswrapper[4090]: I0318 17:41:26.112544 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28hf_eda1dca7-9f5f-4955-8522-345e4f6e82a2/kube-rbac-proxy-node/0.log"
Mar 18 17:41:26.113649 master-0 kubenswrapper[4090]: I0318 17:41:26.113467 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28hf_eda1dca7-9f5f-4955-8522-345e4f6e82a2/ovn-acl-logging/0.log"
Mar 18 17:41:26.114434 master-0 kubenswrapper[4090]: I0318 17:41:26.114393 4090 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w28hf_eda1dca7-9f5f-4955-8522-345e4f6e82a2/ovn-controller/0.log"
Mar 18 17:41:26.122437 master-0 kubenswrapper[4090]: I0318 17:41:26.122241 4090 generic.go:334] "Generic (PLEG): container finished" podID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerID="78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249" exitCode=2
Mar 18 17:41:26.122437 master-0 kubenswrapper[4090]: I0318 17:41:26.122330 4090 generic.go:334] "Generic (PLEG): container finished" podID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerID="e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9" exitCode=0
Mar 18 17:41:26.122437 master-0 kubenswrapper[4090]: I0318 17:41:26.122359 4090 generic.go:334] "Generic (PLEG): container finished" podID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerID="5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70" exitCode=0
Mar 18 17:41:26.122437 master-0 kubenswrapper[4090]: I0318 17:41:26.122382 4090 generic.go:334] "Generic (PLEG): container finished" podID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerID="76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2" exitCode=0
Mar 18 17:41:26.122437 master-0 kubenswrapper[4090]: I0318 17:41:26.122402 4090 generic.go:334] "Generic (PLEG): container finished" podID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerID="a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5" exitCode=143
Mar 18 17:41:26.122437 master-0 kubenswrapper[4090]: I0318 17:41:26.122423 4090 generic.go:334] "Generic (PLEG): container finished" podID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerID="dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058" exitCode=143
Mar 18 17:41:26.122437 master-0 kubenswrapper[4090]: I0318 17:41:26.122441 4090 generic.go:334] "Generic (PLEG): container finished" podID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerID="6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac" exitCode=143
Mar 18 17:41:26.122437 master-0 kubenswrapper[4090]: I0318 17:41:26.122459 4090 generic.go:334] "Generic (PLEG): container finished" podID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" containerID="e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c" exitCode=143
Mar 18 17:41:26.123023 master-0 kubenswrapper[4090]: I0318 17:41:26.122506 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerDied","Data":"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249"}
Mar 18 17:41:26.123023 master-0 kubenswrapper[4090]: I0318 17:41:26.122564 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerDied","Data":"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9"}
Mar 18 17:41:26.123023 master-0 kubenswrapper[4090]: I0318 17:41:26.122596 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerDied","Data":"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70"}
Mar 18 17:41:26.123023 master-0 kubenswrapper[4090]: I0318 17:41:26.122624 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerDied","Data":"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2"}
Mar 18 17:41:26.123023 master-0 kubenswrapper[4090]: I0318 17:41:26.122652 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerDied","Data":"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5"}
Mar 18 17:41:26.123023 master-0 kubenswrapper[4090]: I0318 17:41:26.122683 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerDied","Data":"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058"}
Mar 18 17:41:26.123023 master-0 kubenswrapper[4090]: I0318 17:41:26.122712 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac"}
Mar 18 17:41:26.123023 master-0 kubenswrapper[4090]: I0318 17:41:26.122914 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c"}
Mar 18 17:41:26.123023 master-0 kubenswrapper[4090]: I0318 17:41:26.122930 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55"}
Mar 18 17:41:26.123023 master-0 kubenswrapper[4090]: I0318 17:41:26.122951 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerDied","Data":"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac"}
Mar 18 17:41:26.123023 master-0 kubenswrapper[4090]: I0318 17:41:26.122974 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249"}
Mar 18 17:41:26.123023 master-0 kubenswrapper[4090]: I0318 17:41:26.122992 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9"}
Mar 18 17:41:26.123023 master-0 kubenswrapper[4090]: I0318 17:41:26.123006 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70"}
Mar 18 17:41:26.123023 master-0 kubenswrapper[4090]: I0318 17:41:26.123020 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2"}
Mar 18 17:41:26.123023 master-0 kubenswrapper[4090]: I0318 17:41:26.123036 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123052 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123068 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123082 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123096 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123115 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerDied","Data":"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123137 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123155 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123170 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123185 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123201 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123215 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123236 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123251 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123264 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123314 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf" event={"ID":"eda1dca7-9f5f-4955-8522-345e4f6e82a2","Type":"ContainerDied","Data":"de191ef380880e41074c916544a090af370497a2183310a181d94c72cfa6a53a"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123340 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123356 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123371 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123386 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123399 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123414 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123428 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123443 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123458 4090 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55"}
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123491 4090 scope.go:117] "RemoveContainer" containerID="78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249"
Mar 18 17:41:26.127710 master-0 kubenswrapper[4090]: I0318 17:41:26.123919 4090 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w28hf"
Mar 18 17:41:26.149554 master-0 kubenswrapper[4090]: I0318 17:41:26.149494 4090 scope.go:117] "RemoveContainer" containerID="e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9"
Mar 18 17:41:26.176629 master-0 kubenswrapper[4090]: I0318 17:41:26.173709 4090 scope.go:117] "RemoveContainer" containerID="5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70"
Mar 18 17:41:26.179308 master-0 kubenswrapper[4090]: I0318 17:41:26.179196 4090 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w28hf"]
Mar 18 17:41:26.193692 master-0 kubenswrapper[4090]: I0318 17:41:26.191208 4090 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w28hf"]
Mar 18 17:41:26.195067 master-0 kubenswrapper[4090]: I0318 17:41:26.195002 4090 scope.go:117] "RemoveContainer" containerID="76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2"
Mar 18 17:41:26.210732 master-0 kubenswrapper[4090]: I0318 17:41:26.210684 4090 scope.go:117] "RemoveContainer" containerID="a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5"
Mar 18 17:41:26.229476 master-0 kubenswrapper[4090]: I0318 17:41:26.229435 4090 scope.go:117] "RemoveContainer" containerID="dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058"
Mar 18 17:41:26.235323 master-0 kubenswrapper[4090]: I0318 17:41:26.233292 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:26.245531 master-0 kubenswrapper[4090]: I0318 17:41:26.245488 4090 scope.go:117] "RemoveContainer" containerID="6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac"
Mar 18 17:41:26.257105 master-0 kubenswrapper[4090]: I0318 17:41:26.257064 4090 scope.go:117] "RemoveContainer" containerID="e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c"
Mar 18 17:41:26.278609 master-0 kubenswrapper[4090]: I0318 17:41:26.278552 4090 scope.go:117] "RemoveContainer" containerID="f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55"
Mar 18 17:41:26.292597 master-0 kubenswrapper[4090]: I0318 17:41:26.292389 4090 scope.go:117] "RemoveContainer" containerID="78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249"
Mar 18 17:41:26.293681 master-0 kubenswrapper[4090]: E0318 17:41:26.292857 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249\": container with ID starting with 78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249 not found: ID does not exist" containerID="78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249"
Mar 18 17:41:26.293681 master-0 kubenswrapper[4090]: I0318 17:41:26.292894 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249"} err="failed to get container status \"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249\": rpc error: code = NotFound desc = could not find container \"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249\": container with ID starting with 78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249 not found: ID does not exist"
Mar 18 17:41:26.293681 master-0 kubenswrapper[4090]: I0318 17:41:26.292921
4090 scope.go:117] "RemoveContainer" containerID="e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9" Mar 18 17:41:26.293681 master-0 kubenswrapper[4090]: E0318 17:41:26.293219 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9\": container with ID starting with e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9 not found: ID does not exist" containerID="e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9" Mar 18 17:41:26.293681 master-0 kubenswrapper[4090]: I0318 17:41:26.293241 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9"} err="failed to get container status \"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9\": rpc error: code = NotFound desc = could not find container \"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9\": container with ID starting with e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9 not found: ID does not exist" Mar 18 17:41:26.293681 master-0 kubenswrapper[4090]: I0318 17:41:26.293258 4090 scope.go:117] "RemoveContainer" containerID="5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: E0318 17:41:26.293700 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70\": container with ID starting with 5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70 not found: ID does not exist" containerID="5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: I0318 17:41:26.293721 4090 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70"} err="failed to get container status \"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70\": rpc error: code = NotFound desc = could not find container \"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70\": container with ID starting with 5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70 not found: ID does not exist" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: I0318 17:41:26.293735 4090 scope.go:117] "RemoveContainer" containerID="76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: E0318 17:41:26.294037 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2\": container with ID starting with 76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2 not found: ID does not exist" containerID="76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: I0318 17:41:26.294074 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2"} err="failed to get container status \"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2\": rpc error: code = NotFound desc = could not find container \"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2\": container with ID starting with 76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2 not found: ID does not exist" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: I0318 17:41:26.294101 4090 scope.go:117] "RemoveContainer" containerID="a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: 
E0318 17:41:26.294350 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5\": container with ID starting with a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5 not found: ID does not exist" containerID="a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: I0318 17:41:26.294371 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5"} err="failed to get container status \"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5\": rpc error: code = NotFound desc = could not find container \"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5\": container with ID starting with a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5 not found: ID does not exist" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: I0318 17:41:26.294387 4090 scope.go:117] "RemoveContainer" containerID="dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: E0318 17:41:26.294723 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058\": container with ID starting with dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058 not found: ID does not exist" containerID="dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: I0318 17:41:26.294763 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058"} err="failed to get container status 
\"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058\": rpc error: code = NotFound desc = could not find container \"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058\": container with ID starting with dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058 not found: ID does not exist" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: I0318 17:41:26.294778 4090 scope.go:117] "RemoveContainer" containerID="6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: E0318 17:41:26.294991 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac\": container with ID starting with 6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac not found: ID does not exist" containerID="6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: I0318 17:41:26.295007 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac"} err="failed to get container status \"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac\": rpc error: code = NotFound desc = could not find container \"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac\": container with ID starting with 6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac not found: ID does not exist" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: I0318 17:41:26.295020 4090 scope.go:117] "RemoveContainer" containerID="e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c" Mar 18 17:41:26.295782 master-0 kubenswrapper[4090]: E0318 17:41:26.295313 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c\": container with ID starting with e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c not found: ID does not exist" containerID="e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c" Mar 18 17:41:26.296699 master-0 kubenswrapper[4090]: I0318 17:41:26.295332 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c"} err="failed to get container status \"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c\": rpc error: code = NotFound desc = could not find container \"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c\": container with ID starting with e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c not found: ID does not exist" Mar 18 17:41:26.296699 master-0 kubenswrapper[4090]: I0318 17:41:26.295346 4090 scope.go:117] "RemoveContainer" containerID="f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55" Mar 18 17:41:26.296699 master-0 kubenswrapper[4090]: E0318 17:41:26.295591 4090 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55\": container with ID starting with f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55 not found: ID does not exist" containerID="f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55" Mar 18 17:41:26.296699 master-0 kubenswrapper[4090]: I0318 17:41:26.295607 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55"} err="failed to get container status \"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55\": rpc error: code = NotFound desc = could not find container 
\"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55\": container with ID starting with f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55 not found: ID does not exist" Mar 18 17:41:26.296699 master-0 kubenswrapper[4090]: I0318 17:41:26.295619 4090 scope.go:117] "RemoveContainer" containerID="78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249" Mar 18 17:41:26.296699 master-0 kubenswrapper[4090]: I0318 17:41:26.295840 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249"} err="failed to get container status \"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249\": rpc error: code = NotFound desc = could not find container \"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249\": container with ID starting with 78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249 not found: ID does not exist" Mar 18 17:41:26.296699 master-0 kubenswrapper[4090]: I0318 17:41:26.295856 4090 scope.go:117] "RemoveContainer" containerID="e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9" Mar 18 17:41:26.296699 master-0 kubenswrapper[4090]: I0318 17:41:26.296087 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9"} err="failed to get container status \"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9\": rpc error: code = NotFound desc = could not find container \"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9\": container with ID starting with e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9 not found: ID does not exist" Mar 18 17:41:26.296699 master-0 kubenswrapper[4090]: I0318 17:41:26.296103 4090 scope.go:117] "RemoveContainer" containerID="5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70" Mar 18 
17:41:26.296699 master-0 kubenswrapper[4090]: I0318 17:41:26.296330 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70"} err="failed to get container status \"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70\": rpc error: code = NotFound desc = could not find container \"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70\": container with ID starting with 5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70 not found: ID does not exist" Mar 18 17:41:26.296699 master-0 kubenswrapper[4090]: I0318 17:41:26.296346 4090 scope.go:117] "RemoveContainer" containerID="76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2" Mar 18 17:41:26.297269 master-0 kubenswrapper[4090]: I0318 17:41:26.296782 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2"} err="failed to get container status \"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2\": rpc error: code = NotFound desc = could not find container \"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2\": container with ID starting with 76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2 not found: ID does not exist" Mar 18 17:41:26.297269 master-0 kubenswrapper[4090]: I0318 17:41:26.296809 4090 scope.go:117] "RemoveContainer" containerID="a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5" Mar 18 17:41:26.297269 master-0 kubenswrapper[4090]: I0318 17:41:26.297140 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5"} err="failed to get container status \"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5\": rpc error: code = NotFound desc = could not find container 
\"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5\": container with ID starting with a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5 not found: ID does not exist" Mar 18 17:41:26.297269 master-0 kubenswrapper[4090]: I0318 17:41:26.297154 4090 scope.go:117] "RemoveContainer" containerID="dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058" Mar 18 17:41:26.297535 master-0 kubenswrapper[4090]: I0318 17:41:26.297357 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058"} err="failed to get container status \"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058\": rpc error: code = NotFound desc = could not find container \"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058\": container with ID starting with dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058 not found: ID does not exist" Mar 18 17:41:26.297535 master-0 kubenswrapper[4090]: I0318 17:41:26.297372 4090 scope.go:117] "RemoveContainer" containerID="6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac" Mar 18 17:41:26.297656 master-0 kubenswrapper[4090]: I0318 17:41:26.297598 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac"} err="failed to get container status \"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac\": rpc error: code = NotFound desc = could not find container \"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac\": container with ID starting with 6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac not found: ID does not exist" Mar 18 17:41:26.297656 master-0 kubenswrapper[4090]: I0318 17:41:26.297634 4090 scope.go:117] "RemoveContainer" containerID="e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c" Mar 18 
17:41:26.297854 master-0 kubenswrapper[4090]: I0318 17:41:26.297810 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c"} err="failed to get container status \"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c\": rpc error: code = NotFound desc = could not find container \"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c\": container with ID starting with e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c not found: ID does not exist" Mar 18 17:41:26.297854 master-0 kubenswrapper[4090]: I0318 17:41:26.297832 4090 scope.go:117] "RemoveContainer" containerID="f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55" Mar 18 17:41:26.298215 master-0 kubenswrapper[4090]: I0318 17:41:26.298162 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55"} err="failed to get container status \"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55\": rpc error: code = NotFound desc = could not find container \"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55\": container with ID starting with f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55 not found: ID does not exist" Mar 18 17:41:26.298215 master-0 kubenswrapper[4090]: I0318 17:41:26.298201 4090 scope.go:117] "RemoveContainer" containerID="78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249" Mar 18 17:41:26.298609 master-0 kubenswrapper[4090]: I0318 17:41:26.298562 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249"} err="failed to get container status \"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249\": rpc error: code = NotFound desc = could not find container 
\"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249\": container with ID starting with 78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249 not found: ID does not exist" Mar 18 17:41:26.298609 master-0 kubenswrapper[4090]: I0318 17:41:26.298585 4090 scope.go:117] "RemoveContainer" containerID="e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9" Mar 18 17:41:26.298810 master-0 kubenswrapper[4090]: I0318 17:41:26.298770 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9"} err="failed to get container status \"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9\": rpc error: code = NotFound desc = could not find container \"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9\": container with ID starting with e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9 not found: ID does not exist" Mar 18 17:41:26.298810 master-0 kubenswrapper[4090]: I0318 17:41:26.298792 4090 scope.go:117] "RemoveContainer" containerID="5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70" Mar 18 17:41:26.299504 master-0 kubenswrapper[4090]: I0318 17:41:26.299466 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70"} err="failed to get container status \"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70\": rpc error: code = NotFound desc = could not find container \"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70\": container with ID starting with 5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70 not found: ID does not exist" Mar 18 17:41:26.299504 master-0 kubenswrapper[4090]: I0318 17:41:26.299487 4090 scope.go:117] "RemoveContainer" containerID="76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2" Mar 18 
17:41:26.299765 master-0 kubenswrapper[4090]: I0318 17:41:26.299725 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2"} err="failed to get container status \"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2\": rpc error: code = NotFound desc = could not find container \"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2\": container with ID starting with 76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2 not found: ID does not exist" Mar 18 17:41:26.299765 master-0 kubenswrapper[4090]: I0318 17:41:26.299745 4090 scope.go:117] "RemoveContainer" containerID="a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5" Mar 18 17:41:26.300067 master-0 kubenswrapper[4090]: I0318 17:41:26.300024 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5"} err="failed to get container status \"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5\": rpc error: code = NotFound desc = could not find container \"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5\": container with ID starting with a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5 not found: ID does not exist" Mar 18 17:41:26.300067 master-0 kubenswrapper[4090]: I0318 17:41:26.300046 4090 scope.go:117] "RemoveContainer" containerID="dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058" Mar 18 17:41:26.300360 master-0 kubenswrapper[4090]: I0318 17:41:26.300324 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058"} err="failed to get container status \"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058\": rpc error: code = NotFound desc = could not find container 
\"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058\": container with ID starting with dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058 not found: ID does not exist" Mar 18 17:41:26.300360 master-0 kubenswrapper[4090]: I0318 17:41:26.300352 4090 scope.go:117] "RemoveContainer" containerID="6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac" Mar 18 17:41:26.300728 master-0 kubenswrapper[4090]: I0318 17:41:26.300672 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac"} err="failed to get container status \"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac\": rpc error: code = NotFound desc = could not find container \"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac\": container with ID starting with 6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac not found: ID does not exist" Mar 18 17:41:26.300728 master-0 kubenswrapper[4090]: I0318 17:41:26.300701 4090 scope.go:117] "RemoveContainer" containerID="e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c" Mar 18 17:41:26.300936 master-0 kubenswrapper[4090]: I0318 17:41:26.300905 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c"} err="failed to get container status \"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c\": rpc error: code = NotFound desc = could not find container \"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c\": container with ID starting with e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c not found: ID does not exist" Mar 18 17:41:26.300936 master-0 kubenswrapper[4090]: I0318 17:41:26.300931 4090 scope.go:117] "RemoveContainer" containerID="f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55" Mar 18 
17:41:26.301101 master-0 kubenswrapper[4090]: I0318 17:41:26.301071 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55"} err="failed to get container status \"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55\": rpc error: code = NotFound desc = could not find container \"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55\": container with ID starting with f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55 not found: ID does not exist" Mar 18 17:41:26.301101 master-0 kubenswrapper[4090]: I0318 17:41:26.301097 4090 scope.go:117] "RemoveContainer" containerID="78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249" Mar 18 17:41:26.301403 master-0 kubenswrapper[4090]: I0318 17:41:26.301370 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249"} err="failed to get container status \"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249\": rpc error: code = NotFound desc = could not find container \"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249\": container with ID starting with 78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249 not found: ID does not exist" Mar 18 17:41:26.301403 master-0 kubenswrapper[4090]: I0318 17:41:26.301392 4090 scope.go:117] "RemoveContainer" containerID="e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9" Mar 18 17:41:26.302355 master-0 kubenswrapper[4090]: I0318 17:41:26.302317 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9"} err="failed to get container status \"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9\": rpc error: code = NotFound desc = could not find container 
\"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9\": container with ID starting with e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9 not found: ID does not exist" Mar 18 17:41:26.302355 master-0 kubenswrapper[4090]: I0318 17:41:26.302338 4090 scope.go:117] "RemoveContainer" containerID="5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70" Mar 18 17:41:26.302573 master-0 kubenswrapper[4090]: I0318 17:41:26.302538 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70"} err="failed to get container status \"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70\": rpc error: code = NotFound desc = could not find container \"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70\": container with ID starting with 5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70 not found: ID does not exist" Mar 18 17:41:26.302573 master-0 kubenswrapper[4090]: I0318 17:41:26.302556 4090 scope.go:117] "RemoveContainer" containerID="76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2" Mar 18 17:41:26.302719 master-0 kubenswrapper[4090]: I0318 17:41:26.302694 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2"} err="failed to get container status \"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2\": rpc error: code = NotFound desc = could not find container \"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2\": container with ID starting with 76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2 not found: ID does not exist" Mar 18 17:41:26.302719 master-0 kubenswrapper[4090]: I0318 17:41:26.302713 4090 scope.go:117] "RemoveContainer" containerID="a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5" Mar 18 
17:41:26.302920 master-0 kubenswrapper[4090]: I0318 17:41:26.302885 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5"} err="failed to get container status \"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5\": rpc error: code = NotFound desc = could not find container \"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5\": container with ID starting with a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5 not found: ID does not exist" Mar 18 17:41:26.302920 master-0 kubenswrapper[4090]: I0318 17:41:26.302904 4090 scope.go:117] "RemoveContainer" containerID="dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058" Mar 18 17:41:26.303070 master-0 kubenswrapper[4090]: I0318 17:41:26.303047 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058"} err="failed to get container status \"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058\": rpc error: code = NotFound desc = could not find container \"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058\": container with ID starting with dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058 not found: ID does not exist" Mar 18 17:41:26.303070 master-0 kubenswrapper[4090]: I0318 17:41:26.303063 4090 scope.go:117] "RemoveContainer" containerID="6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac" Mar 18 17:41:26.303293 master-0 kubenswrapper[4090]: I0318 17:41:26.303227 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac"} err="failed to get container status \"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac\": rpc error: code = NotFound desc = could not find container 
\"6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac\": container with ID starting with 6af32710a9a093c2ffb5165e045584759becf8ee99079c76ea090f36b418b7ac not found: ID does not exist" Mar 18 17:41:26.303293 master-0 kubenswrapper[4090]: I0318 17:41:26.303251 4090 scope.go:117] "RemoveContainer" containerID="e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c" Mar 18 17:41:26.303491 master-0 kubenswrapper[4090]: I0318 17:41:26.303465 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c"} err="failed to get container status \"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c\": rpc error: code = NotFound desc = could not find container \"e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c\": container with ID starting with e7320e65f254a24edc9e01cc30587d714155472069d129fd73b300ad45b8a90c not found: ID does not exist" Mar 18 17:41:26.303491 master-0 kubenswrapper[4090]: I0318 17:41:26.303484 4090 scope.go:117] "RemoveContainer" containerID="f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55" Mar 18 17:41:26.303694 master-0 kubenswrapper[4090]: I0318 17:41:26.303670 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55"} err="failed to get container status \"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55\": rpc error: code = NotFound desc = could not find container \"f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55\": container with ID starting with f1d14468646824242e1b501b4d730c725291ad7baae06d0e85140b05ffb53e55 not found: ID does not exist" Mar 18 17:41:26.303694 master-0 kubenswrapper[4090]: I0318 17:41:26.303687 4090 scope.go:117] "RemoveContainer" containerID="78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249" Mar 18 
17:41:26.303878 master-0 kubenswrapper[4090]: I0318 17:41:26.303855 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249"} err="failed to get container status \"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249\": rpc error: code = NotFound desc = could not find container \"78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249\": container with ID starting with 78c48fca4536f9843571b1df443be561e2eacf57b33e35b13d706903ef574249 not found: ID does not exist" Mar 18 17:41:26.303878 master-0 kubenswrapper[4090]: I0318 17:41:26.303871 4090 scope.go:117] "RemoveContainer" containerID="e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9" Mar 18 17:41:26.304082 master-0 kubenswrapper[4090]: I0318 17:41:26.304058 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9"} err="failed to get container status \"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9\": rpc error: code = NotFound desc = could not find container \"e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9\": container with ID starting with e94cf1ae09720b90a837378ce21a224cf3d1ec15ebe4df07aca3dc99ebf1c5b9 not found: ID does not exist" Mar 18 17:41:26.304082 master-0 kubenswrapper[4090]: I0318 17:41:26.304074 4090 scope.go:117] "RemoveContainer" containerID="5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70" Mar 18 17:41:26.304292 master-0 kubenswrapper[4090]: I0318 17:41:26.304247 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70"} err="failed to get container status \"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70\": rpc error: code = NotFound desc = could not find container 
\"5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70\": container with ID starting with 5a0cc38196c956571e1020fd07aac66b445ed9098590c86d0420398246fccc70 not found: ID does not exist" Mar 18 17:41:26.304292 master-0 kubenswrapper[4090]: I0318 17:41:26.304266 4090 scope.go:117] "RemoveContainer" containerID="76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2" Mar 18 17:41:26.304506 master-0 kubenswrapper[4090]: I0318 17:41:26.304477 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2"} err="failed to get container status \"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2\": rpc error: code = NotFound desc = could not find container \"76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2\": container with ID starting with 76fe39ffc3ee88a37aec5098c7c41d8e5c952b0cd635cc6932fa8c25c90fc7b2 not found: ID does not exist" Mar 18 17:41:26.304506 master-0 kubenswrapper[4090]: I0318 17:41:26.304502 4090 scope.go:117] "RemoveContainer" containerID="a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5" Mar 18 17:41:26.304767 master-0 kubenswrapper[4090]: I0318 17:41:26.304729 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5"} err="failed to get container status \"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5\": rpc error: code = NotFound desc = could not find container \"a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5\": container with ID starting with a2f7840253ba77627965588c80fd6601ad2c7b85616e8b0045ab019fad2b4eb5 not found: ID does not exist" Mar 18 17:41:26.304767 master-0 kubenswrapper[4090]: I0318 17:41:26.304749 4090 scope.go:117] "RemoveContainer" containerID="dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058" Mar 18 
17:41:26.304955 master-0 kubenswrapper[4090]: I0318 17:41:26.304930 4090 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058"} err="failed to get container status \"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058\": rpc error: code = NotFound desc = could not find container \"dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058\": container with ID starting with dd71bfd3525e8ab627d591d8e984b8e06b2147e143ab566d2810a6c719c82058 not found: ID does not exist" Mar 18 17:41:26.374461 master-0 kubenswrapper[4090]: I0318 17:41:26.374425 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:41:26.374576 master-0 kubenswrapper[4090]: E0318 17:41:26.374548 4090 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 17:41:26.374638 master-0 kubenswrapper[4090]: E0318 17:41:26.374608 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert podName:a02399de-859b-45b1-9b00-18a08f285f39 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:30.374588517 +0000 UTC m=+167.566860431 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert") pod "cluster-version-operator-56d8475767-lqvvj" (UID: "a02399de-859b-45b1-9b00-18a08f285f39") : secret "cluster-version-operator-serving-cert" not found Mar 18 17:41:27.126411 master-0 kubenswrapper[4090]: I0318 17:41:27.126351 4090 generic.go:334] "Generic (PLEG): container finished" podID="994fff04-c1d7-4f10-8d4b-6b49a6934829" containerID="1a93390a62f28ef65e80a805fc6b9268f2506ce23dcb2e7e0c063ca4b86c7617" exitCode=0 Mar 18 17:41:27.126786 master-0 kubenswrapper[4090]: I0318 17:41:27.126428 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" event={"ID":"994fff04-c1d7-4f10-8d4b-6b49a6934829","Type":"ContainerDied","Data":"1a93390a62f28ef65e80a805fc6b9268f2506ce23dcb2e7e0c063ca4b86c7617"} Mar 18 17:41:27.126786 master-0 kubenswrapper[4090]: I0318 17:41:27.126462 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" event={"ID":"994fff04-c1d7-4f10-8d4b-6b49a6934829","Type":"ContainerStarted","Data":"1add5afbf418952e0016f7866a470207154a949d28966174c8a7f5fa79ba0e1f"} Mar 18 17:41:27.148627 master-0 kubenswrapper[4090]: I0318 17:41:27.134320 4090 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="e43f9ea395a7c58acd7f5ae682a5f3d1676e30932b7eae1967401d8e7c98e640" exitCode=0 Mar 18 17:41:27.148627 master-0 kubenswrapper[4090]: I0318 17:41:27.134421 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttbr5" event={"ID":"fea7b899-fde4-4463-9520-4d433a8ebe21","Type":"ContainerDied","Data":"e43f9ea395a7c58acd7f5ae682a5f3d1676e30932b7eae1967401d8e7c98e640"} Mar 18 17:41:27.624711 master-0 kubenswrapper[4090]: I0318 17:41:27.621260 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:27.624711 master-0 kubenswrapper[4090]: I0318 17:41:27.621457 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:27.624711 master-0 kubenswrapper[4090]: E0318 17:41:27.622035 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:41:27.624711 master-0 kubenswrapper[4090]: E0318 17:41:27.622313 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa" Mar 18 17:41:27.632870 master-0 kubenswrapper[4090]: I0318 17:41:27.632815 4090 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda1dca7-9f5f-4955-8522-345e4f6e82a2" path="/var/lib/kubelet/pods/eda1dca7-9f5f-4955-8522-345e4f6e82a2/volumes" Mar 18 17:41:28.143330 master-0 kubenswrapper[4090]: I0318 17:41:28.143260 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" event={"ID":"994fff04-c1d7-4f10-8d4b-6b49a6934829","Type":"ContainerStarted","Data":"25a3c97297e8c4545fea26a65c4fe6e86943d2c75b660913a2f153c6c2e1e00e"} Mar 18 17:41:28.143330 master-0 kubenswrapper[4090]: I0318 17:41:28.143331 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" event={"ID":"994fff04-c1d7-4f10-8d4b-6b49a6934829","Type":"ContainerStarted","Data":"e7a279e3afb881ce9ad173551ec85ac348588b9e6b8bdeff4c541a727811ad13"} Mar 18 17:41:28.143330 master-0 kubenswrapper[4090]: I0318 17:41:28.143347 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" event={"ID":"994fff04-c1d7-4f10-8d4b-6b49a6934829","Type":"ContainerStarted","Data":"df5075e989e49094497730f4e546175b79fcc21a4ac4135a5b1e7b9f86ac6d0a"} Mar 18 17:41:28.143631 master-0 kubenswrapper[4090]: I0318 17:41:28.143361 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" event={"ID":"994fff04-c1d7-4f10-8d4b-6b49a6934829","Type":"ContainerStarted","Data":"5151b8b28ff1ccd8d0f7d9940f5072947ce86ed4f2ab943851fc0e71126ebc5f"} Mar 18 17:41:28.143631 master-0 kubenswrapper[4090]: I0318 17:41:28.143374 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" 
event={"ID":"994fff04-c1d7-4f10-8d4b-6b49a6934829","Type":"ContainerStarted","Data":"e89637bfd758b36994c59b6159bef4c0b5c116eabd37b59ba4502ad1ec776558"} Mar 18 17:41:28.143631 master-0 kubenswrapper[4090]: I0318 17:41:28.143386 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" event={"ID":"994fff04-c1d7-4f10-8d4b-6b49a6934829","Type":"ContainerStarted","Data":"c3a4afdb6d3e54425d45e9689e097e674252bde0bc7b4334d636d37814970007"} Mar 18 17:41:28.147799 master-0 kubenswrapper[4090]: I0318 17:41:28.147774 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ttbr5" event={"ID":"fea7b899-fde4-4463-9520-4d433a8ebe21","Type":"ContainerStarted","Data":"271ffcda202d57e340a0f27967f0c698cfba1c754e574206b19060ac643253ca"} Mar 18 17:41:28.170588 master-0 kubenswrapper[4090]: I0318 17:41:28.170503 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ttbr5" podStartSLOduration=4.933942633 podStartE2EDuration="46.170482346s" podCreationTimestamp="2026-03-18 17:40:42 +0000 UTC" firstStartedPulling="2026-03-18 17:40:43.457197706 +0000 UTC m=+60.649469670" lastFinishedPulling="2026-03-18 17:41:24.693737469 +0000 UTC m=+101.886009383" observedRunningTime="2026-03-18 17:41:28.170437855 +0000 UTC m=+105.362709779" watchObservedRunningTime="2026-03-18 17:41:28.170482346 +0000 UTC m=+105.362754280" Mar 18 17:41:29.606919 master-0 kubenswrapper[4090]: I0318 17:41:29.606796 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:29.607952 master-0 kubenswrapper[4090]: I0318 17:41:29.606816 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:29.607952 master-0 kubenswrapper[4090]: E0318 17:41:29.607030 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:41:29.607952 master-0 kubenswrapper[4090]: E0318 17:41:29.607117 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa" Mar 18 17:41:30.110940 master-0 kubenswrapper[4090]: I0318 17:41:30.110872 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s6f5\" (UniqueName: \"kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5\") pod \"network-check-target-ctd49\" (UID: \"978dcca6-b396-463f-9614-9e24194a1aaa\") " pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:30.111130 master-0 kubenswrapper[4090]: E0318 17:41:30.111045 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 17:41:30.111130 master-0 kubenswrapper[4090]: E0318 17:41:30.111065 4090 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 17:41:30.111130 
master-0 kubenswrapper[4090]: E0318 17:41:30.111076 4090 projected.go:194] Error preparing data for projected volume kube-api-access-5s6f5 for pod openshift-network-diagnostics/network-check-target-ctd49: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 17:41:30.111130 master-0 kubenswrapper[4090]: E0318 17:41:30.111130 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5 podName:978dcca6-b396-463f-9614-9e24194a1aaa nodeName:}" failed. No retries permitted until 2026-03-18 17:42:02.111114679 +0000 UTC m=+139.303386583 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-5s6f5" (UniqueName: "kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5") pod "network-check-target-ctd49" (UID: "978dcca6-b396-463f-9614-9e24194a1aaa") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 17:41:30.161908 master-0 kubenswrapper[4090]: I0318 17:41:30.161831 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" event={"ID":"994fff04-c1d7-4f10-8d4b-6b49a6934829","Type":"ContainerStarted","Data":"09251d033bfb1453f78cf8b529f3d51f87268e9b5a1a0bf74bf9eaea8ecd45c7"} Mar 18 17:41:31.607776 master-0 kubenswrapper[4090]: I0318 17:41:31.607697 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:31.608728 master-0 kubenswrapper[4090]: I0318 17:41:31.607823 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:31.608728 master-0 kubenswrapper[4090]: E0318 17:41:31.607890 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:41:31.608728 master-0 kubenswrapper[4090]: E0318 17:41:31.608247 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa" Mar 18 17:41:33.183645 master-0 kubenswrapper[4090]: I0318 17:41:33.183568 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" event={"ID":"994fff04-c1d7-4f10-8d4b-6b49a6934829","Type":"ContainerStarted","Data":"b015a1210ad77eada94bba6d0d136bf2ddb1c2f6332cbc5d99332e490d63b54e"} Mar 18 17:41:33.184340 master-0 kubenswrapper[4090]: I0318 17:41:33.184110 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:33.184340 master-0 kubenswrapper[4090]: I0318 17:41:33.184174 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:33.184340 master-0 kubenswrapper[4090]: I0318 17:41:33.184203 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:33.217636 master-0 
kubenswrapper[4090]: I0318 17:41:33.217210 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" podStartSLOduration=8.217183724 podStartE2EDuration="8.217183724s" podCreationTimestamp="2026-03-18 17:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:41:33.215641172 +0000 UTC m=+110.407913166" watchObservedRunningTime="2026-03-18 17:41:33.217183724 +0000 UTC m=+110.409455678" Mar 18 17:41:33.222484 master-0 kubenswrapper[4090]: I0318 17:41:33.222432 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:33.227770 master-0 kubenswrapper[4090]: I0318 17:41:33.227697 4090 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:33.470360 master-0 kubenswrapper[4090]: I0318 17:41:33.469152 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mfn52"] Mar 18 17:41:33.470360 master-0 kubenswrapper[4090]: I0318 17:41:33.469195 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-ctd49"] Mar 18 17:41:33.470360 master-0 kubenswrapper[4090]: I0318 17:41:33.469263 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:33.470360 master-0 kubenswrapper[4090]: E0318 17:41:33.469362 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa" Mar 18 17:41:33.470360 master-0 kubenswrapper[4090]: I0318 17:41:33.469561 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:33.470360 master-0 kubenswrapper[4090]: E0318 17:41:33.469613 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:41:35.607745 master-0 kubenswrapper[4090]: I0318 17:41:35.607636 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:35.608998 master-0 kubenswrapper[4090]: I0318 17:41:35.607651 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:35.608998 master-0 kubenswrapper[4090]: E0318 17:41:35.607815 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:41:35.608998 master-0 kubenswrapper[4090]: E0318 17:41:35.607952 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa" Mar 18 17:41:37.607690 master-0 kubenswrapper[4090]: I0318 17:41:37.607577 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:37.608648 master-0 kubenswrapper[4090]: I0318 17:41:37.607716 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:37.608648 master-0 kubenswrapper[4090]: E0318 17:41:37.607791 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-ctd49" podUID="978dcca6-b396-463f-9614-9e24194a1aaa" Mar 18 17:41:37.608648 master-0 kubenswrapper[4090]: E0318 17:41:37.607964 4090 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mfn52" podUID="5a4f94f3-d63a-4869-b723-ae9637610b4b" Mar 18 17:41:37.642637 master-0 kubenswrapper[4090]: I0318 17:41:37.642571 4090 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Mar 18 17:41:37.644150 master-0 kubenswrapper[4090]: I0318 17:41:37.644045 4090 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Mar 18 17:41:37.696799 master-0 kubenswrapper[4090]: I0318 17:41:37.696692 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz"] Mar 18 17:41:37.697444 master-0 kubenswrapper[4090]: I0318 17:41:37.697402 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:37.700453 master-0 kubenswrapper[4090]: I0318 17:41:37.700372 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 17:41:37.700583 master-0 kubenswrapper[4090]: I0318 17:41:37.700486 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 17:41:37.705447 master-0 kubenswrapper[4090]: I0318 17:41:37.705399 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 17:41:37.715075 master-0 kubenswrapper[4090]: I0318 17:41:37.711540 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279"] Mar 18 17:41:37.715075 master-0 kubenswrapper[4090]: I0318 17:41:37.711964 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-l5gm7"] Mar 18 17:41:37.715075 master-0 kubenswrapper[4090]: I0318 17:41:37.713128 4090 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:37.715075 master-0 kubenswrapper[4090]: I0318 17:41:37.713722 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" Mar 18 17:41:37.719360 master-0 kubenswrapper[4090]: I0318 17:41:37.718000 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j"] Mar 18 17:41:37.719360 master-0 kubenswrapper[4090]: I0318 17:41:37.718583 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg"] Mar 18 17:41:37.719360 master-0 kubenswrapper[4090]: I0318 17:41:37.719033 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" Mar 18 17:41:37.720612 master-0 kubenswrapper[4090]: I0318 17:41:37.719845 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" Mar 18 17:41:37.722681 master-0 kubenswrapper[4090]: I0318 17:41:37.722230 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 17:41:37.722681 master-0 kubenswrapper[4090]: I0318 17:41:37.722581 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 17:41:37.722869 master-0 kubenswrapper[4090]: I0318 17:41:37.722752 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd"] Mar 18 17:41:37.725163 master-0 kubenswrapper[4090]: I0318 17:41:37.723204 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:37.725163 master-0 kubenswrapper[4090]: I0318 17:41:37.724254 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 17:41:37.725784 master-0 kubenswrapper[4090]: I0318 17:41:37.725600 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 17:41:37.726710 master-0 kubenswrapper[4090]: I0318 17:41:37.726156 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 17:41:37.728923 master-0 kubenswrapper[4090]: I0318 17:41:37.727071 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.741201 4090 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"]
Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.742013 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r"]
Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.742452 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"]
Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.742712 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"
Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.742745 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.742965 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"
Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.743057 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r"
Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.758635 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.758927 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.759239 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.759449 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.759573 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.759703 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.760001 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.760394 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 18 17:41:37.763405 master-0 kubenswrapper[4090]: I0318 17:41:37.760501 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 18 17:41:37.764327 master-0 kubenswrapper[4090]: I0318 17:41:37.763929 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j"
Mar 18 17:41:37.764327 master-0 kubenswrapper[4090]: I0318 17:41:37.764002 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 18 17:41:37.764327 master-0 kubenswrapper[4090]: I0318 17:41:37.764009 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b424d6c-7440-4c98-ac19-2d0642c696fd-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279"
Mar 18 17:41:37.764327 master-0 kubenswrapper[4090]: I0318 17:41:37.764084 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tvgq\" (UniqueName: \"kubernetes.io/projected/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-kube-api-access-2tvgq\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j"
Mar 18 17:41:37.764327 master-0 kubenswrapper[4090]: I0318 17:41:37.764135 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-config\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j"
Mar 18 17:41:37.764327 master-0 kubenswrapper[4090]: I0318 17:41:37.764195 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b424d6c-7440-4c98-ac19-2d0642c696fd-config\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279"
Mar 18 17:41:37.764327 master-0 kubenswrapper[4090]: I0318 17:41:37.764231 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b424d6c-7440-4c98-ac19-2d0642c696fd-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279"
Mar 18 17:41:37.765978 master-0 kubenswrapper[4090]: I0318 17:41:37.764905 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 18 17:41:37.765978 master-0 kubenswrapper[4090]: I0318 17:41:37.765530 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 18 17:41:37.765978 master-0 kubenswrapper[4090]: I0318 17:41:37.765818 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2"]
Mar 18 17:41:37.767349 master-0 kubenswrapper[4090]: I0318 17:41:37.766352 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2"
Mar 18 17:41:37.769031 master-0 kubenswrapper[4090]: I0318 17:41:37.768973 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 18 17:41:37.774709 master-0 kubenswrapper[4090]: I0318 17:41:37.770039 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 18 17:41:37.774709 master-0 kubenswrapper[4090]: I0318 17:41:37.771264 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 18 17:41:37.774709 master-0 kubenswrapper[4090]: I0318 17:41:37.771620 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 18 17:41:37.774709 master-0 kubenswrapper[4090]: I0318 17:41:37.771914 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 18 17:41:37.774709 master-0 kubenswrapper[4090]: I0318 17:41:37.772157 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 18 17:41:37.774709 master-0 kubenswrapper[4090]: I0318 17:41:37.772428 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 18 17:41:37.774709 master-0 kubenswrapper[4090]: I0318 17:41:37.772620 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 18 17:41:37.774709 master-0 kubenswrapper[4090]: I0318 17:41:37.772884 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 18 17:41:37.774709 master-0 kubenswrapper[4090]: I0318 17:41:37.773116 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 18 17:41:37.774709 master-0 kubenswrapper[4090]: I0318 17:41:37.773924 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 18 17:41:37.774709 master-0 kubenswrapper[4090]: I0318 17:41:37.774210 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 18 17:41:37.775119 master-0 kubenswrapper[4090]: I0318 17:41:37.774757 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 18 17:41:37.775119 master-0 kubenswrapper[4090]: I0318 17:41:37.774987 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-7sc7v"]
Mar 18 17:41:37.775448 master-0 kubenswrapper[4090]: I0318 17:41:37.775120 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 18 17:41:37.775654 master-0 kubenswrapper[4090]: I0318 17:41:37.775527 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 18 17:41:37.775654 master-0 kubenswrapper[4090]: I0318 17:41:37.775576 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 18 17:41:37.775654 master-0 kubenswrapper[4090]: I0318 17:41:37.775584 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 18 17:41:37.776251 master-0 kubenswrapper[4090]: I0318 17:41:37.776216 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 18 17:41:37.777267 master-0 kubenswrapper[4090]: I0318 17:41:37.776520 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 18 17:41:37.777267 master-0 kubenswrapper[4090]: I0318 17:41:37.776753 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 18 17:41:37.783401 master-0 kubenswrapper[4090]: I0318 17:41:37.777949 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg"]
Mar 18 17:41:37.783401 master-0 kubenswrapper[4090]: I0318 17:41:37.778308 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt"]
Mar 18 17:41:37.783401 master-0 kubenswrapper[4090]: I0318 17:41:37.778722 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4"]
Mar 18 17:41:37.783401 master-0 kubenswrapper[4090]: I0318 17:41:37.779181 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4"
Mar 18 17:41:37.783401 master-0 kubenswrapper[4090]: I0318 17:41:37.781501 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt"
Mar 18 17:41:37.783401 master-0 kubenswrapper[4090]: I0318 17:41:37.781518 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v"
Mar 18 17:41:37.783401 master-0 kubenswrapper[4090]: I0318 17:41:37.781971 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg"
Mar 18 17:41:37.792307 master-0 kubenswrapper[4090]: I0318 17:41:37.786707 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr"]
Mar 18 17:41:37.792307 master-0 kubenswrapper[4090]: I0318 17:41:37.787245 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr"
Mar 18 17:41:37.792307 master-0 kubenswrapper[4090]: I0318 17:41:37.790366 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc"]
Mar 18 17:41:37.817144 master-0 kubenswrapper[4090]: I0318 17:41:37.817092 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz"]
Mar 18 17:41:37.817563 master-0 kubenswrapper[4090]: I0318 17:41:37.817541 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz"
Mar 18 17:41:37.818133 master-0 kubenswrapper[4090]: I0318 17:41:37.818112 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc"
Mar 18 17:41:37.818691 master-0 kubenswrapper[4090]: I0318 17:41:37.818659 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 18 17:41:37.819201 master-0 kubenswrapper[4090]: I0318 17:41:37.819063 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 18 17:41:37.819397 master-0 kubenswrapper[4090]: I0318 17:41:37.819370 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 18 17:41:37.820113 master-0 kubenswrapper[4090]: I0318 17:41:37.820088 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 18 17:41:37.820554 master-0 kubenswrapper[4090]: I0318 17:41:37.820467 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 18 17:41:37.820916 master-0 kubenswrapper[4090]: I0318 17:41:37.820896 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 18 17:41:37.821123 master-0 kubenswrapper[4090]: I0318 17:41:37.821103 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 18 17:41:37.821403 master-0 kubenswrapper[4090]: I0318 17:41:37.821255 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 18 17:41:37.821403 master-0 kubenswrapper[4090]: I0318 17:41:37.821364 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 18 17:41:37.821488 master-0 kubenswrapper[4090]: I0318 17:41:37.821441 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 18 17:41:37.821805 master-0 kubenswrapper[4090]: I0318 17:41:37.821786 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 18 17:41:37.824906 master-0 kubenswrapper[4090]: I0318 17:41:37.824769 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 18 17:41:37.826889 master-0 kubenswrapper[4090]: I0318 17:41:37.826787 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 18 17:41:37.827341 master-0 kubenswrapper[4090]: I0318 17:41:37.827219 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 18 17:41:37.827432 master-0 kubenswrapper[4090]: I0318 17:41:37.827403 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 18 17:41:37.833638 master-0 kubenswrapper[4090]: I0318 17:41:37.833595 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf"]
Mar 18 17:41:37.836614 master-0 kubenswrapper[4090]: I0318 17:41:37.836546 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf"
Mar 18 17:41:37.845445 master-0 kubenswrapper[4090]: I0318 17:41:37.845415 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 18 17:41:37.845531 master-0 kubenswrapper[4090]: I0318 17:41:37.845503 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 18 17:41:37.845616 master-0 kubenswrapper[4090]: I0318 17:41:37.845366 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 18 17:41:37.847678 master-0 kubenswrapper[4090]: I0318 17:41:37.845920 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 18 17:41:37.847678 master-0 kubenswrapper[4090]: I0318 17:41:37.846005 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 18 17:41:37.850240 master-0 kubenswrapper[4090]: I0318 17:41:37.850202 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 18 17:41:37.852318 master-0 kubenswrapper[4090]: I0318 17:41:37.852257 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh"]
Mar 18 17:41:37.852997 master-0 kubenswrapper[4090]: I0318 17:41:37.852953 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"]
Mar 18 17:41:37.854376 master-0 kubenswrapper[4090]: I0318 17:41:37.853194 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh"
Mar 18 17:41:37.854376 master-0 kubenswrapper[4090]: I0318 17:41:37.853361 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 17:41:37.854376 master-0 kubenswrapper[4090]: I0318 17:41:37.853743 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8"]
Mar 18 17:41:37.854376 master-0 kubenswrapper[4090]: I0318 17:41:37.854147 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8"
Mar 18 17:41:37.854376 master-0 kubenswrapper[4090]: I0318 17:41:37.854352 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"]
Mar 18 17:41:37.855550 master-0 kubenswrapper[4090]: I0318 17:41:37.855527 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"
Mar 18 17:41:37.855734 master-0 kubenswrapper[4090]: I0318 17:41:37.855704 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4"]
Mar 18 17:41:37.856095 master-0 kubenswrapper[4090]: I0318 17:41:37.856068 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4"
Mar 18 17:41:37.857604 master-0 kubenswrapper[4090]: I0318 17:41:37.857579 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 18 17:41:37.857873 master-0 kubenswrapper[4090]: I0318 17:41:37.857794 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 18 17:41:37.857873 master-0 kubenswrapper[4090]: I0318 17:41:37.857830 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 18 17:41:37.862364 master-0 kubenswrapper[4090]: I0318 17:41:37.857803 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 18 17:41:37.862364 master-0 kubenswrapper[4090]: I0318 17:41:37.858096 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 18 17:41:37.862364 master-0 kubenswrapper[4090]: I0318 17:41:37.858170 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 18 17:41:37.862364 master-0 kubenswrapper[4090]: I0318 17:41:37.858200 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 18 17:41:37.862364 master-0 kubenswrapper[4090]: I0318 17:41:37.858336 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 18 17:41:37.862364 master-0 kubenswrapper[4090]: I0318 17:41:37.858340 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 17:41:37.862364 master-0 kubenswrapper[4090]: I0318 17:41:37.860564 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz"]
Mar 18 17:41:37.862364 master-0 kubenswrapper[4090]: I0318 17:41:37.860607 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j"]
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.864946 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.866171 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tvgq\" (UniqueName: \"kubernetes.io/projected/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-kube-api-access-2tvgq\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j"
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.866220 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-789k6\" (UniqueName: \"kubernetes.io/projected/c087ce06-a16b-41f4-ba93-8fccdee09003-kube-api-access-789k6\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf"
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.866251 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b424d6c-7440-4c98-ac19-2d0642c696fd-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279"
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.866307 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c087ce06-a16b-41f4-ba93-8fccdee09003-serving-cert\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf"
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.866337 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cb522b02-0b93-4711-9041-566daa06b95a-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh"
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.866359 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg"
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.866379 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-config\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf"
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.866401 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/99e215da-759d-4fff-af65-0fb64245fbd0-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt"
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.866428 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.866520 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrdqg\" (UniqueName: \"kubernetes.io/projected/7c6694a8-ccd0-491b-9f21-215450f6ce67-kube-api-access-mrdqg\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.866630 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.866647 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.866800 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmsm4\" (UniqueName: \"kubernetes.io/projected/dba5f8d7-4d25-42b5-9c58-813221bf96bb-kube-api-access-lmsm4\") pod \"csi-snapshot-controller-operator-5f5d689c6b-z9vvz\" (UID: \"dba5f8d7-4d25-42b5-9c58-813221bf96bb\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz"
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.866839 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-serving-cert\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"
Mar 18 17:41:37.868387 master-0 kubenswrapper[4090]: I0318 17:41:37.866862 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c355c750-ae2f-49fa-9a16-8fb4f688853e-config\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r"
Mar 18 17:41:37.869294 master-0 kubenswrapper[4090]: I0318 17:41:37.866915 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8"
Mar 18 17:41:37.869294 master-0 kubenswrapper[4090]: I0318 17:41:37.867014 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7"
Mar 18 17:41:37.869294 master-0 kubenswrapper[4090]: I0318 17:41:37.867062 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwps9\" (UniqueName: \"kubernetes.io/projected/e73f2834-c56c-4cef-ac3c-2317e9a4324c-kube-api-access-qwps9\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr"
Mar 18 17:41:37.869294 master-0 kubenswrapper[4090]: I0318 17:41:37.868884 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 18 17:41:37.869294 master-0 kubenswrapper[4090]: I0318 17:41:37.869056 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 18 17:41:37.869294 master-0 kubenswrapper[4090]: I0318 17:41:37.869061 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 18 17:41:37.869294 master-0 kubenswrapper[4090]: I0318 17:41:37.869187 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 18 17:41:37.869294 master-0 kubenswrapper[4090]: I0318 17:41:37.869257 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279"]
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.871468 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-l5gm7"]
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.872445 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.872560 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26575d68-0488-4dfa-a5d0-5016e481dba6-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2"
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.872589 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg"
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.872659 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc"
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.872700 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-ca\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.872754 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j"
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.872824 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.872842 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd"
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.872865 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-756j8\" (UniqueName: \"kubernetes.io/projected/ce5831a6-5a8d-4cda-9299-5d86437bcab2-kube-api-access-756j8\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7"
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.872905 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26575d68-0488-4dfa-a5d0-5016e481dba6-config\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2"
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.872936 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-config\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.872980 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf"
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.873020 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd"]
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.873063 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg"]
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.873071 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pp5f\" (UniqueName: \"kubernetes.io/projected/b1352cc7-4099-44c5-9c31-8259fb783bc7-kube-api-access-9pp5f\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v"
Mar 18 17:41:37.874314 master-0 kubenswrapper[4090]: I0318 17:41:37.873778 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 18 17:41:37.874984 master-0 kubenswrapper[4090]: I0318 17:41:37.874438 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm8jj\" (UniqueName: \"kubernetes.io/projected/d26d4515-391e-41a5-8c82-1b2b8a375662-kube-api-access-bm8jj\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4"
Mar 18 17:41:37.874984 master-0 kubenswrapper[4090]: I0318 17:41:37.874462 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb522b02-0b93-4711-9041-566daa06b95a-serving-cert\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh"
Mar 18 17:41:37.874984 master-0 kubenswrapper[4090]: I0318 17:41:37.874484 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a3a6c2c-78e7-41f3-acff-20173cbc012a-config\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4"
Mar 18 17:41:37.874984 master-0
kubenswrapper[4090]: I0318 17:41:37.874503 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:37.874984 master-0 kubenswrapper[4090]: I0318 17:41:37.874553 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clm4b\" (UniqueName: \"kubernetes.io/projected/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-kube-api-access-clm4b\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:37.874984 master-0 kubenswrapper[4090]: I0318 17:41:37.874570 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-config\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:37.874984 master-0 kubenswrapper[4090]: I0318 17:41:37.874592 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfnqp\" (UniqueName: \"kubernetes.io/projected/c355c750-ae2f-49fa-9a16-8fb4f688853e-kube-api-access-zfnqp\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 17:41:37.874984 master-0 kubenswrapper[4090]: I0318 17:41:37.874620 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-config\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" Mar 18 17:41:37.874984 master-0 kubenswrapper[4090]: I0318 17:41:37.874654 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:37.874984 master-0 kubenswrapper[4090]: I0318 17:41:37.874673 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:37.874984 master-0 kubenswrapper[4090]: I0318 17:41:37.874688 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-client\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:37.874984 master-0 kubenswrapper[4090]: I0318 17:41:37.874712 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8k5q\" (UniqueName: \"kubernetes.io/projected/99e215da-759d-4fff-af65-0fb64245fbd0-kube-api-access-n8k5q\") pod 
\"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 17:41:37.875444 master-0 kubenswrapper[4090]: I0318 17:41:37.875101 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b424d6c-7440-4c98-ac19-2d0642c696fd-config\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" Mar 18 17:41:37.875444 master-0 kubenswrapper[4090]: I0318 17:41:37.875259 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk59q\" (UniqueName: \"kubernetes.io/projected/cb522b02-0b93-4711-9041-566daa06b95a-kube-api-access-fk59q\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:41:37.875444 master-0 kubenswrapper[4090]: I0318 17:41:37.875400 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-config\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" Mar 18 17:41:37.875559 master-0 kubenswrapper[4090]: I0318 17:41:37.875488 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a3a6c2c-78e7-41f3-acff-20173cbc012a-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" Mar 18 17:41:37.875559 master-0 kubenswrapper[4090]: I0318 17:41:37.875516 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f26e239-2988-4faa-bc1d-24b15b95b7f1-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:37.875559 master-0 kubenswrapper[4090]: I0318 17:41:37.875549 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c355c750-ae2f-49fa-9a16-8fb4f688853e-serving-cert\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 17:41:37.875664 master-0 kubenswrapper[4090]: I0318 17:41:37.875572 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:37.875664 master-0 kubenswrapper[4090]: I0318 17:41:37.875598 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e64a377-f497-4416-8f22-d5c7f52e0b65-trusted-ca\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:37.875664 master-0 kubenswrapper[4090]: I0318 17:41:37.875623 4090 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:41:37.875664 master-0 kubenswrapper[4090]: I0318 17:41:37.875650 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:37.875809 master-0 kubenswrapper[4090]: I0318 17:41:37.875679 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf82n\" (UniqueName: \"kubernetes.io/projected/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-kube-api-access-nf82n\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" Mar 18 17:41:37.875809 master-0 kubenswrapper[4090]: I0318 17:41:37.875701 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a3a6c2c-78e7-41f3-acff-20173cbc012a-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" Mar 18 17:41:37.875809 master-0 kubenswrapper[4090]: I0318 17:41:37.875731 4090 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t92bz\" (UniqueName: \"kubernetes.io/projected/e9e04572-1425-440e-9869-6deef05e13e3-kube-api-access-t92bz\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:37.875809 master-0 kubenswrapper[4090]: I0318 17:41:37.875753 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:37.875809 master-0 kubenswrapper[4090]: I0318 17:41:37.875776 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/99e215da-759d-4fff-af65-0fb64245fbd0-operand-assets\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 17:41:37.875809 master-0 kubenswrapper[4090]: I0318 17:41:37.875803 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sl7p\" (UniqueName: \"kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-kube-api-access-5sl7p\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:37.876023 master-0 kubenswrapper[4090]: I0318 17:41:37.875829 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" Mar 18 17:41:37.876023 master-0 kubenswrapper[4090]: I0318 17:41:37.875856 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" Mar 18 17:41:37.876023 master-0 kubenswrapper[4090]: I0318 17:41:37.875881 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:37.876023 master-0 kubenswrapper[4090]: I0318 17:41:37.875908 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnknt\" (UniqueName: \"kubernetes.io/projected/0100a259-1358-45e8-8191-4e1f9a14ec89-kube-api-access-tnknt\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:37.876023 master-0 kubenswrapper[4090]: I0318 17:41:37.875940 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5tw2\" (UniqueName: \"kubernetes.io/projected/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-kube-api-access-l5tw2\") pod 
\"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:37.876023 master-0 kubenswrapper[4090]: I0318 17:41:37.875963 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-bound-sa-token\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:37.876023 master-0 kubenswrapper[4090]: I0318 17:41:37.875996 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlcnh\" (UniqueName: \"kubernetes.io/projected/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-kube-api-access-dlcnh\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:37.876023 master-0 kubenswrapper[4090]: I0318 17:41:37.876004 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b424d6c-7440-4c98-ac19-2d0642c696fd-config\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" Mar 18 17:41:37.876023 master-0 kubenswrapper[4090]: I0318 17:41:37.876025 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-config\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:37.876364 master-0 kubenswrapper[4090]: I0318 17:41:37.876058 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:41:37.876364 master-0 kubenswrapper[4090]: I0318 17:41:37.876084 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26575d68-0488-4dfa-a5d0-5016e481dba6-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" Mar 18 17:41:37.876364 master-0 kubenswrapper[4090]: I0318 17:41:37.876110 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sclm5\" (UniqueName: \"kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-kube-api-access-sclm5\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:37.876364 master-0 kubenswrapper[4090]: I0318 17:41:37.876149 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 17:41:37.876364 master-0 kubenswrapper[4090]: I0318 17:41:37.876164 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b424d6c-7440-4c98-ac19-2d0642c696fd-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: 
\"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" Mar 18 17:41:37.876364 master-0 kubenswrapper[4090]: I0318 17:41:37.876228 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-images\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:37.876364 master-0 kubenswrapper[4090]: I0318 17:41:37.876249 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwlxb\" (UniqueName: \"kubernetes.io/projected/37b3753f-bf4f-4a9e-a4a8-d58296bada79-kube-api-access-zwlxb\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:37.876364 master-0 kubenswrapper[4090]: I0318 17:41:37.876269 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c6694a8-ccd0-491b-9f21-215450f6ce67-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:37.876364 master-0 kubenswrapper[4090]: I0318 17:41:37.876349 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " 
pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:37.879250 master-0 kubenswrapper[4090]: I0318 17:41:37.879208 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b424d6c-7440-4c98-ac19-2d0642c696fd-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" Mar 18 17:41:37.887880 master-0 kubenswrapper[4090]: I0318 17:41:37.887821 4090 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-f7jp5"] Mar 18 17:41:37.889021 master-0 kubenswrapper[4090]: I0318 17:41:37.888985 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:37.889439 master-0 kubenswrapper[4090]: I0318 17:41:37.889393 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" Mar 18 17:41:37.893015 master-0 kubenswrapper[4090]: I0318 17:41:37.892967 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"] Mar 18 17:41:37.893262 master-0 kubenswrapper[4090]: I0318 17:41:37.893227 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 17:41:37.894844 master-0 kubenswrapper[4090]: I0318 17:41:37.894810 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 17:41:37.895132 master-0 kubenswrapper[4090]: 
I0318 17:41:37.895073 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tvgq\" (UniqueName: \"kubernetes.io/projected/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-kube-api-access-2tvgq\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" Mar 18 17:41:37.895551 master-0 kubenswrapper[4090]: I0318 17:41:37.895513 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" Mar 18 17:41:37.901446 master-0 kubenswrapper[4090]: I0318 17:41:37.901397 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b424d6c-7440-4c98-ac19-2d0642c696fd-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" Mar 18 17:41:37.908627 master-0 kubenswrapper[4090]: I0318 17:41:37.908570 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"] Mar 18 17:41:37.909777 master-0 kubenswrapper[4090]: I0318 17:41:37.909731 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r"] Mar 18 17:41:37.910556 master-0 kubenswrapper[4090]: I0318 17:41:37.910533 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4"] Mar 18 17:41:37.911643 master-0 kubenswrapper[4090]: I0318 17:41:37.911606 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh"] Mar 18 17:41:37.919090 master-0 kubenswrapper[4090]: I0318 17:41:37.917233 
4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8"] Mar 18 17:41:37.919760 master-0 kubenswrapper[4090]: I0318 17:41:37.919731 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg"] Mar 18 17:41:37.919827 master-0 kubenswrapper[4090]: I0318 17:41:37.919773 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc"] Mar 18 17:41:37.919827 master-0 kubenswrapper[4090]: I0318 17:41:37.919789 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-7sc7v"] Mar 18 17:41:37.921459 master-0 kubenswrapper[4090]: I0318 17:41:37.921419 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt"] Mar 18 17:41:37.925342 master-0 kubenswrapper[4090]: I0318 17:41:37.925312 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf"] Mar 18 17:41:37.927364 master-0 kubenswrapper[4090]: I0318 17:41:37.927340 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"] Mar 18 17:41:37.928375 master-0 kubenswrapper[4090]: I0318 17:41:37.928354 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"] Mar 18 17:41:37.930252 master-0 kubenswrapper[4090]: I0318 17:41:37.930227 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4"] Mar 18 17:41:37.930328 master-0 kubenswrapper[4090]: I0318 17:41:37.930261 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz"] Mar 18 17:41:37.930328 master-0 kubenswrapper[4090]: I0318 17:41:37.930276 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2"] Mar 18 17:41:37.932474 master-0 kubenswrapper[4090]: I0318 17:41:37.932402 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr"] Mar 18 17:41:37.977617 master-0 kubenswrapper[4090]: I0318 17:41:37.976617 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:37.977617 master-0 kubenswrapper[4090]: I0318 17:41:37.976647 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwps9\" (UniqueName: \"kubernetes.io/projected/e73f2834-c56c-4cef-ac3c-2317e9a4324c-kube-api-access-qwps9\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:41:37.977617 master-0 kubenswrapper[4090]: I0318 17:41:37.976669 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:37.977617 master-0 kubenswrapper[4090]: I0318 17:41:37.976689 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/26575d68-0488-4dfa-a5d0-5016e481dba6-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" Mar 18 17:41:37.977617 master-0 kubenswrapper[4090]: I0318 17:41:37.976708 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:37.977617 master-0 kubenswrapper[4090]: I0318 17:41:37.976726 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:37.977617 master-0 kubenswrapper[4090]: I0318 17:41:37.976751 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-ca\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:37.977617 master-0 kubenswrapper[4090]: I0318 17:41:37.976780 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 
17:41:37.977617 master-0 kubenswrapper[4090]: I0318 17:41:37.976795 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:37.977617 master-0 kubenswrapper[4090]: I0318 17:41:37.976815 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-756j8\" (UniqueName: \"kubernetes.io/projected/ce5831a6-5a8d-4cda-9299-5d86437bcab2-kube-api-access-756j8\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:37.977617 master-0 kubenswrapper[4090]: I0318 17:41:37.976830 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26575d68-0488-4dfa-a5d0-5016e481dba6-config\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" Mar 18 17:41:37.977617 master-0 kubenswrapper[4090]: I0318 17:41:37.976850 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd868\" (UniqueName: \"kubernetes.io/projected/1d969530-c138-4fb7-9bfe-0825be66c009-kube-api-access-cd868\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:37.977617 master-0 kubenswrapper[4090]: I0318 17:41:37.976867 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-config\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:37.977617 master-0 kubenswrapper[4090]: I0318 17:41:37.976883 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:37.977617 master-0 kubenswrapper[4090]: I0318 17:41:37.976902 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pp5f\" (UniqueName: \"kubernetes.io/projected/b1352cc7-4099-44c5-9c31-8259fb783bc7-kube-api-access-9pp5f\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" Mar 18 17:41:37.978206 master-0 kubenswrapper[4090]: I0318 17:41:37.976919 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm8jj\" (UniqueName: \"kubernetes.io/projected/d26d4515-391e-41a5-8c82-1b2b8a375662-kube-api-access-bm8jj\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:41:37.978206 master-0 kubenswrapper[4090]: I0318 17:41:37.976954 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb522b02-0b93-4711-9041-566daa06b95a-serving-cert\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " 
pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:41:37.978206 master-0 kubenswrapper[4090]: I0318 17:41:37.976973 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a3a6c2c-78e7-41f3-acff-20173cbc012a-config\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" Mar 18 17:41:37.978206 master-0 kubenswrapper[4090]: I0318 17:41:37.976992 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:37.978206 master-0 kubenswrapper[4090]: I0318 17:41:37.977011 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d969530-c138-4fb7-9bfe-0825be66c009-host-slash\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:37.978206 master-0 kubenswrapper[4090]: I0318 17:41:37.977032 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clm4b\" (UniqueName: \"kubernetes.io/projected/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-kube-api-access-clm4b\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:37.978206 master-0 kubenswrapper[4090]: I0318 17:41:37.977051 4090 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-config\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:37.978206 master-0 kubenswrapper[4090]: I0318 17:41:37.977083 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:37.978206 master-0 kubenswrapper[4090]: I0318 17:41:37.977100 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:37.978206 master-0 kubenswrapper[4090]: I0318 17:41:37.977118 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-client\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:37.978206 master-0 kubenswrapper[4090]: I0318 17:41:37.977138 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfnqp\" (UniqueName: \"kubernetes.io/projected/c355c750-ae2f-49fa-9a16-8fb4f688853e-kube-api-access-zfnqp\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: 
\"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 17:41:37.978206 master-0 kubenswrapper[4090]: I0318 17:41:37.977164 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk59q\" (UniqueName: \"kubernetes.io/projected/cb522b02-0b93-4711-9041-566daa06b95a-kube-api-access-fk59q\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:41:37.978206 master-0 kubenswrapper[4090]: I0318 17:41:37.977183 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8k5q\" (UniqueName: \"kubernetes.io/projected/99e215da-759d-4fff-af65-0fb64245fbd0-kube-api-access-n8k5q\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 17:41:37.978206 master-0 kubenswrapper[4090]: I0318 17:41:37.977200 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a3a6c2c-78e7-41f3-acff-20173cbc012a-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" Mar 18 17:41:37.978206 master-0 kubenswrapper[4090]: I0318 17:41:37.977217 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f26e239-2988-4faa-bc1d-24b15b95b7f1-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:37.978604 master-0 kubenswrapper[4090]: I0318 
17:41:37.977236 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c355c750-ae2f-49fa-9a16-8fb4f688853e-serving-cert\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 17:41:37.978604 master-0 kubenswrapper[4090]: I0318 17:41:37.977260 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:37.978604 master-0 kubenswrapper[4090]: I0318 17:41:37.977298 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e64a377-f497-4416-8f22-d5c7f52e0b65-trusted-ca\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:37.978604 master-0 kubenswrapper[4090]: I0318 17:41:37.977316 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:41:37.978604 master-0 kubenswrapper[4090]: I0318 17:41:37.977332 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" 
(UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:37.978604 master-0 kubenswrapper[4090]: I0318 17:41:37.977349 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf82n\" (UniqueName: \"kubernetes.io/projected/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-kube-api-access-nf82n\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" Mar 18 17:41:37.978604 master-0 kubenswrapper[4090]: I0318 17:41:37.977364 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a3a6c2c-78e7-41f3-acff-20173cbc012a-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" Mar 18 17:41:37.978604 master-0 kubenswrapper[4090]: I0318 17:41:37.977382 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t92bz\" (UniqueName: \"kubernetes.io/projected/e9e04572-1425-440e-9869-6deef05e13e3-kube-api-access-t92bz\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:37.978604 master-0 kubenswrapper[4090]: I0318 17:41:37.977398 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" 
Mar 18 17:41:37.978604 master-0 kubenswrapper[4090]: I0318 17:41:37.977412 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/99e215da-759d-4fff-af65-0fb64245fbd0-operand-assets\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 17:41:37.978604 master-0 kubenswrapper[4090]: I0318 17:41:37.977429 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sl7p\" (UniqueName: \"kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-kube-api-access-5sl7p\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:37.978604 master-0 kubenswrapper[4090]: I0318 17:41:37.977446 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" Mar 18 17:41:37.978604 master-0 kubenswrapper[4090]: I0318 17:41:37.977731 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" Mar 18 17:41:37.978604 master-0 kubenswrapper[4090]: I0318 17:41:37.977750 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: 
\"kubernetes.io/configmap/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:37.978604 master-0 kubenswrapper[4090]: I0318 17:41:37.977802 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnknt\" (UniqueName: \"kubernetes.io/projected/0100a259-1358-45e8-8191-4e1f9a14ec89-kube-api-access-tnknt\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:37.978960 master-0 kubenswrapper[4090]: I0318 17:41:37.977822 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5tw2\" (UniqueName: \"kubernetes.io/projected/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-kube-api-access-l5tw2\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:37.978960 master-0 kubenswrapper[4090]: I0318 17:41:37.977839 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlcnh\" (UniqueName: \"kubernetes.io/projected/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-kube-api-access-dlcnh\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:37.978960 master-0 kubenswrapper[4090]: I0318 17:41:37.977875 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-bound-sa-token\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " 
pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:37.978960 master-0 kubenswrapper[4090]: I0318 17:41:37.977893 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-config\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:37.978960 master-0 kubenswrapper[4090]: I0318 17:41:37.977926 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:41:37.978960 master-0 kubenswrapper[4090]: I0318 17:41:37.977964 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26575d68-0488-4dfa-a5d0-5016e481dba6-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" Mar 18 17:41:37.978960 master-0 kubenswrapper[4090]: I0318 17:41:37.977983 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sclm5\" (UniqueName: \"kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-kube-api-access-sclm5\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:37.978960 master-0 kubenswrapper[4090]: I0318 17:41:37.978002 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-images\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:37.978960 master-0 kubenswrapper[4090]: I0318 17:41:37.978042 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwlxb\" (UniqueName: \"kubernetes.io/projected/37b3753f-bf4f-4a9e-a4a8-d58296bada79-kube-api-access-zwlxb\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:37.978960 master-0 kubenswrapper[4090]: I0318 17:41:37.978060 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c6694a8-ccd0-491b-9f21-215450f6ce67-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:37.978960 master-0 kubenswrapper[4090]: I0318 17:41:37.978078 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:37.978960 master-0 kubenswrapper[4090]: I0318 17:41:37.978156 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-789k6\" (UniqueName: \"kubernetes.io/projected/c087ce06-a16b-41f4-ba93-8fccdee09003-kube-api-access-789k6\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: 
\"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:37.978960 master-0 kubenswrapper[4090]: I0318 17:41:37.978175 4090 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/1d969530-c138-4fb7-9bfe-0825be66c009-iptables-alerter-script\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:37.978960 master-0 kubenswrapper[4090]: I0318 17:41:37.978220 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c087ce06-a16b-41f4-ba93-8fccdee09003-serving-cert\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:37.978960 master-0 kubenswrapper[4090]: I0318 17:41:37.978239 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cb522b02-0b93-4711-9041-566daa06b95a-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: I0318 17:41:37.978258 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: 
I0318 17:41:37.978307 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-config\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: I0318 17:41:37.978329 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/99e215da-759d-4fff-af65-0fb64245fbd0-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: I0318 17:41:37.978347 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: I0318 17:41:37.978391 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmsm4\" (UniqueName: \"kubernetes.io/projected/dba5f8d7-4d25-42b5-9c58-813221bf96bb-kube-api-access-lmsm4\") pod \"csi-snapshot-controller-operator-5f5d689c6b-z9vvz\" (UID: \"dba5f8d7-4d25-42b5-9c58-813221bf96bb\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz" Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: I0318 17:41:37.978407 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-serving-cert\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: I0318 17:41:37.978422 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c355c750-ae2f-49fa-9a16-8fb4f688853e-config\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: I0318 17:41:37.978459 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrdqg\" (UniqueName: \"kubernetes.io/projected/7c6694a8-ccd0-491b-9f21-215450f6ce67-kube-api-access-mrdqg\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: I0318 17:41:37.978482 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: E0318 17:41:37.978615 4090 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: E0318 17:41:37.978658 4090 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls podName:6f26e239-2988-4faa-bc1d-24b15b95b7f1 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:38.47864371 +0000 UTC m=+115.670915624 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-ljrq8" (UID: "6f26e239-2988-4faa-bc1d-24b15b95b7f1") : secret "image-registry-operator-tls" not found Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: E0318 17:41:37.978935 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: E0318 17:41:37.978949 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: E0318 17:41:37.980501 4090 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: I0318 17:41:37.980527 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: E0318 17:41:37.980765 4090 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: I0318 17:41:37.980772 4090 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:37.982580 master-0 kubenswrapper[4090]: E0318 17:41:37.980983 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 17:41:37.983241 master-0 kubenswrapper[4090]: E0318 17:41:37.981069 4090 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:37.983241 master-0 kubenswrapper[4090]: E0318 17:41:37.981142 4090 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 17:41:37.983241 master-0 kubenswrapper[4090]: E0318 17:41:37.981241 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 17:41:37.983241 master-0 kubenswrapper[4090]: I0318 17:41:37.982070 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-config\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:37.983241 master-0 kubenswrapper[4090]: I0318 17:41:37.982307 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-images\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " 
pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:37.983241 master-0 kubenswrapper[4090]: I0318 17:41:37.982702 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-config\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:37.983241 master-0 kubenswrapper[4090]: E0318 17:41:37.982806 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:37.983241 master-0 kubenswrapper[4090]: E0318 17:41:37.983023 4090 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 17:41:37.983556 master-0 kubenswrapper[4090]: I0318 17:41:37.983432 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:37.983779 master-0 kubenswrapper[4090]: I0318 17:41:37.983743 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" Mar 18 17:41:37.984663 master-0 kubenswrapper[4090]: I0318 17:41:37.984563 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:37.984739 master-0 kubenswrapper[4090]: I0318 17:41:37.984699 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26575d68-0488-4dfa-a5d0-5016e481dba6-config\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" Mar 18 17:41:37.985303 master-0 kubenswrapper[4090]: E0318 17:41:37.985222 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 17:41:37.985369 master-0 kubenswrapper[4090]: I0318 17:41:37.985344 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c6694a8-ccd0-491b-9f21-215450f6ce67-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:37.985413 master-0 kubenswrapper[4090]: I0318 17:41:37.985349 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/99e215da-759d-4fff-af65-0fb64245fbd0-operand-assets\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 17:41:37.985537 master-0 kubenswrapper[4090]: I0318 17:41:37.985493 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-config\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:37.986649 master-0 kubenswrapper[4090]: E0318 17:41:37.978959 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:38.478950656 +0000 UTC m=+115.671222570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:37.986649 master-0 kubenswrapper[4090]: I0318 17:41:37.986509 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:37.986649 master-0 kubenswrapper[4090]: E0318 17:41:37.986531 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert podName:d26d4515-391e-41a5-8c82-1b2b8a375662 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:38.486461651 +0000 UTC m=+115.678733575 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6qqz4" (UID: "d26d4515-391e-41a5-8c82-1b2b8a375662") : secret "package-server-manager-serving-cert" not found Mar 18 17:41:37.986649 master-0 kubenswrapper[4090]: E0318 17:41:37.986577 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls podName:7e64a377-f497-4416-8f22-d5c7f52e0b65 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:38.486555634 +0000 UTC m=+115.678827548 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls") pod "ingress-operator-66b84d69b-qb7n6" (UID: "7e64a377-f497-4416-8f22-d5c7f52e0b65") : secret "metrics-tls" not found Mar 18 17:41:37.986649 master-0 kubenswrapper[4090]: E0318 17:41:37.986623 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls podName:b1352cc7-4099-44c5-9c31-8259fb783bc7 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:38.486599844 +0000 UTC m=+115.678871958 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls") pod "dns-operator-9c5679d8f-7sc7v" (UID: "b1352cc7-4099-44c5-9c31-8259fb783bc7") : secret "metrics-tls" not found Mar 18 17:41:37.986649 master-0 kubenswrapper[4090]: E0318 17:41:37.986643 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. 
No retries permitted until 2026-03-18 17:41:38.486634825 +0000 UTC m=+115.678906739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "node-tuning-operator-tls" not found Mar 18 17:41:37.986649 master-0 kubenswrapper[4090]: E0318 17:41:37.986658 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls podName:8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:38.486650835 +0000 UTC m=+115.678922749 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-vjrjg" (UID: "8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311") : secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:37.986955 master-0 kubenswrapper[4090]: E0318 17:41:37.986673 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs podName:a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e nodeName:}" failed. No retries permitted until 2026-03-18 17:41:38.486666516 +0000 UTC m=+115.678938420 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-gr8jc" (UID: "a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e") : secret "multus-admission-controller-secret" not found Mar 18 17:41:37.986955 master-0 kubenswrapper[4090]: E0318 17:41:37.986685 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert podName:e73f2834-c56c-4cef-ac3c-2317e9a4324c nodeName:}" failed. No retries permitted until 2026-03-18 17:41:38.486679426 +0000 UTC m=+115.678951340 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert") pod "olm-operator-5c9796789-6hngr" (UID: "e73f2834-c56c-4cef-ac3c-2317e9a4324c") : secret "olm-operator-serving-cert" not found Mar 18 17:41:37.986955 master-0 kubenswrapper[4090]: E0318 17:41:37.986703 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:38.486697656 +0000 UTC m=+115.678969570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:37.986955 master-0 kubenswrapper[4090]: E0318 17:41:37.986716 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics podName:ce5831a6-5a8d-4cda-9299-5d86437bcab2 nodeName:}" failed. 
No retries permitted until 2026-03-18 17:41:38.486711407 +0000 UTC m=+115.678983321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-l5gm7" (UID: "ce5831a6-5a8d-4cda-9299-5d86437bcab2") : secret "marketplace-operator-metrics" not found Mar 18 17:41:37.986955 master-0 kubenswrapper[4090]: E0318 17:41:37.986741 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert podName:e9e04572-1425-440e-9869-6deef05e13e3 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:38.486734017 +0000 UTC m=+115.679005931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert") pod "catalog-operator-68f85b4d6c-qpgfz" (UID: "e9e04572-1425-440e-9869-6deef05e13e3") : secret "catalog-operator-serving-cert" not found Mar 18 17:41:37.986955 master-0 kubenswrapper[4090]: I0318 17:41:37.986898 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a3a6c2c-78e7-41f3-acff-20173cbc012a-config\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" Mar 18 17:41:37.987467 master-0 kubenswrapper[4090]: I0318 17:41:37.987438 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-ca\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:37.988442 master-0 kubenswrapper[4090]: E0318 
17:41:37.988413 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:37.988549 master-0 kubenswrapper[4090]: E0318 17:41:37.988519 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:38.488501383 +0000 UTC m=+115.680773307 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:37.988823 master-0 kubenswrapper[4090]: I0318 17:41:37.988787 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cb522b02-0b93-4711-9041-566daa06b95a-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:41:37.989145 master-0 kubenswrapper[4090]: I0318 17:41:37.989114 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:37.989514 master-0 kubenswrapper[4090]: I0318 17:41:37.989484 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c355c750-ae2f-49fa-9a16-8fb4f688853e-serving-cert\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 17:41:37.989764 master-0 kubenswrapper[4090]: I0318 17:41:37.989728 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f26e239-2988-4faa-bc1d-24b15b95b7f1-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:37.989895 master-0 kubenswrapper[4090]: I0318 17:41:37.989855 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c355c750-ae2f-49fa-9a16-8fb4f688853e-config\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 17:41:37.990355 master-0 kubenswrapper[4090]: I0318 17:41:37.990011 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a3a6c2c-78e7-41f3-acff-20173cbc012a-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" Mar 18 17:41:37.990355 master-0 kubenswrapper[4090]: I0318 17:41:37.990295 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26575d68-0488-4dfa-a5d0-5016e481dba6-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" Mar 18 17:41:37.992091 
master-0 kubenswrapper[4090]: I0318 17:41:37.992044 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c087ce06-a16b-41f4-ba93-8fccdee09003-serving-cert\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:37.992498 master-0 kubenswrapper[4090]: I0318 17:41:37.992459 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-serving-cert\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:37.997503 master-0 kubenswrapper[4090]: I0318 17:41:37.997446 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-config\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:37.998595 master-0 kubenswrapper[4090]: I0318 17:41:37.998541 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e64a377-f497-4416-8f22-d5c7f52e0b65-trusted-ca\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:37.998991 master-0 kubenswrapper[4090]: I0318 17:41:37.998959 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: 
\"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" Mar 18 17:41:38.004096 master-0 kubenswrapper[4090]: I0318 17:41:38.004057 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sl7p\" (UniqueName: \"kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-kube-api-access-5sl7p\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:38.004840 master-0 kubenswrapper[4090]: I0318 17:41:38.004800 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/99e215da-759d-4fff-af65-0fb64245fbd0-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 17:41:38.005485 master-0 kubenswrapper[4090]: I0318 17:41:38.005457 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb522b02-0b93-4711-9041-566daa06b95a-serving-cert\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:41:38.005797 master-0 kubenswrapper[4090]: I0318 17:41:38.005765 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-client\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:38.005797 master-0 kubenswrapper[4090]: I0318 17:41:38.005770 4090 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwlxb\" (UniqueName: \"kubernetes.io/projected/37b3753f-bf4f-4a9e-a4a8-d58296bada79-kube-api-access-zwlxb\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:38.015466 master-0 kubenswrapper[4090]: I0318 17:41:38.015424 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf82n\" (UniqueName: \"kubernetes.io/projected/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-kube-api-access-nf82n\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" Mar 18 17:41:38.015700 master-0 kubenswrapper[4090]: I0318 17:41:38.015656 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5tw2\" (UniqueName: \"kubernetes.io/projected/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-kube-api-access-l5tw2\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:38.015700 master-0 kubenswrapper[4090]: I0318 17:41:38.015686 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26575d68-0488-4dfa-a5d0-5016e481dba6-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" Mar 18 17:41:38.017916 master-0 kubenswrapper[4090]: I0318 17:41:38.017453 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwps9\" (UniqueName: 
\"kubernetes.io/projected/e73f2834-c56c-4cef-ac3c-2317e9a4324c-kube-api-access-qwps9\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:41:38.018380 master-0 kubenswrapper[4090]: I0318 17:41:38.018359 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t92bz\" (UniqueName: \"kubernetes.io/projected/e9e04572-1425-440e-9869-6deef05e13e3-kube-api-access-t92bz\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:38.018484 master-0 kubenswrapper[4090]: I0318 17:41:38.018452 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-bound-sa-token\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:38.019605 master-0 kubenswrapper[4090]: I0318 17:41:38.019441 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfnqp\" (UniqueName: \"kubernetes.io/projected/c355c750-ae2f-49fa-9a16-8fb4f688853e-kube-api-access-zfnqp\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 17:41:38.039067 master-0 kubenswrapper[4090]: I0318 17:41:38.039034 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clm4b\" (UniqueName: \"kubernetes.io/projected/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-kube-api-access-clm4b\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" 
Mar 18 17:41:38.052519 master-0 kubenswrapper[4090]: I0318 17:41:38.052294 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sclm5\" (UniqueName: \"kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-kube-api-access-sclm5\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:38.080683 master-0 kubenswrapper[4090]: I0318 17:41:38.080352 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/1d969530-c138-4fb7-9bfe-0825be66c009-iptables-alerter-script\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:38.081753 master-0 kubenswrapper[4090]: I0318 17:41:38.080857 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd868\" (UniqueName: \"kubernetes.io/projected/1d969530-c138-4fb7-9bfe-0825be66c009-kube-api-access-cd868\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:38.081753 master-0 kubenswrapper[4090]: I0318 17:41:38.080959 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d969530-c138-4fb7-9bfe-0825be66c009-host-slash\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:38.081753 master-0 kubenswrapper[4090]: I0318 17:41:38.081245 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/1d969530-c138-4fb7-9bfe-0825be66c009-iptables-alerter-script\") pod \"iptables-alerter-f7jp5\" (UID: 
\"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:38.081753 master-0 kubenswrapper[4090]: I0318 17:41:38.081593 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d969530-c138-4fb7-9bfe-0825be66c009-host-slash\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:38.097874 master-0 kubenswrapper[4090]: I0318 17:41:38.097832 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnknt\" (UniqueName: \"kubernetes.io/projected/0100a259-1358-45e8-8191-4e1f9a14ec89-kube-api-access-tnknt\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:38.112961 master-0 kubenswrapper[4090]: I0318 17:41:38.112673 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pp5f\" (UniqueName: \"kubernetes.io/projected/b1352cc7-4099-44c5-9c31-8259fb783bc7-kube-api-access-9pp5f\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" Mar 18 17:41:38.136032 master-0 kubenswrapper[4090]: I0318 17:41:38.135978 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-756j8\" (UniqueName: \"kubernetes.io/projected/ce5831a6-5a8d-4cda-9299-5d86437bcab2-kube-api-access-756j8\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:38.143163 master-0 kubenswrapper[4090]: I0318 17:41:38.143112 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j"] Mar 18 
17:41:38.151609 master-0 kubenswrapper[4090]: I0318 17:41:38.151574 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" Mar 18 17:41:38.153903 master-0 kubenswrapper[4090]: I0318 17:41:38.153387 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-789k6\" (UniqueName: \"kubernetes.io/projected/c087ce06-a16b-41f4-ba93-8fccdee09003-kube-api-access-789k6\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:38.163850 master-0 kubenswrapper[4090]: I0318 17:41:38.163823 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" Mar 18 17:41:38.178314 master-0 kubenswrapper[4090]: I0318 17:41:38.178265 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:38.193823 master-0 kubenswrapper[4090]: I0318 17:41:38.193760 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8k5q\" (UniqueName: \"kubernetes.io/projected/99e215da-759d-4fff-af65-0fb64245fbd0-kube-api-access-n8k5q\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 17:41:38.204696 master-0 kubenswrapper[4090]: I0318 17:41:38.204642 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" event={"ID":"0b9ff55a-73fb-473f-b406-1f8b6cffdb89","Type":"ContainerStarted","Data":"39f34c1f903429d7c69072e5211db003fe4dc2847c946a6e7e2b74d4bd2e8ac8"} Mar 18 17:41:38.222347 master-0 kubenswrapper[4090]: I0318 17:41:38.222306 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:38.227022 master-0 kubenswrapper[4090]: I0318 17:41:38.225894 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk59q\" (UniqueName: \"kubernetes.io/projected/cb522b02-0b93-4711-9041-566daa06b95a-kube-api-access-fk59q\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:41:38.245072 master-0 kubenswrapper[4090]: I0318 17:41:38.241866 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:38.247736 master-0 kubenswrapper[4090]: I0318 17:41:38.245199 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmsm4\" (UniqueName: \"kubernetes.io/projected/dba5f8d7-4d25-42b5-9c58-813221bf96bb-kube-api-access-lmsm4\") pod \"csi-snapshot-controller-operator-5f5d689c6b-z9vvz\" (UID: \"dba5f8d7-4d25-42b5-9c58-813221bf96bb\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz" Mar 18 17:41:38.247736 master-0 kubenswrapper[4090]: I0318 17:41:38.245344 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 17:41:38.254651 master-0 kubenswrapper[4090]: I0318 17:41:38.253743 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" Mar 18 17:41:38.274452 master-0 kubenswrapper[4090]: I0318 17:41:38.274402 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 17:41:38.279459 master-0 kubenswrapper[4090]: I0318 17:41:38.279416 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a3a6c2c-78e7-41f3-acff-20173cbc012a-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" Mar 18 17:41:38.284966 master-0 kubenswrapper[4090]: I0318 17:41:38.284218 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlcnh\" (UniqueName: \"kubernetes.io/projected/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-kube-api-access-dlcnh\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:38.294901 master-0 kubenswrapper[4090]: I0318 17:41:38.294873 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm8jj\" (UniqueName: \"kubernetes.io/projected/d26d4515-391e-41a5-8c82-1b2b8a375662-kube-api-access-bm8jj\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:41:38.298824 master-0 kubenswrapper[4090]: I0318 17:41:38.298801 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz" Mar 18 17:41:38.313688 master-0 kubenswrapper[4090]: I0318 17:41:38.313261 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrdqg\" (UniqueName: \"kubernetes.io/projected/7c6694a8-ccd0-491b-9f21-215450f6ce67-kube-api-access-mrdqg\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:38.323616 master-0 kubenswrapper[4090]: I0318 17:41:38.321504 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:38.333710 master-0 kubenswrapper[4090]: I0318 17:41:38.333466 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:41:38.346576 master-0 kubenswrapper[4090]: I0318 17:41:38.345382 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279"] Mar 18 17:41:38.362252 master-0 kubenswrapper[4090]: I0318 17:41:38.358982 4090 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd868\" (UniqueName: \"kubernetes.io/projected/1d969530-c138-4fb7-9bfe-0825be66c009-kube-api-access-cd868\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:38.376644 master-0 kubenswrapper[4090]: I0318 17:41:38.376562 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" Mar 18 17:41:38.383959 master-0 kubenswrapper[4090]: I0318 17:41:38.383292 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:38.389330 master-0 kubenswrapper[4090]: I0318 17:41:38.389301 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg"] Mar 18 17:41:38.399639 master-0 kubenswrapper[4090]: W0318 17:41:38.399606 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b424d6c_7440_4c98_ac19_2d0642c696fd.slice/crio-d451cc909e96cb90161ef2054b945e5cb54cff4fe2886dd65033c87b6a8fe884 WatchSource:0}: Error finding container d451cc909e96cb90161ef2054b945e5cb54cff4fe2886dd65033c87b6a8fe884: Status 404 returned error can't find the container with id d451cc909e96cb90161ef2054b945e5cb54cff4fe2886dd65033c87b6a8fe884 Mar 18 17:41:38.408193 master-0 kubenswrapper[4090]: W0318 17:41:38.408151 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d969530_c138_4fb7_9bfe_0825be66c009.slice/crio-d5c2063a6c515ba48297abcc083d96a0ee10588d1979ea68a5e48cdf4d96c90e WatchSource:0}: Error finding container d5c2063a6c515ba48297abcc083d96a0ee10588d1979ea68a5e48cdf4d96c90e: Status 404 returned error can't find the container with id d5c2063a6c515ba48297abcc083d96a0ee10588d1979ea68a5e48cdf4d96c90e Mar 18 17:41:38.486933 master-0 kubenswrapper[4090]: I0318 17:41:38.486899 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:38.487010 master-0 kubenswrapper[4090]: I0318 17:41:38.486936 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:38.487010 master-0 kubenswrapper[4090]: I0318 17:41:38.486957 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:38.487010 master-0 kubenswrapper[4090]: I0318 17:41:38.486979 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:38.487089 master-0 kubenswrapper[4090]: I0318 17:41:38.487067 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 
17:41:38.487326 master-0 kubenswrapper[4090]: I0318 17:41:38.487095 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:38.487369 master-0 kubenswrapper[4090]: I0318 17:41:38.487336 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:38.487396 master-0 kubenswrapper[4090]: I0318 17:41:38.487364 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:38.487422 master-0 kubenswrapper[4090]: I0318 17:41:38.487392 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:41:38.487596 master-0 kubenswrapper[4090]: I0318 17:41:38.487573 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:38.487631 master-0 kubenswrapper[4090]: I0318 17:41:38.487614 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" Mar 18 17:41:38.487702 master-0 kubenswrapper[4090]: I0318 17:41:38.487688 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:41:38.487808 master-0 kubenswrapper[4090]: E0318 17:41:38.487329 4090 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 17:41:38.487840 master-0 kubenswrapper[4090]: E0318 17:41:38.487821 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 17:41:38.487865 master-0 kubenswrapper[4090]: E0318 17:41:38.487401 4090 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 17:41:38.487894 master-0 kubenswrapper[4090]: E0318 17:41:38.487861 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs podName:a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e 
nodeName:}" failed. No retries permitted until 2026-03-18 17:41:39.487836147 +0000 UTC m=+116.680108051 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-gr8jc" (UID: "a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e") : secret "multus-admission-controller-secret" not found Mar 18 17:41:38.487894 master-0 kubenswrapper[4090]: E0318 17:41:38.487492 4090 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 17:41:38.487894 master-0 kubenswrapper[4090]: E0318 17:41:38.487537 4090 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:38.487985 master-0 kubenswrapper[4090]: E0318 17:41:38.487560 4090 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:38.487985 master-0 kubenswrapper[4090]: E0318 17:41:38.487576 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 17:41:38.487985 master-0 kubenswrapper[4090]: E0318 17:41:38.487622 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:38.487985 master-0 kubenswrapper[4090]: E0318 17:41:38.487942 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert podName:e9e04572-1425-440e-9869-6deef05e13e3 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:39.487874338 +0000 UTC m=+116.680146242 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert") pod "catalog-operator-68f85b4d6c-qpgfz" (UID: "e9e04572-1425-440e-9869-6deef05e13e3") : secret "catalog-operator-serving-cert" not found Mar 18 17:41:38.487985 master-0 kubenswrapper[4090]: E0318 17:41:38.487948 4090 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:38.488104 master-0 kubenswrapper[4090]: E0318 17:41:38.488012 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:38.488104 master-0 kubenswrapper[4090]: E0318 17:41:38.487961 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls podName:6f26e239-2988-4faa-bc1d-24b15b95b7f1 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:39.48795315 +0000 UTC m=+116.680225064 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-ljrq8" (UID: "6f26e239-2988-4faa-bc1d-24b15b95b7f1") : secret "image-registry-operator-tls" not found Mar 18 17:41:38.488104 master-0 kubenswrapper[4090]: E0318 17:41:38.487530 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 17:41:38.488104 master-0 kubenswrapper[4090]: E0318 17:41:38.488050 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics podName:ce5831a6-5a8d-4cda-9299-5d86437bcab2 nodeName:}" failed. 
No retries permitted until 2026-03-18 17:41:39.488029722 +0000 UTC m=+116.680301636 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-l5gm7" (UID: "ce5831a6-5a8d-4cda-9299-5d86437bcab2") : secret "marketplace-operator-metrics" not found Mar 18 17:41:38.488104 master-0 kubenswrapper[4090]: E0318 17:41:38.488092 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls podName:8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:39.488083293 +0000 UTC m=+116.680355207 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-vjrjg" (UID: "8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311") : secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:38.488104 master-0 kubenswrapper[4090]: E0318 17:41:38.488105 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls podName:7e64a377-f497-4416-8f22-d5c7f52e0b65 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:39.488099204 +0000 UTC m=+116.680371118 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls") pod "ingress-operator-66b84d69b-qb7n6" (UID: "7e64a377-f497-4416-8f22-d5c7f52e0b65") : secret "metrics-tls" not found Mar 18 17:41:38.488252 master-0 kubenswrapper[4090]: E0318 17:41:38.488118 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:39.488112024 +0000 UTC m=+116.680383938 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "node-tuning-operator-tls" not found Mar 18 17:41:38.488252 master-0 kubenswrapper[4090]: E0318 17:41:38.488131 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:39.488125414 +0000 UTC m=+116.680397328 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:38.488252 master-0 kubenswrapper[4090]: E0318 17:41:38.488147 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls podName:b1352cc7-4099-44c5-9c31-8259fb783bc7 nodeName:}" failed. 
No retries permitted until 2026-03-18 17:41:39.488138234 +0000 UTC m=+116.680410148 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls") pod "dns-operator-9c5679d8f-7sc7v" (UID: "b1352cc7-4099-44c5-9c31-8259fb783bc7") : secret "metrics-tls" not found Mar 18 17:41:38.488252 master-0 kubenswrapper[4090]: E0318 17:41:38.488158 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:39.488153355 +0000 UTC m=+116.680425269 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:38.488252 master-0 kubenswrapper[4090]: E0318 17:41:38.488168 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert podName:d26d4515-391e-41a5-8c82-1b2b8a375662 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:39.488163785 +0000 UTC m=+116.680435699 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6qqz4" (UID: "d26d4515-391e-41a5-8c82-1b2b8a375662") : secret "package-server-manager-serving-cert" not found Mar 18 17:41:38.488252 master-0 kubenswrapper[4090]: E0318 17:41:38.488207 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 17:41:38.488252 master-0 kubenswrapper[4090]: E0318 17:41:38.488245 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert podName:e73f2834-c56c-4cef-ac3c-2317e9a4324c nodeName:}" failed. No retries permitted until 2026-03-18 17:41:39.488228976 +0000 UTC m=+116.680500890 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert") pod "olm-operator-5c9796789-6hngr" (UID: "e73f2834-c56c-4cef-ac3c-2317e9a4324c") : secret "olm-operator-serving-cert" not found Mar 18 17:41:38.583864 master-0 kubenswrapper[4090]: I0318 17:41:38.583625 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt"] Mar 18 17:41:38.593103 master-0 kubenswrapper[4090]: I0318 17:41:38.592730 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:38.593103 master-0 kubenswrapper[4090]: E0318 17:41:38.593057 4090 secret.go:189] Couldn't get secret 
openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:38.593211 master-0 kubenswrapper[4090]: E0318 17:41:38.593134 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:39.593114576 +0000 UTC m=+116.785386490 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:38.595293 master-0 kubenswrapper[4090]: I0318 17:41:38.595253 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2"] Mar 18 17:41:38.606169 master-0 kubenswrapper[4090]: W0318 17:41:38.605537 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26575d68_0488_4dfa_a5d0_5016e481dba6.slice/crio-5d787dbd681a8506a724fcd5492ed061edac8e14293b27fcf81a68f92d0df82e WatchSource:0}: Error finding container 5d787dbd681a8506a724fcd5492ed061edac8e14293b27fcf81a68f92d0df82e: Status 404 returned error can't find the container with id 5d787dbd681a8506a724fcd5492ed061edac8e14293b27fcf81a68f92d0df82e Mar 18 17:41:38.618245 master-0 kubenswrapper[4090]: I0318 17:41:38.618166 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz"] Mar 18 17:41:38.624905 master-0 kubenswrapper[4090]: I0318 17:41:38.624686 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"] Mar 18 17:41:38.640122 master-0 kubenswrapper[4090]: I0318 17:41:38.640059 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh"] Mar 18 17:41:38.643763 master-0 kubenswrapper[4090]: I0318 17:41:38.643736 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf"] Mar 18 17:41:38.675767 master-0 kubenswrapper[4090]: I0318 17:41:38.675727 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4"] Mar 18 17:41:38.691332 master-0 kubenswrapper[4090]: I0318 17:41:38.691166 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd"] Mar 18 17:41:38.691332 master-0 kubenswrapper[4090]: I0318 17:41:38.691220 4090 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r"] Mar 18 17:41:38.696257 master-0 kubenswrapper[4090]: W0318 17:41:38.696205 4090 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a240ab7_a1d5_4e9a_96f3_4590681cc7ed.slice/crio-9c6ba19a43312e7426d156208bc0c31b36ee526eb8006e7186d5ea94923d4e9f WatchSource:0}: Error finding container 9c6ba19a43312e7426d156208bc0c31b36ee526eb8006e7186d5ea94923d4e9f: Status 404 returned error can't find the container with id 9c6ba19a43312e7426d156208bc0c31b36ee526eb8006e7186d5ea94923d4e9f Mar 18 17:41:38.697198 master-0 kubenswrapper[4090]: W0318 17:41:38.697162 4090 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc355c750_ae2f_49fa_9a16_8fb4f688853e.slice/crio-b5e733421a5534241a408f87a9d1282c96549651c461bd5bf9e9c1999c97d9e5 WatchSource:0}: Error finding container b5e733421a5534241a408f87a9d1282c96549651c461bd5bf9e9c1999c97d9e5: Status 404 returned error can't find the container with id b5e733421a5534241a408f87a9d1282c96549651c461bd5bf9e9c1999c97d9e5 Mar 18 17:41:39.211368 master-0 kubenswrapper[4090]: I0318 17:41:39.211298 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" event={"ID":"26575d68-0488-4dfa-a5d0-5016e481dba6","Type":"ContainerStarted","Data":"51887b13aef82e88868eb337156320571051617c6952199181cd88bfeb560624"} Mar 18 17:41:39.211368 master-0 kubenswrapper[4090]: I0318 17:41:39.211366 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" event={"ID":"26575d68-0488-4dfa-a5d0-5016e481dba6","Type":"ContainerStarted","Data":"5d787dbd681a8506a724fcd5492ed061edac8e14293b27fcf81a68f92d0df82e"} Mar 18 17:41:39.212925 master-0 kubenswrapper[4090]: I0318 17:41:39.212894 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" event={"ID":"9b424d6c-7440-4c98-ac19-2d0642c696fd","Type":"ContainerStarted","Data":"d451cc909e96cb90161ef2054b945e5cb54cff4fe2886dd65033c87b6a8fe884"} Mar 18 17:41:39.214137 master-0 kubenswrapper[4090]: I0318 17:41:39.214103 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" event={"ID":"cb522b02-0b93-4711-9041-566daa06b95a","Type":"ContainerStarted","Data":"62f87c779c80aac58d08d6114e2c8cc2c2974d823d9538d2de8360d3c4243057"} Mar 18 17:41:39.219922 master-0 kubenswrapper[4090]: I0318 17:41:39.217628 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" event={"ID":"c355c750-ae2f-49fa-9a16-8fb4f688853e","Type":"ContainerStarted","Data":"b5e733421a5534241a408f87a9d1282c96549651c461bd5bf9e9c1999c97d9e5"} Mar 18 17:41:39.219922 master-0 kubenswrapper[4090]: I0318 17:41:39.219052 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" event={"ID":"99e215da-759d-4fff-af65-0fb64245fbd0","Type":"ContainerStarted","Data":"6bd8b74e410d81f6dbc5c2f014e72715199a5fa6c057d771fdb8890689635805"} Mar 18 17:41:39.220776 master-0 kubenswrapper[4090]: I0318 17:41:39.220739 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-f7jp5" event={"ID":"1d969530-c138-4fb7-9bfe-0825be66c009","Type":"ContainerStarted","Data":"d5c2063a6c515ba48297abcc083d96a0ee10588d1979ea68a5e48cdf4d96c90e"} Mar 18 17:41:39.222048 master-0 kubenswrapper[4090]: I0318 17:41:39.222023 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" event={"ID":"c087ce06-a16b-41f4-ba93-8fccdee09003","Type":"ContainerStarted","Data":"681e9cfa9d99b6787480ff89127df11d81327ab93296d6efacd157b94bbfa393"} Mar 18 17:41:39.223088 master-0 kubenswrapper[4090]: I0318 17:41:39.223048 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" event={"ID":"0100a259-1358-45e8-8191-4e1f9a14ec89","Type":"ContainerStarted","Data":"a06a3f0fb54d1869684741c01721cbf6af520d75473205b84e908f306a368b3a"} Mar 18 17:41:39.224570 master-0 kubenswrapper[4090]: I0318 17:41:39.224536 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" 
event={"ID":"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed","Type":"ContainerStarted","Data":"9c6ba19a43312e7426d156208bc0c31b36ee526eb8006e7186d5ea94923d4e9f"} Mar 18 17:41:39.227499 master-0 kubenswrapper[4090]: I0318 17:41:39.227434 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" event={"ID":"f7ff61c7-32d1-4407-a792-8e22bb4d50f9","Type":"ContainerStarted","Data":"8d76a48b181c0cd15d1de5c39a3bc3d9f330bf1dff375bce677cfee095393ae6"} Mar 18 17:41:39.228636 master-0 kubenswrapper[4090]: I0318 17:41:39.228600 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" event={"ID":"3a3a6c2c-78e7-41f3-acff-20173cbc012a","Type":"ContainerStarted","Data":"3850c530da1325c13b135240c71869228656f1ceff63510ab0a98443cee54a55"} Mar 18 17:41:39.229978 master-0 kubenswrapper[4090]: I0318 17:41:39.229938 4090 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz" event={"ID":"dba5f8d7-4d25-42b5-9c58-813221bf96bb","Type":"ContainerStarted","Data":"6855c26bf134f973aca5b753cd9252cc1f86b218f035870b1dab49845cbadb56"} Mar 18 17:41:39.236429 master-0 kubenswrapper[4090]: I0318 17:41:39.235254 4090 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" podStartSLOduration=81.235244146 podStartE2EDuration="1m21.235244146s" podCreationTimestamp="2026-03-18 17:40:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:41:39.234914059 +0000 UTC m=+116.427185973" watchObservedRunningTime="2026-03-18 17:41:39.235244146 +0000 UTC m=+116.427516060" Mar 18 17:41:39.506761 master-0 kubenswrapper[4090]: I0318 17:41:39.506491 4090 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:39.506761 master-0 kubenswrapper[4090]: I0318 17:41:39.506617 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:39.506981 master-0 kubenswrapper[4090]: E0318 17:41:39.506766 4090 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 17:41:39.506981 master-0 kubenswrapper[4090]: I0318 17:41:39.506892 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:39.506981 master-0 kubenswrapper[4090]: I0318 17:41:39.506951 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:39.507073 master-0 kubenswrapper[4090]: 
I0318 17:41:39.507015 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:39.507073 master-0 kubenswrapper[4090]: E0318 17:41:39.507027 4090 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:39.507073 master-0 kubenswrapper[4090]: I0318 17:41:39.507046 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:39.507073 master-0 kubenswrapper[4090]: I0318 17:41:39.507064 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:39.507194 master-0 kubenswrapper[4090]: E0318 17:41:39.507095 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls podName:8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:41.507077162 +0000 UTC m=+118.699349076 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-vjrjg" (UID: "8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311") : secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:39.507194 master-0 kubenswrapper[4090]: I0318 17:41:39.507111 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:39.507194 master-0 kubenswrapper[4090]: I0318 17:41:39.507134 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:39.507194 master-0 kubenswrapper[4090]: I0318 17:41:39.507155 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:41:39.507194 master-0 kubenswrapper[4090]: E0318 17:41:39.507156 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:39.507194 master-0 kubenswrapper[4090]: E0318 17:41:39.507184 4090 secret.go:189] Couldn't get secret 
openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:39.507421 master-0 kubenswrapper[4090]: E0318 17:41:39.507235 4090 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:39.507421 master-0 kubenswrapper[4090]: E0318 17:41:39.507240 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 17:41:39.507421 master-0 kubenswrapper[4090]: E0318 17:41:39.507243 4090 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 17:41:39.507421 master-0 kubenswrapper[4090]: I0318 17:41:39.507188 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" Mar 18 17:41:39.507421 master-0 kubenswrapper[4090]: E0318 17:41:39.507306 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 17:41:39.507421 master-0 kubenswrapper[4090]: E0318 17:41:39.507320 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:39.507421 master-0 kubenswrapper[4090]: E0318 17:41:39.507193 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:41.507184914 +0000 UTC m=+118.699456828 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:39.507421 master-0 kubenswrapper[4090]: E0318 17:41:39.507348 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls podName:b1352cc7-4099-44c5-9c31-8259fb783bc7 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:41.507340097 +0000 UTC m=+118.699612011 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls") pod "dns-operator-9c5679d8f-7sc7v" (UID: "b1352cc7-4099-44c5-9c31-8259fb783bc7") : secret "metrics-tls" not found Mar 18 17:41:39.507421 master-0 kubenswrapper[4090]: E0318 17:41:39.507357 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert podName:e9e04572-1425-440e-9869-6deef05e13e3 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:41.507353177 +0000 UTC m=+118.699625091 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert") pod "catalog-operator-68f85b4d6c-qpgfz" (UID: "e9e04572-1425-440e-9869-6deef05e13e3") : secret "catalog-operator-serving-cert" not found Mar 18 17:41:39.507421 master-0 kubenswrapper[4090]: E0318 17:41:39.507367 4090 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 17:41:39.507421 master-0 kubenswrapper[4090]: E0318 17:41:39.507388 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics podName:ce5831a6-5a8d-4cda-9299-5d86437bcab2 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:41.507380538 +0000 UTC m=+118.699652452 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-l5gm7" (UID: "ce5831a6-5a8d-4cda-9299-5d86437bcab2") : secret "marketplace-operator-metrics" not found Mar 18 17:41:39.507421 master-0 kubenswrapper[4090]: I0318 17:41:39.507392 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:41:39.507773 master-0 kubenswrapper[4090]: E0318 17:41:39.507451 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 17:41:39.507773 master-0 kubenswrapper[4090]: E0318 17:41:39.507476 4090 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls podName:6f26e239-2988-4faa-bc1d-24b15b95b7f1 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:41.50746598 +0000 UTC m=+118.699737894 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-ljrq8" (UID: "6f26e239-2988-4faa-bc1d-24b15b95b7f1") : secret "image-registry-operator-tls" not found Mar 18 17:41:39.507773 master-0 kubenswrapper[4090]: E0318 17:41:39.507496 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls podName:7e64a377-f497-4416-8f22-d5c7f52e0b65 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:41.50748867 +0000 UTC m=+118.699760584 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls") pod "ingress-operator-66b84d69b-qb7n6" (UID: "7e64a377-f497-4416-8f22-d5c7f52e0b65") : secret "metrics-tls" not found Mar 18 17:41:39.507773 master-0 kubenswrapper[4090]: E0318 17:41:39.507512 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs podName:a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e nodeName:}" failed. No retries permitted until 2026-03-18 17:41:41.507505021 +0000 UTC m=+118.699776935 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-gr8jc" (UID: "a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e") : secret "multus-admission-controller-secret" not found Mar 18 17:41:39.507773 master-0 kubenswrapper[4090]: E0318 17:41:39.507523 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:41.507517611 +0000 UTC m=+118.699789525 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "node-tuning-operator-tls" not found Mar 18 17:41:39.507773 master-0 kubenswrapper[4090]: E0318 17:41:39.507523 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 17:41:39.507773 master-0 kubenswrapper[4090]: E0318 17:41:39.507534 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:41.507529151 +0000 UTC m=+118.699801065 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:39.507773 master-0 kubenswrapper[4090]: E0318 17:41:39.507553 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert podName:e73f2834-c56c-4cef-ac3c-2317e9a4324c nodeName:}" failed. No retries permitted until 2026-03-18 17:41:41.507547891 +0000 UTC m=+118.699819805 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert") pod "olm-operator-5c9796789-6hngr" (UID: "e73f2834-c56c-4cef-ac3c-2317e9a4324c") : secret "olm-operator-serving-cert" not found Mar 18 17:41:39.507773 master-0 kubenswrapper[4090]: E0318 17:41:39.507633 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert podName:d26d4515-391e-41a5-8c82-1b2b8a375662 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:41.507615313 +0000 UTC m=+118.699887217 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6qqz4" (UID: "d26d4515-391e-41a5-8c82-1b2b8a375662") : secret "package-server-manager-serving-cert" not found Mar 18 17:41:39.607456 master-0 kubenswrapper[4090]: I0318 17:41:39.607386 4090 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:39.607637 master-0 kubenswrapper[4090]: I0318 17:41:39.607469 4090 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:39.609913 master-0 kubenswrapper[4090]: I0318 17:41:39.609875 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:39.610139 master-0 kubenswrapper[4090]: E0318 17:41:39.610114 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:39.610208 master-0 kubenswrapper[4090]: I0318 17:41:39.610145 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 17:41:39.610208 master-0 kubenswrapper[4090]: E0318 17:41:39.610186 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:41.610169936 +0000 UTC m=+118.802441850 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:39.610330 master-0 kubenswrapper[4090]: I0318 17:41:39.610311 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 17:41:39.610443 master-0 kubenswrapper[4090]: I0318 17:41:39.610422 4090 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 17:41:41.586384 master-0 kubenswrapper[4090]: I0318 17:41:41.585975 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:41.586384 master-0 kubenswrapper[4090]: I0318 17:41:41.586368 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:41.586384 master-0 kubenswrapper[4090]: I0318 17:41:41.586400 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " 
pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:41.587511 master-0 kubenswrapper[4090]: E0318 17:41:41.586193 4090 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 17:41:41.587511 master-0 kubenswrapper[4090]: E0318 17:41:41.586543 4090 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 17:41:41.587511 master-0 kubenswrapper[4090]: E0318 17:41:41.586545 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls podName:6f26e239-2988-4faa-bc1d-24b15b95b7f1 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:45.586514239 +0000 UTC m=+122.778786153 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-ljrq8" (UID: "6f26e239-2988-4faa-bc1d-24b15b95b7f1") : secret "image-registry-operator-tls" not found Mar 18 17:41:41.587511 master-0 kubenswrapper[4090]: E0318 17:41:41.586636 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs podName:a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e nodeName:}" failed. No retries permitted until 2026-03-18 17:41:45.586608191 +0000 UTC m=+122.778880275 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-gr8jc" (UID: "a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e") : secret "multus-admission-controller-secret" not found Mar 18 17:41:41.587511 master-0 kubenswrapper[4090]: I0318 17:41:41.586426 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:41.587511 master-0 kubenswrapper[4090]: I0318 17:41:41.586688 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:41.587511 master-0 kubenswrapper[4090]: I0318 17:41:41.586716 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:41.587511 master-0 kubenswrapper[4090]: E0318 17:41:41.586713 4090 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:41.587511 master-0 kubenswrapper[4090]: I0318 17:41:41.586740 4090 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:41.587511 master-0 kubenswrapper[4090]: E0318 17:41:41.586802 4090 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 17:41:41.587511 master-0 kubenswrapper[4090]: E0318 17:41:41.586838 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls podName:7e64a377-f497-4416-8f22-d5c7f52e0b65 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:45.586810495 +0000 UTC m=+122.779082409 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls") pod "ingress-operator-66b84d69b-qb7n6" (UID: "7e64a377-f497-4416-8f22-d5c7f52e0b65") : secret "metrics-tls" not found Mar 18 17:41:41.587511 master-0 kubenswrapper[4090]: E0318 17:41:41.586865 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics podName:ce5831a6-5a8d-4cda-9299-5d86437bcab2 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:45.586853536 +0000 UTC m=+122.779125450 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-l5gm7" (UID: "ce5831a6-5a8d-4cda-9299-5d86437bcab2") : secret "marketplace-operator-metrics" not found Mar 18 17:41:41.587511 master-0 kubenswrapper[4090]: E0318 17:41:41.586876 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 17:41:41.587511 master-0 kubenswrapper[4090]: I0318 17:41:41.586903 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:41.587511 master-0 kubenswrapper[4090]: E0318 17:41:41.586913 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:45.586899267 +0000 UTC m=+122.779171391 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "node-tuning-operator-tls" not found Mar 18 17:41:41.587951 master-0 kubenswrapper[4090]: I0318 17:41:41.586960 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:41:41.587951 master-0 kubenswrapper[4090]: E0318 17:41:41.586734 4090 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:41.587951 master-0 kubenswrapper[4090]: E0318 17:41:41.586972 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 17:41:41.587951 master-0 kubenswrapper[4090]: E0318 17:41:41.587007 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls podName:8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:45.586998069 +0000 UTC m=+122.779269983 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-vjrjg" (UID: "8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311") : secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:41.587951 master-0 kubenswrapper[4090]: I0318 17:41:41.586986 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:41.587951 master-0 kubenswrapper[4090]: E0318 17:41:41.587032 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:41.587951 master-0 kubenswrapper[4090]: E0318 17:41:41.587058 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert podName:e9e04572-1425-440e-9869-6deef05e13e3 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:45.5870324 +0000 UTC m=+122.779304304 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert") pod "catalog-operator-68f85b4d6c-qpgfz" (UID: "e9e04572-1425-440e-9869-6deef05e13e3") : secret "catalog-operator-serving-cert" not found Mar 18 17:41:41.587951 master-0 kubenswrapper[4090]: E0318 17:41:41.586920 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:41.587951 master-0 kubenswrapper[4090]: E0318 17:41:41.587082 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 17:41:41.587951 master-0 kubenswrapper[4090]: I0318 17:41:41.587088 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" Mar 18 17:41:41.587951 master-0 kubenswrapper[4090]: I0318 17:41:41.587124 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:41:41.587951 master-0 kubenswrapper[4090]: E0318 17:41:41.587132 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:45.587122222 +0000 UTC m=+122.779394136 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:41.587951 master-0 kubenswrapper[4090]: E0318 17:41:41.587153 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert podName:d26d4515-391e-41a5-8c82-1b2b8a375662 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:45.587146912 +0000 UTC m=+122.779418816 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6qqz4" (UID: "d26d4515-391e-41a5-8c82-1b2b8a375662") : secret "package-server-manager-serving-cert" not found Mar 18 17:41:41.587951 master-0 kubenswrapper[4090]: E0318 17:41:41.587167 4090 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:41.587951 master-0 kubenswrapper[4090]: E0318 17:41:41.587175 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:45.587168983 +0000 UTC m=+122.779440897 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:41.588377 master-0 kubenswrapper[4090]: E0318 17:41:41.587202 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls podName:b1352cc7-4099-44c5-9c31-8259fb783bc7 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:45.587191553 +0000 UTC m=+122.779463467 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls") pod "dns-operator-9c5679d8f-7sc7v" (UID: "b1352cc7-4099-44c5-9c31-8259fb783bc7") : secret "metrics-tls" not found Mar 18 17:41:41.588377 master-0 kubenswrapper[4090]: E0318 17:41:41.587219 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 17:41:41.588377 master-0 kubenswrapper[4090]: E0318 17:41:41.587248 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert podName:e73f2834-c56c-4cef-ac3c-2317e9a4324c nodeName:}" failed. No retries permitted until 2026-03-18 17:41:45.587238474 +0000 UTC m=+122.779510488 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert") pod "olm-operator-5c9796789-6hngr" (UID: "e73f2834-c56c-4cef-ac3c-2317e9a4324c") : secret "olm-operator-serving-cert" not found Mar 18 17:41:41.688078 master-0 kubenswrapper[4090]: I0318 17:41:41.687983 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:41.688487 master-0 kubenswrapper[4090]: E0318 17:41:41.688261 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:41.688487 master-0 kubenswrapper[4090]: E0318 17:41:41.688420 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:45.688390557 +0000 UTC m=+122.880662671 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:45.631906 master-0 kubenswrapper[4090]: I0318 17:41:45.631750 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:45.631906 master-0 kubenswrapper[4090]: I0318 17:41:45.631829 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:45.631906 master-0 kubenswrapper[4090]: I0318 17:41:45.631867 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:45.631906 master-0 kubenswrapper[4090]: E0318 17:41:45.631913 4090 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 17:41:45.633510 master-0 kubenswrapper[4090]: E0318 17:41:45.631982 4090 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls podName:6f26e239-2988-4faa-bc1d-24b15b95b7f1 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:53.631960065 +0000 UTC m=+130.824231979 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-ljrq8" (UID: "6f26e239-2988-4faa-bc1d-24b15b95b7f1") : secret "image-registry-operator-tls" not found Mar 18 17:41:45.633510 master-0 kubenswrapper[4090]: E0318 17:41:45.631999 4090 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:45.633510 master-0 kubenswrapper[4090]: I0318 17:41:45.632029 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:45.633510 master-0 kubenswrapper[4090]: E0318 17:41:45.632084 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls podName:7e64a377-f497-4416-8f22-d5c7f52e0b65 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:53.632048987 +0000 UTC m=+130.824320931 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls") pod "ingress-operator-66b84d69b-qb7n6" (UID: "7e64a377-f497-4416-8f22-d5c7f52e0b65") : secret "metrics-tls" not found Mar 18 17:41:45.633510 master-0 kubenswrapper[4090]: E0318 17:41:45.632131 4090 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 17:41:45.633510 master-0 kubenswrapper[4090]: E0318 17:41:45.632139 4090 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:45.633510 master-0 kubenswrapper[4090]: E0318 17:41:45.632169 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs podName:a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e nodeName:}" failed. No retries permitted until 2026-03-18 17:41:53.632157079 +0000 UTC m=+130.824429003 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-gr8jc" (UID: "a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e") : secret "multus-admission-controller-secret" not found Mar 18 17:41:45.633510 master-0 kubenswrapper[4090]: I0318 17:41:45.632189 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:45.633510 master-0 kubenswrapper[4090]: E0318 17:41:45.632222 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls podName:8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:53.63219327 +0000 UTC m=+130.824465224 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-vjrjg" (UID: "8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311") : secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:45.633510 master-0 kubenswrapper[4090]: E0318 17:41:45.632258 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 17:41:45.633510 master-0 kubenswrapper[4090]: E0318 17:41:45.632310 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:53.632298852 +0000 UTC m=+130.824570766 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "node-tuning-operator-tls" not found Mar 18 17:41:45.633510 master-0 kubenswrapper[4090]: I0318 17:41:45.632264 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:45.633510 master-0 kubenswrapper[4090]: E0318 17:41:45.632406 4090 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 17:41:45.633510 master-0 kubenswrapper[4090]: I0318 
17:41:45.632560 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:45.633510 master-0 kubenswrapper[4090]: I0318 17:41:45.632656 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:45.634062 master-0 kubenswrapper[4090]: I0318 17:41:45.632713 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:45.634062 master-0 kubenswrapper[4090]: I0318 17:41:45.632784 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:41:45.634062 master-0 kubenswrapper[4090]: I0318 17:41:45.632861 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" Mar 18 17:41:45.634062 master-0 kubenswrapper[4090]: I0318 17:41:45.632944 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:41:45.634062 master-0 kubenswrapper[4090]: E0318 17:41:45.633009 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:45.634062 master-0 kubenswrapper[4090]: E0318 17:41:45.633092 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:53.633066538 +0000 UTC m=+130.825338492 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:45.634062 master-0 kubenswrapper[4090]: E0318 17:41:45.633122 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 17:41:45.634062 master-0 kubenswrapper[4090]: E0318 17:41:45.633192 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert podName:e73f2834-c56c-4cef-ac3c-2317e9a4324c nodeName:}" failed. No retries permitted until 2026-03-18 17:41:53.63316852 +0000 UTC m=+130.825440464 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert") pod "olm-operator-5c9796789-6hngr" (UID: "e73f2834-c56c-4cef-ac3c-2317e9a4324c") : secret "olm-operator-serving-cert" not found Mar 18 17:41:45.634062 master-0 kubenswrapper[4090]: E0318 17:41:45.633204 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 17:41:45.634062 master-0 kubenswrapper[4090]: E0318 17:41:45.633306 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert podName:d26d4515-391e-41a5-8c82-1b2b8a375662 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:53.633246281 +0000 UTC m=+130.825518225 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6qqz4" (UID: "d26d4515-391e-41a5-8c82-1b2b8a375662") : secret "package-server-manager-serving-cert" not found Mar 18 17:41:45.634062 master-0 kubenswrapper[4090]: E0318 17:41:45.633336 4090 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:45.634062 master-0 kubenswrapper[4090]: E0318 17:41:45.633400 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls podName:b1352cc7-4099-44c5-9c31-8259fb783bc7 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:53.633379494 +0000 UTC m=+130.825651448 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls") pod "dns-operator-9c5679d8f-7sc7v" (UID: "b1352cc7-4099-44c5-9c31-8259fb783bc7") : secret "metrics-tls" not found Mar 18 17:41:45.634062 master-0 kubenswrapper[4090]: E0318 17:41:45.633416 4090 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 17:41:45.634062 master-0 kubenswrapper[4090]: E0318 17:41:45.633440 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics podName:ce5831a6-5a8d-4cda-9299-5d86437bcab2 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:53.633422765 +0000 UTC m=+130.825694739 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-l5gm7" (UID: "ce5831a6-5a8d-4cda-9299-5d86437bcab2") : secret "marketplace-operator-metrics" not found Mar 18 17:41:45.634555 master-0 kubenswrapper[4090]: E0318 17:41:45.633477 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert podName:e9e04572-1425-440e-9869-6deef05e13e3 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:53.633457466 +0000 UTC m=+130.825729420 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert") pod "catalog-operator-68f85b4d6c-qpgfz" (UID: "e9e04572-1425-440e-9869-6deef05e13e3") : secret "catalog-operator-serving-cert" not found Mar 18 17:41:45.634555 master-0 kubenswrapper[4090]: E0318 17:41:45.633618 4090 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:45.634555 master-0 kubenswrapper[4090]: E0318 17:41:45.633742 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:53.633713461 +0000 UTC m=+130.825985585 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:45.733933 master-0 kubenswrapper[4090]: I0318 17:41:45.733847 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:45.734266 master-0 kubenswrapper[4090]: E0318 17:41:45.734178 4090 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:45.734393 master-0 kubenswrapper[4090]: E0318 17:41:45.734356 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:53.734269602 +0000 UTC m=+130.926541556 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:47.252336 master-0 kubenswrapper[4090]: I0318 17:41:47.252142 4090 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:47.254774 master-0 kubenswrapper[4090]: I0318 17:41:47.254717 4090 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 17:41:47.263509 master-0 kubenswrapper[4090]: E0318 17:41:47.263431 4090 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 17:41:47.263716 master-0 kubenswrapper[4090]: E0318 17:41:47.263598 4090 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs podName:5a4f94f3-d63a-4869-b723-ae9637610b4b nodeName:}" failed. No retries permitted until 2026-03-18 17:42:51.263558952 +0000 UTC m=+188.455830906 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs") pod "network-metrics-daemon-mfn52" (UID: "5a4f94f3-d63a-4869-b723-ae9637610b4b") : secret "metrics-daemon-secret" not found Mar 18 17:41:49.761741 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 18 17:41:49.777753 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 18 17:41:49.778067 master-0 systemd[1]: Stopped Kubernetes Kubelet. 
Mar 18 17:41:49.782495 master-0 systemd[1]: kubelet.service: Consumed 10.606s CPU time. Mar 18 17:41:49.794616 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 18 17:41:49.914048 master-0 kubenswrapper[7553]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 17:41:49.914048 master-0 kubenswrapper[7553]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 18 17:41:49.914048 master-0 kubenswrapper[7553]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 17:41:49.914048 master-0 kubenswrapper[7553]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 17:41:49.914048 master-0 kubenswrapper[7553]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 18 17:41:49.914048 master-0 kubenswrapper[7553]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 18 17:41:49.915511 master-0 kubenswrapper[7553]: I0318 17:41:49.914182 7553 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 18 17:41:49.917505 master-0 kubenswrapper[7553]: W0318 17:41:49.917467 7553 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 17:41:49.917505 master-0 kubenswrapper[7553]: W0318 17:41:49.917500 7553 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 17:41:49.917505 master-0 kubenswrapper[7553]: W0318 17:41:49.917508 7553 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 17:41:49.917505 master-0 kubenswrapper[7553]: W0318 17:41:49.917513 7553 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 17:41:49.917505 master-0 kubenswrapper[7553]: W0318 17:41:49.917518 7553 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 17:41:49.917505 master-0 kubenswrapper[7553]: W0318 17:41:49.917524 7553 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917531 7553 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917537 7553 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917543 7553 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917548 7553 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917554 7553 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917559 7553 feature_gate.go:330] 
unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917564 7553 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917569 7553 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917622 7553 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917630 7553 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917635 7553 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917640 7553 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917646 7553 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917652 7553 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917662 7553 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917669 7553 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917674 7553 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917678 7553 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 17:41:49.917705 master-0 kubenswrapper[7553]: W0318 17:41:49.917683 7553 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917689 7553 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917698 7553 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917704 7553 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917710 7553 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917718 7553 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917725 7553 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917733 7553 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917765 7553 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917791 7553 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 
17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917798 7553 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917804 7553 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917809 7553 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917814 7553 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917821 7553 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917827 7553 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 17:41:49.918201 master-0 kubenswrapper[7553]: W0318 17:41:49.917853 7553 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918249 7553 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918262 7553 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918267 7553 feature_gate.go:330] unrecognized feature gate: Example Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918296 7553 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918301 7553 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918306 7553 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 17:41:49.918663 master-0 
kubenswrapper[7553]: W0318 17:41:49.918312 7553 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918317 7553 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918322 7553 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918327 7553 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918332 7553 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918337 7553 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918343 7553 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918350 7553 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918357 7553 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918362 7553 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918368 7553 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918373 7553 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918379 7553 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 17:41:49.918663 master-0 kubenswrapper[7553]: W0318 17:41:49.918384 7553 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: W0318 17:41:49.918389 7553 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: W0318 17:41:49.918397 7553 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: W0318 17:41:49.918402 7553 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: W0318 17:41:49.918408 7553 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: W0318 17:41:49.918413 7553 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: W0318 17:41:49.918418 7553 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: W0318 17:41:49.918424 7553 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: W0318 17:41:49.918429 7553 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: W0318 17:41:49.918436 7553 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: W0318 17:41:49.918441 7553 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: W0318 17:41:49.918446 7553 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: I0318 17:41:49.918583 7553 flags.go:64] FLAG: --address="0.0.0.0"
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: I0318 17:41:49.918599 7553 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: I0318 17:41:49.918611 7553 flags.go:64] FLAG: --anonymous-auth="true"
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: I0318 17:41:49.918619 7553 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: I0318 17:41:49.918628 7553 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: I0318 17:41:49.918634 7553 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: I0318 17:41:49.918642 7553 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: I0318 17:41:49.918650 7553 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: I0318 17:41:49.918656 7553 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 18 17:41:49.919196 master-0 kubenswrapper[7553]: I0318 17:41:49.918663 7553 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918670 7553 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918677 7553 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918683 7553 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918690 7553 flags.go:64] FLAG: --cgroup-root=""
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918696 7553 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918702 7553 flags.go:64] FLAG: --client-ca-file=""
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918709 7553 flags.go:64] FLAG: --cloud-config=""
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918715 7553 flags.go:64] FLAG: --cloud-provider=""
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918720 7553 flags.go:64] FLAG: --cluster-dns="[]"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918763 7553 flags.go:64] FLAG: --cluster-domain=""
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918768 7553 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918773 7553 flags.go:64] FLAG: --config-dir=""
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918779 7553 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918785 7553 flags.go:64] FLAG: --container-log-max-files="5"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918792 7553 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918798 7553 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918806 7553 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918812 7553 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918817 7553 flags.go:64] FLAG: --contention-profiling="false"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918823 7553 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918828 7553 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918834 7553 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918839 7553 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918851 7553 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 18 17:41:49.919877 master-0 kubenswrapper[7553]: I0318 17:41:49.918857 7553 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918862 7553 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918867 7553 flags.go:64] FLAG: --enable-load-reader="false"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918874 7553 flags.go:64] FLAG: --enable-server="true"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918880 7553 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918888 7553 flags.go:64] FLAG: --event-burst="100"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918894 7553 flags.go:64] FLAG: --event-qps="50"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918899 7553 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918905 7553 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918911 7553 flags.go:64] FLAG: --eviction-hard=""
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918919 7553 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918925 7553 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918930 7553 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918936 7553 flags.go:64] FLAG: --eviction-soft=""
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918943 7553 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918950 7553 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918955 7553 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918961 7553 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918966 7553 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918971 7553 flags.go:64] FLAG: --fail-swap-on="true"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918977 7553 flags.go:64] FLAG: --feature-gates=""
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918985 7553 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918991 7553 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.918997 7553 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.919003 7553 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.919009 7553 flags.go:64] FLAG: --healthz-port="10248"
Mar 18 17:41:49.920697 master-0 kubenswrapper[7553]: I0318 17:41:49.919015 7553 flags.go:64] FLAG: --help="false"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919020 7553 flags.go:64] FLAG: --hostname-override=""
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919029 7553 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919034 7553 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919041 7553 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919047 7553 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919052 7553 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919058 7553 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919063 7553 flags.go:64] FLAG: --image-service-endpoint=""
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919069 7553 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919074 7553 flags.go:64] FLAG: --kube-api-burst="100"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919080 7553 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919086 7553 flags.go:64] FLAG: --kube-api-qps="50"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919091 7553 flags.go:64] FLAG: --kube-reserved=""
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919097 7553 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919103 7553 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919109 7553 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919114 7553 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919119 7553 flags.go:64] FLAG: --lock-file=""
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919125 7553 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919143 7553 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919149 7553 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919159 7553 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919164 7553 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919169 7553 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 18 17:41:49.921441 master-0 kubenswrapper[7553]: I0318 17:41:49.919175 7553 flags.go:64] FLAG: --logging-format="text"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919181 7553 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919188 7553 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919194 7553 flags.go:64] FLAG: --manifest-url=""
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919201 7553 flags.go:64] FLAG: --manifest-url-header=""
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919211 7553 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919218 7553 flags.go:64] FLAG: --max-open-files="1000000"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919225 7553 flags.go:64] FLAG: --max-pods="110"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919232 7553 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919238 7553 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919243 7553 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919249 7553 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919255 7553 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919262 7553 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919267 7553 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919304 7553 flags.go:64] FLAG: --node-status-max-images="50"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919310 7553 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919316 7553 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919323 7553 flags.go:64] FLAG: --pod-cidr=""
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919328 7553 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919340 7553 flags.go:64] FLAG: --pod-manifest-path=""
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919346 7553 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919352 7553 flags.go:64] FLAG: --pods-per-core="0"
Mar 18 17:41:49.922316 master-0 kubenswrapper[7553]: I0318 17:41:49.919358 7553 flags.go:64] FLAG: --port="10250"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919364 7553 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919369 7553 flags.go:64] FLAG: --provider-id=""
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919374 7553 flags.go:64] FLAG: --qos-reserved=""
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919380 7553 flags.go:64] FLAG: --read-only-port="10255"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919389 7553 flags.go:64] FLAG: --register-node="true"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919395 7553 flags.go:64] FLAG: --register-schedulable="true"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919400 7553 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919410 7553 flags.go:64] FLAG: --registry-burst="10"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919415 7553 flags.go:64] FLAG: --registry-qps="5"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919420 7553 flags.go:64] FLAG: --reserved-cpus=""
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919428 7553 flags.go:64] FLAG: --reserved-memory=""
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919435 7553 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919441 7553 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919447 7553 flags.go:64] FLAG: --rotate-certificates="false"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919452 7553 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919457 7553 flags.go:64] FLAG: --runonce="false"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919462 7553 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919468 7553 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919473 7553 flags.go:64] FLAG: --seccomp-default="false"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919478 7553 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919484 7553 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919489 7553 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919495 7553 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919501 7553 flags.go:64] FLAG: --storage-driver-password="root"
Mar 18 17:41:49.923073 master-0 kubenswrapper[7553]: I0318 17:41:49.919506 7553 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919513 7553 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919518 7553 flags.go:64] FLAG: --storage-driver-user="root"
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919523 7553 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919530 7553 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919535 7553 flags.go:64] FLAG: --system-cgroups=""
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919540 7553 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919549 7553 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919554 7553 flags.go:64] FLAG: --tls-cert-file=""
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919560 7553 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919569 7553 flags.go:64] FLAG: --tls-min-version=""
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919574 7553 flags.go:64] FLAG: --tls-private-key-file=""
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919584 7553 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919590 7553 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919595 7553 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919600 7553 flags.go:64] FLAG: --v="2"
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919608 7553 flags.go:64] FLAG: --version="false"
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919616 7553 flags.go:64] FLAG: --vmodule=""
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919623 7553 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: I0318 17:41:49.919629 7553 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: W0318 17:41:49.919773 7553 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: W0318 17:41:49.919782 7553 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: W0318 17:41:49.919787 7553 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: W0318 17:41:49.919792 7553 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 17:41:49.924005 master-0 kubenswrapper[7553]: W0318 17:41:49.919797 7553 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919802 7553 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919807 7553 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919813 7553 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919818 7553 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919822 7553 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919827 7553 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919832 7553 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919837 7553 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919841 7553 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919845 7553 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919850 7553 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919855 7553 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919860 7553 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919866 7553 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919872 7553 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919877 7553 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919884 7553 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919890 7553 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 17:41:49.924920 master-0 kubenswrapper[7553]: W0318 17:41:49.919897 7553 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919907 7553 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919912 7553 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919918 7553 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919923 7553 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919928 7553 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919934 7553 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919938 7553 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919943 7553 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919947 7553 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919951 7553 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919956 7553 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919960 7553 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919964 7553 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919968 7553 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919972 7553 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919976 7553 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919981 7553 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919985 7553 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919989 7553 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919994 7553 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 17:41:49.925577 master-0 kubenswrapper[7553]: W0318 17:41:49.919998 7553 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920002 7553 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920007 7553 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920011 7553 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920015 7553 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920021 7553 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920025 7553 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920030 7553 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920034 7553 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920040 7553 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920044 7553 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920048 7553 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920056 7553 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920061 7553 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920065 7553 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920069 7553 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920074 7553 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920078 7553 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920083 7553 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 17:41:49.926488 master-0 kubenswrapper[7553]: W0318 17:41:49.920089 7553 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 17:41:49.927178 master-0 kubenswrapper[7553]: W0318 17:41:49.920094 7553 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 17:41:49.927178 master-0 kubenswrapper[7553]: W0318 17:41:49.920099 7553 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 17:41:49.927178 master-0 kubenswrapper[7553]: W0318 17:41:49.920105 7553 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 17:41:49.927178 master-0 kubenswrapper[7553]: W0318 17:41:49.920115 7553 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 17:41:49.927178 master-0 kubenswrapper[7553]: W0318 17:41:49.920120 7553 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 17:41:49.927178 master-0 kubenswrapper[7553]: W0318 17:41:49.920124 7553 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 17:41:49.927178 master-0 kubenswrapper[7553]: W0318 17:41:49.920129 7553 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 17:41:49.927178 master-0 kubenswrapper[7553]: W0318 17:41:49.920134 7553 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 17:41:49.927178 master-0 kubenswrapper[7553]: I0318 17:41:49.920152 7553 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 17:41:49.933536 master-0 kubenswrapper[7553]: I0318 17:41:49.933472 7553 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 18 17:41:49.933536 master-0 kubenswrapper[7553]: I0318 17:41:49.933527 7553 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 18 17:41:49.933689 master-0 kubenswrapper[7553]: W0318 17:41:49.933628 7553 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 17:41:49.933689 master-0 kubenswrapper[7553]: W0318 17:41:49.933647 7553 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 17:41:49.933689 master-0 kubenswrapper[7553]: W0318 17:41:49.933656 7553 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 17:41:49.933689 master-0 kubenswrapper[7553]: W0318 17:41:49.933662 7553 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 17:41:49.933689 master-0 kubenswrapper[7553]: W0318 17:41:49.933668 7553 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 17:41:49.933689 master-0 kubenswrapper[7553]: W0318 17:41:49.933674 7553 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 17:41:49.933689 master-0 kubenswrapper[7553]: W0318 17:41:49.933679 7553 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 17:41:49.933689 master-0 kubenswrapper[7553]: W0318 17:41:49.933683 7553 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 17:41:49.933689 master-0 kubenswrapper[7553]: W0318 17:41:49.933689 7553 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 18 17:41:49.933689 master-0 kubenswrapper[7553]: W0318 17:41:49.933697 7553 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 17:41:49.933689 master-0 kubenswrapper[7553]: W0318 17:41:49.933704 7553 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 17:41:49.933689 master-0 kubenswrapper[7553]: W0318 17:41:49.933711 7553 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933717 7553 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933724 7553 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933729 7553 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933733 7553 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933738 7553 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933742 7553 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933747 7553 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933752 7553 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933756 7553 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933761 7553 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 17:41:49.934046 master-0 
kubenswrapper[7553]: W0318 17:41:49.933766 7553 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933771 7553 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933778 7553 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933783 7553 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933789 7553 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933794 7553 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933799 7553 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933804 7553 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933808 7553 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 17:41:49.934046 master-0 kubenswrapper[7553]: W0318 17:41:49.933814 7553 feature_gate.go:330] unrecognized feature gate: Example Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933819 7553 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933825 7553 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933830 7553 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933835 7553 feature_gate.go:330] unrecognized feature 
gate: MultiArchInstallGCP Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933840 7553 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933844 7553 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933848 7553 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933852 7553 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933858 7553 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933863 7553 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933870 7553 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933876 7553 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933881 7553 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933887 7553 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933892 7553 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933897 7553 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933901 7553 feature_gate.go:330] unrecognized feature gate: 
OpenShiftPodSecurityAdmission Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933906 7553 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933911 7553 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 17:41:49.934689 master-0 kubenswrapper[7553]: W0318 17:41:49.933915 7553 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933919 7553 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933924 7553 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933928 7553 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933932 7553 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933936 7553 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933940 7553 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933945 7553 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933949 7553 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933953 7553 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933958 7553 feature_gate.go:330] unrecognized feature gate: GatewayAPI 
Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933962 7553 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933966 7553 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933971 7553 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933976 7553 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933981 7553 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933986 7553 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933992 7553 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.933998 7553 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.934002 7553 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 17:41:49.935383 master-0 kubenswrapper[7553]: W0318 17:41:49.934007 7553 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 17:41:49.935984 master-0 kubenswrapper[7553]: I0318 17:41:49.934016 7553 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 18 17:41:49.935984 master-0 kubenswrapper[7553]: W0318 17:41:49.934177 7553 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 17:41:49.935984 master-0 kubenswrapper[7553]: W0318 17:41:49.934190 7553 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 17:41:49.935984 master-0 kubenswrapper[7553]: W0318 17:41:49.934195 7553 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 17:41:49.935984 master-0 kubenswrapper[7553]: W0318 17:41:49.934203 7553 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 17:41:49.935984 master-0 kubenswrapper[7553]: W0318 17:41:49.934209 7553 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 17:41:49.935984 master-0 kubenswrapper[7553]: W0318 17:41:49.934214 7553 feature_gate.go:330] unrecognized feature gate: 
OVNObservability Mar 18 17:41:49.935984 master-0 kubenswrapper[7553]: W0318 17:41:49.934218 7553 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 17:41:49.935984 master-0 kubenswrapper[7553]: W0318 17:41:49.934224 7553 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 18 17:41:49.935984 master-0 kubenswrapper[7553]: W0318 17:41:49.934231 7553 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 17:41:49.935984 master-0 kubenswrapper[7553]: W0318 17:41:49.934237 7553 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 17:41:49.935984 master-0 kubenswrapper[7553]: W0318 17:41:49.934242 7553 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 17:41:49.935984 master-0 kubenswrapper[7553]: W0318 17:41:49.934247 7553 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 17:41:49.935984 master-0 kubenswrapper[7553]: W0318 17:41:49.934251 7553 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 17:41:49.935984 master-0 kubenswrapper[7553]: W0318 17:41:49.934257 7553 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934263 7553 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934324 7553 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934332 7553 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934337 7553 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934342 7553 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB 
Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934346 7553 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934351 7553 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934356 7553 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934361 7553 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934365 7553 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934370 7553 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934374 7553 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934414 7553 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934422 7553 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934427 7553 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934432 7553 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934437 7553 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934442 7553 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 17:41:49.936465 
master-0 kubenswrapper[7553]: W0318 17:41:49.934447 7553 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 17:41:49.936465 master-0 kubenswrapper[7553]: W0318 17:41:49.934451 7553 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934458 7553 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934494 7553 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934501 7553 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934507 7553 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934515 7553 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934522 7553 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934527 7553 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934532 7553 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934537 7553 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934542 7553 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934547 7553 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934577 7553 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934584 7553 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934589 7553 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934595 7553 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934600 7553 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934605 7553 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934610 7553 feature_gate.go:330] unrecognized feature gate: Example Mar 18 17:41:49.936950 master-0 kubenswrapper[7553]: W0318 17:41:49.934615 7553 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934619 7553 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934624 7553 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934628 7553 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934654 7553 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934659 7553 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934663 7553 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 
17:41:49.934666 7553 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934671 7553 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934675 7553 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934679 7553 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934682 7553 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934686 7553 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934691 7553 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934695 7553 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934700 7553 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934704 7553 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934709 7553 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934736 7553 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 17:41:49.937619 master-0 kubenswrapper[7553]: W0318 17:41:49.934741 7553 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 17:41:49.938073 master-0 kubenswrapper[7553]: I0318 17:41:49.934750 7553 feature_gate.go:386] feature gates: 
{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 18 17:41:49.938073 master-0 kubenswrapper[7553]: I0318 17:41:49.935041 7553 server.go:940] "Client rotation is on, will bootstrap in background" Mar 18 17:41:49.938073 master-0 kubenswrapper[7553]: I0318 17:41:49.937254 7553 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Mar 18 17:41:49.938073 master-0 kubenswrapper[7553]: I0318 17:41:49.937399 7553 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 18 17:41:49.938073 master-0 kubenswrapper[7553]: I0318 17:41:49.937718 7553 server.go:997] "Starting client certificate rotation" Mar 18 17:41:49.938073 master-0 kubenswrapper[7553]: I0318 17:41:49.937732 7553 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 18 17:41:49.938073 master-0 kubenswrapper[7553]: I0318 17:41:49.937963 7553 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 17:31:47 +0000 UTC, rotation deadline is 2026-03-19 12:16:38.803548477 +0000 UTC Mar 18 17:41:49.938254 master-0 kubenswrapper[7553]: I0318 17:41:49.938085 7553 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h34m48.865466442s for next certificate rotation Mar 18 17:41:49.938728 master-0 kubenswrapper[7553]: I0318 17:41:49.938677 7553 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 17:41:49.941375 master-0 kubenswrapper[7553]: I0318 17:41:49.941333 7553 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 17:41:49.950787 master-0 kubenswrapper[7553]: I0318 17:41:49.950738 7553 log.go:25] "Validated CRI v1 runtime API" Mar 18 17:41:49.953695 master-0 kubenswrapper[7553]: I0318 17:41:49.953672 7553 log.go:25] "Validated CRI v1 image API" Mar 18 17:41:49.954804 master-0 kubenswrapper[7553]: I0318 17:41:49.954781 7553 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 18 17:41:49.961821 master-0 kubenswrapper[7553]: I0318 17:41:49.961761 7553 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 fad39e74-417f-48de-99cb-6a377eb68dd8:/dev/vda3] Mar 18 17:41:49.962148 master-0 kubenswrapper[7553]: I0318 17:41:49.961808 7553 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1add5afbf418952e0016f7866a470207154a949d28966174c8a7f5fa79ba0e1f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1add5afbf418952e0016f7866a470207154a949d28966174c8a7f5fa79ba0e1f/userdata/shm major:0 minor:134 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3850c530da1325c13b135240c71869228656f1ceff63510ab0a98443cee54a55/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3850c530da1325c13b135240c71869228656f1ceff63510ab0a98443cee54a55/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/39f34c1f903429d7c69072e5211db003fe4dc2847c946a6e7e2b74d4bd2e8ac8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/39f34c1f903429d7c69072e5211db003fe4dc2847c946a6e7e2b74d4bd2e8ac8/userdata/shm major:0 minor:216 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3d376cebaacd129d547a2b1f5d7c73be7200c80d9de53fd252db3ff4f06f931e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3d376cebaacd129d547a2b1f5d7c73be7200c80d9de53fd252db3ff4f06f931e/userdata/shm major:0 minor:139 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4c05ce2500dc59522a6a15e9d7a181f449cb0590dccc6ae6225c9f2d2a528378/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4c05ce2500dc59522a6a15e9d7a181f449cb0590dccc6ae6225c9f2d2a528378/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/506ad6c80a1b7c5f38a2826db8ee7d4115a8417001018ebd69e1309cf067fc6a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/506ad6c80a1b7c5f38a2826db8ee7d4115a8417001018ebd69e1309cf067fc6a/userdata/shm major:0 
minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5d787dbd681a8506a724fcd5492ed061edac8e14293b27fcf81a68f92d0df82e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5d787dbd681a8506a724fcd5492ed061edac8e14293b27fcf81a68f92d0df82e/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/62f87c779c80aac58d08d6114e2c8cc2c2974d823d9538d2de8360d3c4243057/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/62f87c779c80aac58d08d6114e2c8cc2c2974d823d9538d2de8360d3c4243057/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/681e9cfa9d99b6787480ff89127df11d81327ab93296d6efacd157b94bbfa393/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/681e9cfa9d99b6787480ff89127df11d81327ab93296d6efacd157b94bbfa393/userdata/shm major:0 minor:269 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6855c26bf134f973aca5b753cd9252cc1f86b218f035870b1dab49845cbadb56/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6855c26bf134f973aca5b753cd9252cc1f86b218f035870b1dab49845cbadb56/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6a2b235f936941b3155c2b3d3485ca14ee2d3465fd9996759d1380c11160d84a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6a2b235f936941b3155c2b3d3485ca14ee2d3465fd9996759d1380c11160d84a/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6a85b3ee12aea7b46bda118fb48d0b8760d887f0c07b29fb0b4386fa0f1ccc35/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6a85b3ee12aea7b46bda118fb48d0b8760d887f0c07b29fb0b4386fa0f1ccc35/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/6bd8b74e410d81f6dbc5c2f014e72715199a5fa6c057d771fdb8890689635805/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6bd8b74e410d81f6dbc5c2f014e72715199a5fa6c057d771fdb8890689635805/userdata/shm major:0 minor:262 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8d76a48b181c0cd15d1de5c39a3bc3d9f330bf1dff375bce677cfee095393ae6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8d76a48b181c0cd15d1de5c39a3bc3d9f330bf1dff375bce677cfee095393ae6/userdata/shm major:0 minor:247 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/95d9ed6b43dde926cc6bff2e2109470565941e7ec534301d19b34350a3fd9914/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95d9ed6b43dde926cc6bff2e2109470565941e7ec534301d19b34350a3fd9914/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9c6ba19a43312e7426d156208bc0c31b36ee526eb8006e7186d5ea94923d4e9f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9c6ba19a43312e7426d156208bc0c31b36ee526eb8006e7186d5ea94923d4e9f/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a06a3f0fb54d1869684741c01721cbf6af520d75473205b84e908f306a368b3a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a06a3f0fb54d1869684741c01721cbf6af520d75473205b84e908f306a368b3a/userdata/shm major:0 minor:257 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a94ad7630ca05ccfa8a345aad202de39848166c994875cfad1d5875137f9cf66/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a94ad7630ca05ccfa8a345aad202de39848166c994875cfad1d5875137f9cf66/userdata/shm major:0 minor:115 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b4db07afd1a03d8c1456d9bd3e2fc4e66947bcaa942aef9864e3ed3e54889795/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b4db07afd1a03d8c1456d9bd3e2fc4e66947bcaa942aef9864e3ed3e54889795/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b5e733421a5534241a408f87a9d1282c96549651c461bd5bf9e9c1999c97d9e5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b5e733421a5534241a408f87a9d1282c96549651c461bd5bf9e9c1999c97d9e5/userdata/shm major:0 minor:255 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d451cc909e96cb90161ef2054b945e5cb54cff4fe2886dd65033c87b6a8fe884/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d451cc909e96cb90161ef2054b945e5cb54cff4fe2886dd65033c87b6a8fe884/userdata/shm major:0 minor:245 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d5c2063a6c515ba48297abcc083d96a0ee10588d1979ea68a5e48cdf4d96c90e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d5c2063a6c515ba48297abcc083d96a0ee10588d1979ea68a5e48cdf4d96c90e/userdata/shm major:0 minor:277 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/db5aec9e61cde7bec4f42fa13ed7d132af8e2baca532c12d1638296fcf06dd34/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/db5aec9e61cde7bec4f42fa13ed7d132af8e2baca532c12d1638296fcf06dd34/userdata/shm major:0 minor:104 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e03411b7c2bf8367c968ed1adc09d9fecafd420d1122805ea765d466352e23a3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e03411b7c2bf8367c968ed1adc09d9fecafd420d1122805ea765d466352e23a3/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~projected/kube-api-access-tnknt:{mountpoint:/var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~projected/kube-api-access-tnknt major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~secret/etcd-client major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0b9ff55a-73fb-473f-b406-1f8b6cffdb89/volumes/kubernetes.io~projected/kube-api-access-2tvgq:{mountpoint:/var/lib/kubelet/pods/0b9ff55a-73fb-473f-b406-1f8b6cffdb89/volumes/kubernetes.io~projected/kube-api-access-2tvgq major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0b9ff55a-73fb-473f-b406-1f8b6cffdb89/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0b9ff55a-73fb-473f-b406-1f8b6cffdb89/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14a0661b-7bde-4e22-a9a9-5e3fb24df77f/volumes/kubernetes.io~projected/kube-api-access-2pqww:{mountpoint:/var/lib/kubelet/pods/14a0661b-7bde-4e22-a9a9-5e3fb24df77f/volumes/kubernetes.io~projected/kube-api-access-2pqww major:0 minor:92 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14a0661b-7bde-4e22-a9a9-5e3fb24df77f/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/14a0661b-7bde-4e22-a9a9-5e3fb24df77f/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1d969530-c138-4fb7-9bfe-0825be66c009/volumes/kubernetes.io~projected/kube-api-access-cd868:{mountpoint:/var/lib/kubelet/pods/1d969530-c138-4fb7-9bfe-0825be66c009/volumes/kubernetes.io~projected/kube-api-access-cd868 major:0 minor:275 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/26575d68-0488-4dfa-a5d0-5016e481dba6/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/26575d68-0488-4dfa-a5d0-5016e481dba6/volumes/kubernetes.io~projected/kube-api-access major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/26575d68-0488-4dfa-a5d0-5016e481dba6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/26575d68-0488-4dfa-a5d0-5016e481dba6/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/37b3753f-bf4f-4a9e-a4a8-d58296bada79/volumes/kubernetes.io~projected/kube-api-access-zwlxb:{mountpoint:/var/lib/kubelet/pods/37b3753f-bf4f-4a9e-a4a8-d58296bada79/volumes/kubernetes.io~projected/kube-api-access-zwlxb major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a3a6c2c-78e7-41f3-acff-20173cbc012a/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/3a3a6c2c-78e7-41f3-acff-20173cbc012a/volumes/kubernetes.io~projected/kube-api-access major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a3a6c2c-78e7-41f3-acff-20173cbc012a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3a3a6c2c-78e7-41f3-acff-20173cbc012a/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a4f94f3-d63a-4869-b723-ae9637610b4b/volumes/kubernetes.io~projected/kube-api-access-hgnz6:{mountpoint:/var/lib/kubelet/pods/5a4f94f3-d63a-4869-b723-ae9637610b4b/volumes/kubernetes.io~projected/kube-api-access-hgnz6 major:0 minor:123 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5b0e38f3-3ab5-4519-86a6-68003deb94da/volumes/kubernetes.io~projected/kube-api-access-grnqn:{mountpoint:/var/lib/kubelet/pods/5b0e38f3-3ab5-4519-86a6-68003deb94da/volumes/kubernetes.io~projected/kube-api-access-grnqn major:0 minor:99 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6f26e239-2988-4faa-bc1d-24b15b95b7f1/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/6f26e239-2988-4faa-bc1d-24b15b95b7f1/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6f26e239-2988-4faa-bc1d-24b15b95b7f1/volumes/kubernetes.io~projected/kube-api-access-5sl7p:{mountpoint:/var/lib/kubelet/pods/6f26e239-2988-4faa-bc1d-24b15b95b7f1/volumes/kubernetes.io~projected/kube-api-access-5sl7p major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b94e08c-7944-445e-bfb7-6c7c14880c65/volumes/kubernetes.io~projected/kube-api-access-g4zcv:{mountpoint:/var/lib/kubelet/pods/7b94e08c-7944-445e-bfb7-6c7c14880c65/volumes/kubernetes.io~projected/kube-api-access-g4zcv major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b94e08c-7944-445e-bfb7-6c7c14880c65/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/7b94e08c-7944-445e-bfb7-6c7c14880c65/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7c6694a8-ccd0-491b-9f21-215450f6ce67/volumes/kubernetes.io~projected/kube-api-access-mrdqg:{mountpoint:/var/lib/kubelet/pods/7c6694a8-ccd0-491b-9f21-215450f6ce67/volumes/kubernetes.io~projected/kube-api-access-mrdqg major:0 minor:268 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e64a377-f497-4416-8f22-d5c7f52e0b65/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/7e64a377-f497-4416-8f22-d5c7f52e0b65/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:234 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7e64a377-f497-4416-8f22-d5c7f52e0b65/volumes/kubernetes.io~projected/kube-api-access-sclm5:{mountpoint:/var/lib/kubelet/pods/7e64a377-f497-4416-8f22-d5c7f52e0b65/volumes/kubernetes.io~projected/kube-api-access-sclm5 major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311/volumes/kubernetes.io~projected/kube-api-access-clm4b:{mountpoint:/var/lib/kubelet/pods/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311/volumes/kubernetes.io~projected/kube-api-access-clm4b major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9875ed82-813c-483d-8471-8f9b74b774ee/volumes/kubernetes.io~projected/kube-api-access-76j8w:{mountpoint:/var/lib/kubelet/pods/9875ed82-813c-483d-8471-8f9b74b774ee/volumes/kubernetes.io~projected/kube-api-access-76j8w major:0 minor:143 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9875ed82-813c-483d-8471-8f9b74b774ee/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/9875ed82-813c-483d-8471-8f9b74b774ee/volumes/kubernetes.io~secret/webhook-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volumes/kubernetes.io~projected/kube-api-access-9lwsm:{mountpoint:/var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volumes/kubernetes.io~projected/kube-api-access-9lwsm major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/99e215da-759d-4fff-af65-0fb64245fbd0/volumes/kubernetes.io~projected/kube-api-access-n8k5q:{mountpoint:/var/lib/kubelet/pods/99e215da-759d-4fff-af65-0fb64245fbd0/volumes/kubernetes.io~projected/kube-api-access-n8k5q major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99e215da-759d-4fff-af65-0fb64245fbd0/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/99e215da-759d-4fff-af65-0fb64245fbd0/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/volumes/kubernetes.io~projected/kube-api-access-l5tw2:{mountpoint:/var/lib/kubelet/pods/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/volumes/kubernetes.io~projected/kube-api-access-l5tw2 major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9b424d6c-7440-4c98-ac19-2d0642c696fd/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/9b424d6c-7440-4c98-ac19-2d0642c696fd/volumes/kubernetes.io~projected/kube-api-access major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9b424d6c-7440-4c98-ac19-2d0642c696fd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9b424d6c-7440-4c98-ac19-2d0642c696fd/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a02399de-859b-45b1-9b00-18a08f285f39/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/a02399de-859b-45b1-9b00-18a08f285f39/volumes/kubernetes.io~projected/kube-api-access major:0 minor:91 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e/volumes/kubernetes.io~projected/kube-api-access-dlcnh:{mountpoint:/var/lib/kubelet/pods/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e/volumes/kubernetes.io~projected/kube-api-access-dlcnh major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b1352cc7-4099-44c5-9c31-8259fb783bc7/volumes/kubernetes.io~projected/kube-api-access-9pp5f:{mountpoint:/var/lib/kubelet/pods/b1352cc7-4099-44c5-9c31-8259fb783bc7/volumes/kubernetes.io~projected/kube-api-access-9pp5f major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c087ce06-a16b-41f4-ba93-8fccdee09003/volumes/kubernetes.io~projected/kube-api-access-789k6:{mountpoint:/var/lib/kubelet/pods/c087ce06-a16b-41f4-ba93-8fccdee09003/volumes/kubernetes.io~projected/kube-api-access-789k6 major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c087ce06-a16b-41f4-ba93-8fccdee09003/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c087ce06-a16b-41f4-ba93-8fccdee09003/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c355c750-ae2f-49fa-9a16-8fb4f688853e/volumes/kubernetes.io~projected/kube-api-access-zfnqp:{mountpoint:/var/lib/kubelet/pods/c355c750-ae2f-49fa-9a16-8fb4f688853e/volumes/kubernetes.io~projected/kube-api-access-zfnqp major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c355c750-ae2f-49fa-9a16-8fb4f688853e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c355c750-ae2f-49fa-9a16-8fb4f688853e/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cb522b02-0b93-4711-9041-566daa06b95a/volumes/kubernetes.io~projected/kube-api-access-fk59q:{mountpoint:/var/lib/kubelet/pods/cb522b02-0b93-4711-9041-566daa06b95a/volumes/kubernetes.io~projected/kube-api-access-fk59q major:0 minor:251 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/cb522b02-0b93-4711-9041-566daa06b95a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/cb522b02-0b93-4711-9041-566daa06b95a/volumes/kubernetes.io~secret/serving-cert major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce5831a6-5a8d-4cda-9299-5d86437bcab2/volumes/kubernetes.io~projected/kube-api-access-756j8:{mountpoint:/var/lib/kubelet/pods/ce5831a6-5a8d-4cda-9299-5d86437bcab2/volumes/kubernetes.io~projected/kube-api-access-756j8 major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d26d4515-391e-41a5-8c82-1b2b8a375662/volumes/kubernetes.io~projected/kube-api-access-bm8jj:{mountpoint:/var/lib/kubelet/pods/d26d4515-391e-41a5-8c82-1b2b8a375662/volumes/kubernetes.io~projected/kube-api-access-bm8jj major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dba5f8d7-4d25-42b5-9c58-813221bf96bb/volumes/kubernetes.io~projected/kube-api-access-lmsm4:{mountpoint:/var/lib/kubelet/pods/dba5f8d7-4d25-42b5-9c58-813221bf96bb/volumes/kubernetes.io~projected/kube-api-access-lmsm4 major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e73f2834-c56c-4cef-ac3c-2317e9a4324c/volumes/kubernetes.io~projected/kube-api-access-qwps9:{mountpoint:/var/lib/kubelet/pods/e73f2834-c56c-4cef-ac3c-2317e9a4324c/volumes/kubernetes.io~projected/kube-api-access-qwps9 major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9e04572-1425-440e-9869-6deef05e13e3/volumes/kubernetes.io~projected/kube-api-access-t92bz:{mountpoint:/var/lib/kubelet/pods/e9e04572-1425-440e-9869-6deef05e13e3/volumes/kubernetes.io~projected/kube-api-access-t92bz major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7ff61c7-32d1-4407-a792-8e22bb4d50f9/volumes/kubernetes.io~projected/kube-api-access-nf82n:{mountpoint:/var/lib/kubelet/pods/f7ff61c7-32d1-4407-a792-8e22bb4d50f9/volumes/kubernetes.io~projected/kube-api-access-nf82n major:0 minor:230 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f7ff61c7-32d1-4407-a792-8e22bb4d50f9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f7ff61c7-32d1-4407-a792-8e22bb4d50f9/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fea7b899-fde4-4463-9520-4d433a8ebe21/volumes/kubernetes.io~projected/kube-api-access-ts9b9:{mountpoint:/var/lib/kubelet/pods/fea7b899-fde4-4463-9520-4d433a8ebe21/volumes/kubernetes.io~projected/kube-api-access-ts9b9 major:0 minor:100 fsType:tmpfs blockSize:0} overlay_0-106:{mountpoint:/var/lib/containers/storage/overlay/3e5a3adaba6a56dd4426c71040fc587e60bbdde94919e0abd38918058afc3893/merged major:0 minor:106 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/ab4b5f0ed4b684d8f0b363dac491b853fa3da515dfb9ddbed84b9783f3b0d424/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/369129e17811005390480eedea56c06a5fcad23b5b815fb94f37dafde6bc4a8e/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-117:{mountpoint:/var/lib/containers/storage/overlay/230b845cd22cdbae440715b993ceacf024c3eb27456ff73c3f13cb327dc2a15c/merged major:0 minor:117 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/048f03f3200551b0fdb293888e2cfab6b47ba228d07864c32879e47fd544d31a/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-128:{mountpoint:/var/lib/containers/storage/overlay/d7b32d6f52c6e21b8d4c124367bf9bc94d1ec8d01eba6c8b154fb9d4b6ff252f/merged major:0 minor:128 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/ba821875d8f2d09c67960b628bd80ba7295c7a86307dd559a993d655ad74695b/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/938bb47b9676cc4c014ae3c6218e7b9d004161e8536a78c4dd5ba4b9cf1c0ff9/merged major:0 minor:136 fsType:overlay blockSize:0} 
overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/9de8a38afff62807154665d68e5d53e978e24142bf5a081c7f63e366cb1fa26e/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-144:{mountpoint:/var/lib/containers/storage/overlay/72c94b4d3c9098bc9a42db251d40eb350f0a2f91869b1b53620fc92337547242/merged major:0 minor:144 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/4995490c1d892df79ba3f9ab0ff04542ab70e207b8943f4d819e6ce7253d6766/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/b0d49126a79fb2c47fe3b0a3891f38cbf46fdc19604cdf07665fb7961850bf8c/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-155:{mountpoint:/var/lib/containers/storage/overlay/f0d5f8d9a4b06d70a38e9e9e9eefa61585b062a6247cac3700ceb11eccb3997c/merged major:0 minor:155 fsType:overlay blockSize:0} overlay_0-164:{mountpoint:/var/lib/containers/storage/overlay/3e524e4fca6a121b14d4862ca00042bbf168d85be6c414d6b49d27bebb363917/merged major:0 minor:164 fsType:overlay blockSize:0} overlay_0-169:{mountpoint:/var/lib/containers/storage/overlay/dbda68c3fa2850449d5f0a63bd24f1aa1b17c6d15e3e73f05b64faeb598ea167/merged major:0 minor:169 fsType:overlay blockSize:0} overlay_0-171:{mountpoint:/var/lib/containers/storage/overlay/e0a8b65888cecdba28af337c7264e3253d10cb1831f887836211476cfdeb23c5/merged major:0 minor:171 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/e39a1115d62f46a98f84dab0fae5939bd3450f50111ed27cab088b0bb23f9bcc/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/397b8335ce595c480c2fb98072849c0a4f2d4f9e31c706fdc8799c3ccbc2bdc6/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/afde5f7cb1b5e67175567ec51589b957bf63b638658fbf75fe266c74f183da1f/merged major:0 minor:189 fsType:overlay blockSize:0} 
overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/ca75afc0abe50c3c409a3dd7b3ff5d29c918e7940798e4bdc799eeb4590e3c63/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/51810ee05c7308617d1b9228d22bd2f2a94d94f05c0862a99ea75abcd1e9a068/merged major:0 minor:199 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/1b314302766293157882743f84a5f315e7d6d6e6a6d7e21ee0b0dc6bc750895e/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-242:{mountpoint:/var/lib/containers/storage/overlay/a7a1d3d76ac2b826816ae765ff55db0ff84190d30a2fc6a06f084db3e17661c7/merged major:0 minor:242 fsType:overlay blockSize:0} overlay_0-273:{mountpoint:/var/lib/containers/storage/overlay/be29a67b0c658ff064407ca06bef7e2258154a4e31b0977a9988164b4a74a969/merged major:0 minor:273 fsType:overlay blockSize:0} overlay_0-280:{mountpoint:/var/lib/containers/storage/overlay/05327422cee626a7c5414860ed297136bde63e6b55ef9a6c141a037c71090962/merged major:0 minor:280 fsType:overlay blockSize:0} overlay_0-282:{mountpoint:/var/lib/containers/storage/overlay/f6765a5e8ba7b22d75c7a3e1cc3b26d4c166ad3137715f64a1a55cb5cb6b56a6/merged major:0 minor:282 fsType:overlay blockSize:0} overlay_0-284:{mountpoint:/var/lib/containers/storage/overlay/14d5fa9bb70c1e978ec7103419d0cb59559bf114aad1c80282385ce045275da5/merged major:0 minor:284 fsType:overlay blockSize:0} overlay_0-286:{mountpoint:/var/lib/containers/storage/overlay/eb889e1eadd9b0335f01c2dcae987a9508309e056586b6599fc5a93f332952d3/merged major:0 minor:286 fsType:overlay blockSize:0} overlay_0-288:{mountpoint:/var/lib/containers/storage/overlay/4f3403bbde21c93358ba87e3a6eb0668028009a625947e5b1f47ea684323422c/merged major:0 minor:288 fsType:overlay blockSize:0} overlay_0-290:{mountpoint:/var/lib/containers/storage/overlay/f51f0eb157b480f321da334a35d40bd5a4b33933eada48d487740f6561b9afce/merged major:0 minor:290 fsType:overlay blockSize:0} 
overlay_0-292:{mountpoint:/var/lib/containers/storage/overlay/77f0935936d9573ce9046d94b3fc61c2c6788bb2b8f3bcbbb3cac5607b01c69b/merged major:0 minor:292 fsType:overlay blockSize:0} overlay_0-294:{mountpoint:/var/lib/containers/storage/overlay/6f0e887f9a1c796c7618ad7ba6babea82030c85225510ab9a3bdbd3edcc8a9cb/merged major:0 minor:294 fsType:overlay blockSize:0} overlay_0-296:{mountpoint:/var/lib/containers/storage/overlay/330b317d85d787d8bec4f1d97d1ef090c4b38d0876a653bdd29641a33a1dc672/merged major:0 minor:296 fsType:overlay blockSize:0} overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/353621adc635806ddeadb892b03ed5d02c2d3f9e6a6aafca04b09f694565fab5/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-300:{mountpoint:/var/lib/containers/storage/overlay/e92ce1bc425864ea9580a8b1e0b3c9f8f24f633a0ca77182575d1ea9182046cf/merged major:0 minor:300 fsType:overlay blockSize:0} overlay_0-302:{mountpoint:/var/lib/containers/storage/overlay/ed74f4f46f9e8b2d3077870db2d79dd0b7360627a8ea2addc63302756effab1e/merged major:0 minor:302 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/c7e2403561ddb79e9f52886f555ccb0da14339e83ae7f98fed1049a89a94b5cf/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/73312ec6085d6bd0e1eb4140c64c831823862f5444576a408658f0a5826d7f8b/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/de4da002ed304645c3f7ec4483af48389ad73d6dca7f08e56d03991c21383076/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/79ed0ba7437157a00acb54b4f4f7c7ebc9d5e59fa031cc5c2e664cddd2eea6ad/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/eab16f4a5dff9dc84531f67a6f6ee083580a09024b6b89772e495794e72f8b3e/merged major:0 minor:60 fsType:overlay blockSize:0} 
overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/72fd8ac34e066ad02e6544b8be7d86da1ddc73baccf1720173eb75633b3ee9f8/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/f46a05e69e181ca3044be598dd3a835b33bee4dba3d8991c730338644f2c6e6e/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/69fa5b9fff8ce2fd7ef03db2d52c69162d0d545f882aac5c09352c350f1c70c1/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/e136bde14d574c929a1898021f71c59728467ca8f641e32fa3296d46589cbbb5/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/979352e6c1a6e1a17d6fcd6b11d39badfd3c0612eae9d6e980b7054d70024857/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/10fce17863be41a4dc42c5fbdaca9588de6055aae6f18abd94cb40942d7c3577/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/33e2a4922c3c977952e736e288d0560ebb48df3351a279eb0a4847bf0a220efe/merged major:0 minor:89 fsType:overlay blockSize:0}] Mar 18 17:41:49.986998 master-0 kubenswrapper[7553]: I0318 17:41:49.984975 7553 manager.go:217] Machine: {Timestamp:2026-03-18 17:41:49.984009735 +0000 UTC m=+0.129844428 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:6ad73e7bdc944176a9641991d01dd6fa SystemUUID:6ad73e7b-dc94-4176-a964-1991d01dd6fa BootID:00a5b6c0-ddc6-4fc3-aaa2-1f9950d0acc4 Filesystems:[{Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 
DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9b424d6c-7440-4c98-ac19-2d0642c696fd/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-128 DeviceMajor:0 DeviceMinor:128 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/37b3753f-bf4f-4a9e-a4a8-d58296bada79/volumes/kubernetes.io~projected/kube-api-access-zwlxb DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d451cc909e96cb90161ef2054b945e5cb54cff4fe2886dd65033c87b6a8fe884/userdata/shm DeviceMajor:0 DeviceMinor:245 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/62f87c779c80aac58d08d6114e2c8cc2c2974d823d9538d2de8360d3c4243057/userdata/shm DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4c05ce2500dc59522a6a15e9d7a181f449cb0590dccc6ae6225c9f2d2a528378/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5b0e38f3-3ab5-4519-86a6-68003deb94da/volumes/kubernetes.io~projected/kube-api-access-grnqn DeviceMajor:0 DeviceMinor:99 Capacity:32475529216 
Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-155 DeviceMajor:0 DeviceMinor:155 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c087ce06-a16b-41f4-ba93-8fccdee09003/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-300 DeviceMajor:0 DeviceMinor:300 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-169 DeviceMajor:0 DeviceMinor:169 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-280 DeviceMajor:0 DeviceMinor:280 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/506ad6c80a1b7c5f38a2826db8ee7d4115a8417001018ebd69e1309cf067fc6a/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d5c2063a6c515ba48297abcc083d96a0ee10588d1979ea68a5e48cdf4d96c90e/userdata/shm DeviceMajor:0 DeviceMinor:277 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0b9ff55a-73fb-473f-b406-1f8b6cffdb89/volumes/kubernetes.io~projected/kube-api-access-2tvgq DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d26d4515-391e-41a5-8c82-1b2b8a375662/volumes/kubernetes.io~projected/kube-api-access-bm8jj DeviceMajor:0 DeviceMinor:265 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6855c26bf134f973aca5b753cd9252cc1f86b218f035870b1dab49845cbadb56/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:4108170 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/6a2b235f936941b3155c2b3d3485ca14ee2d3465fd9996759d1380c11160d84a/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~projected/kube-api-access-tnknt DeviceMajor:0 DeviceMinor:239 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/681e9cfa9d99b6787480ff89127df11d81327ab93296d6efacd157b94bbfa393/userdata/shm DeviceMajor:0 DeviceMinor:269 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/14a0661b-7bde-4e22-a9a9-5e3fb24df77f/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-106 DeviceMajor:0 DeviceMinor:106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7b94e08c-7944-445e-bfb7-6c7c14880c65/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9875ed82-813c-483d-8471-8f9b74b774ee/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:138 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e73f2834-c56c-4cef-ac3c-2317e9a4324c/volumes/kubernetes.io~projected/kube-api-access-qwps9 DeviceMajor:0 DeviceMinor:232 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6f26e239-2988-4faa-bc1d-24b15b95b7f1/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:249 Capacity:32475529216 
Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7e64a377-f497-4416-8f22-d5c7f52e0b65/volumes/kubernetes.io~projected/kube-api-access-sclm5 DeviceMajor:0 DeviceMinor:238 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c087ce06-a16b-41f4-ba93-8fccdee09003/volumes/kubernetes.io~projected/kube-api-access-789k6 DeviceMajor:0 DeviceMinor:244 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5d787dbd681a8506a724fcd5492ed061edac8e14293b27fcf81a68f92d0df82e/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-302 DeviceMajor:0 DeviceMinor:302 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9875ed82-813c-483d-8471-8f9b74b774ee/volumes/kubernetes.io~projected/kube-api-access-76j8w DeviceMajor:0 DeviceMinor:143 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-144 DeviceMajor:0 DeviceMinor:144 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e9e04572-1425-440e-9869-6deef05e13e3/volumes/kubernetes.io~projected/kube-api-access-t92bz DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8d76a48b181c0cd15d1de5c39a3bc3d9f330bf1dff375bce677cfee095393ae6/userdata/shm DeviceMajor:0 DeviceMinor:247 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/cb522b02-0b93-4711-9041-566daa06b95a/volumes/kubernetes.io~projected/kube-api-access-fk59q DeviceMajor:0 DeviceMinor:251 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-282 DeviceMajor:0 DeviceMinor:282 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/db5aec9e61cde7bec4f42fa13ed7d132af8e2baca532c12d1638296fcf06dd34/userdata/shm DeviceMajor:0 DeviceMinor:104 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1add5afbf418952e0016f7866a470207154a949d28966174c8a7f5fa79ba0e1f/userdata/shm DeviceMajor:0 DeviceMinor:134 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/39f34c1f903429d7c69072e5211db003fe4dc2847c946a6e7e2b74d4bd2e8ac8/userdata/shm DeviceMajor:0 DeviceMinor:216 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/volumes/kubernetes.io~projected/kube-api-access-l5tw2 DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9c6ba19a43312e7426d156208bc0c31b36ee526eb8006e7186d5ea94923d4e9f/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3850c530da1325c13b135240c71869228656f1ceff63510ab0a98443cee54a55/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a94ad7630ca05ccfa8a345aad202de39848166c994875cfad1d5875137f9cf66/userdata/shm DeviceMajor:0 DeviceMinor:115 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:229 Capacity:32475529216 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/f7ff61c7-32d1-4407-a792-8e22bb4d50f9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/99e215da-759d-4fff-af65-0fb64245fbd0/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c355c750-ae2f-49fa-9a16-8fb4f688853e/volumes/kubernetes.io~projected/kube-api-access-zfnqp DeviceMajor:0 DeviceMinor:236 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ce5831a6-5a8d-4cda-9299-5d86437bcab2/volumes/kubernetes.io~projected/kube-api-access-756j8 DeviceMajor:0 DeviceMinor:241 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/99e215da-759d-4fff-af65-0fb64245fbd0/volumes/kubernetes.io~projected/kube-api-access-n8k5q DeviceMajor:0 DeviceMinor:250 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a02399de-859b-45b1-9b00-18a08f285f39/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:91 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-242 DeviceMajor:0 DeviceMinor:242 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a06a3f0fb54d1869684741c01721cbf6af520d75473205b84e908f306a368b3a/userdata/shm DeviceMajor:0 DeviceMinor:257 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fea7b899-fde4-4463-9520-4d433a8ebe21/volumes/kubernetes.io~projected/kube-api-access-ts9b9 DeviceMajor:0 DeviceMinor:100 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7c6694a8-ccd0-491b-9f21-215450f6ce67/volumes/kubernetes.io~projected/kube-api-access-mrdqg DeviceMajor:0 DeviceMinor:268 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-290 DeviceMajor:0 DeviceMinor:290 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-294 DeviceMajor:0 DeviceMinor:294 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311/volumes/kubernetes.io~projected/kube-api-access-clm4b DeviceMajor:0 DeviceMinor:237 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/14a0661b-7bde-4e22-a9a9-5e3fb24df77f/volumes/kubernetes.io~projected/kube-api-access-2pqww DeviceMajor:0 DeviceMinor:92 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9b424d6c-7440-4c98-ac19-2d0642c696fd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/26575d68-0488-4dfa-a5d0-5016e481dba6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-296 DeviceMajor:0 DeviceMinor:296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7b94e08c-7944-445e-bfb7-6c7c14880c65/volumes/kubernetes.io~projected/kube-api-access-g4zcv DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f7ff61c7-32d1-4407-a792-8e22bb4d50f9/volumes/kubernetes.io~projected/kube-api-access-nf82n DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/26575d68-0488-4dfa-a5d0-5016e481dba6/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7e64a377-f497-4416-8f22-d5c7f52e0b65/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-273 DeviceMajor:0 DeviceMinor:273 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b4db07afd1a03d8c1456d9bd3e2fc4e66947bcaa942aef9864e3ed3e54889795/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/0b9ff55a-73fb-473f-b406-1f8b6cffdb89/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b1352cc7-4099-44c5-9c31-8259fb783bc7/volumes/kubernetes.io~projected/kube-api-access-9pp5f DeviceMajor:0 DeviceMinor:240 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dba5f8d7-4d25-42b5-9c58-813221bf96bb/volumes/kubernetes.io~projected/kube-api-access-lmsm4 DeviceMajor:0 DeviceMinor:254 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3a3a6c2c-78e7-41f3-acff-20173cbc012a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volumes/kubernetes.io~projected/kube-api-access-9lwsm DeviceMajor:0 DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c355c750-ae2f-49fa-9a16-8fb4f688853e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/cb522b02-0b93-4711-9041-566daa06b95a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-284 DeviceMajor:0 DeviceMinor:284 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-292 DeviceMajor:0 DeviceMinor:292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-164 DeviceMajor:0 DeviceMinor:164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3a3a6c2c-78e7-41f3-acff-20173cbc012a/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:259 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6a85b3ee12aea7b46bda118fb48d0b8760d887f0c07b29fb0b4386fa0f1ccc35/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3d376cebaacd129d547a2b1f5d7c73be7200c80d9de53fd252db3ff4f06f931e/userdata/shm DeviceMajor:0 DeviceMinor:139 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-171 DeviceMajor:0 DeviceMinor:171 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b5e733421a5534241a408f87a9d1282c96549651c461bd5bf9e9c1999c97d9e5/userdata/shm DeviceMajor:0 DeviceMinor:255 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-286 DeviceMajor:0 DeviceMinor:286 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e03411b7c2bf8367c968ed1adc09d9fecafd420d1122805ea765d466352e23a3/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6bd8b74e410d81f6dbc5c2f014e72715199a5fa6c057d771fdb8890689635805/userdata/shm DeviceMajor:0 DeviceMinor:262 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e/volumes/kubernetes.io~projected/kube-api-access-dlcnh DeviceMajor:0 DeviceMinor:263 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1d969530-c138-4fb7-9bfe-0825be66c009/volumes/kubernetes.io~projected/kube-api-access-cd868 DeviceMajor:0 DeviceMinor:275 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-288 DeviceMajor:0 DeviceMinor:288 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95d9ed6b43dde926cc6bff2e2109470565941e7ec534301d19b34350a3fd9914/userdata/shm DeviceMajor:0 DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-117 DeviceMajor:0 DeviceMinor:117 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5a4f94f3-d63a-4869-b723-ae9637610b4b/volumes/kubernetes.io~projected/kube-api-access-hgnz6 DeviceMajor:0 DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6f26e239-2988-4faa-bc1d-24b15b95b7f1/volumes/kubernetes.io~projected/kube-api-access-5sl7p DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true}] 
DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:3850c530da1325c MacAddress:72:a7:af:8c:b2:7d Speed:10000 Mtu:8900} {Name:39f34c1f903429d MacAddress:16:50:21:d6:25:f6 Speed:10000 Mtu:8900} {Name:5d787dbd681a850 MacAddress:72:4c:ed:3e:5b:2f Speed:10000 Mtu:8900} {Name:62f87c779c80aac MacAddress:e2:b4:4c:fa:50:aa Speed:10000 Mtu:8900} {Name:681e9cfa9d99b67 MacAddress:4a:4b:99:ce:1e:1e Speed:10000 Mtu:8900} {Name:6855c26bf134f97 MacAddress:ca:1b:2e:bf:ee:57 Speed:10000 Mtu:8900} {Name:6bd8b74e410d81f MacAddress:a6:8b:a0:6c:b2:eb Speed:10000 Mtu:8900} {Name:8d76a48b181c0cd MacAddress:fe:3b:de:c5:2e:fa Speed:10000 Mtu:8900} {Name:9c6ba19a43312e7 MacAddress:2e:56:81:01:73:02 Speed:10000 Mtu:8900} {Name:a06a3f0fb54d186 MacAddress:ce:3a:7c:2c:4a:1b Speed:10000 Mtu:8900} {Name:b5e733421a55342 MacAddress:2e:33:97:41:42:99 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:ba:7c:70:ac:0a:a4 Speed:0 Mtu:8900} {Name:d451cc909e96cb9 MacAddress:ca:e7:70:a0:07:26 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:91:e0:f5 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:ff:27:ac Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:96:16:48:af:1f:d9 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 
BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} 
{Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 18 17:41:49.986998 master-0 kubenswrapper[7553]: I0318 17:41:49.986976 7553 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 18 17:41:49.987617 master-0 kubenswrapper[7553]: I0318 17:41:49.987063 7553 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 18 17:41:49.987617 master-0 kubenswrapper[7553]: I0318 17:41:49.987519 7553 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 18 17:41:49.987726 master-0 kubenswrapper[7553]: I0318 17:41:49.987678 7553 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 18 17:41:49.987944 master-0 kubenswrapper[7553]: I0318 17:41:49.987719 7553 container_manager_linux.go:272] "Creating Container Manager object based on Node Config"
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 18 17:41:49.988034 master-0 kubenswrapper[7553]: I0318 17:41:49.987957 7553 topology_manager.go:138] "Creating topology manager with none policy"
Mar 18 17:41:49.988034 master-0 kubenswrapper[7553]: I0318 17:41:49.987969 7553 container_manager_linux.go:303] "Creating device plugin manager"
Mar 18 17:41:49.988034 master-0 kubenswrapper[7553]: I0318 17:41:49.987980 7553 manager.go:142]
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 17:41:49.988034 master-0 kubenswrapper[7553]: I0318 17:41:49.988006 7553 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 17:41:49.988315 master-0 kubenswrapper[7553]: I0318 17:41:49.988296 7553 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 17:41:49.988422 master-0 kubenswrapper[7553]: I0318 17:41:49.988403 7553 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 18 17:41:49.988504 master-0 kubenswrapper[7553]: I0318 17:41:49.988496 7553 kubelet.go:418] "Attempting to sync node with API server"
Mar 18 17:41:49.988551 master-0 kubenswrapper[7553]: I0318 17:41:49.988516 7553 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 18 17:41:49.988551 master-0 kubenswrapper[7553]: I0318 17:41:49.988536 7553 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 18 17:41:49.988551 master-0 kubenswrapper[7553]: I0318 17:41:49.988549 7553 kubelet.go:324] "Adding apiserver pod source"
Mar 18 17:41:49.988661 master-0 kubenswrapper[7553]: I0318 17:41:49.988563 7553 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 18 17:41:49.990119 master-0 kubenswrapper[7553]: I0318 17:41:49.990059 7553 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 18 17:41:49.990509 master-0 kubenswrapper[7553]: I0318 17:41:49.990444 7553 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Mar 18 17:41:49.990900 master-0 kubenswrapper[7553]: I0318 17:41:49.990865 7553 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 18 17:41:49.991110 master-0 kubenswrapper[7553]: I0318 17:41:49.991081 7553 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 18 17:41:49.991110 master-0 kubenswrapper[7553]: I0318 17:41:49.991109 7553 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 18 17:41:49.991192 master-0 kubenswrapper[7553]: I0318 17:41:49.991120 7553 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 18 17:41:49.991192 master-0 kubenswrapper[7553]: I0318 17:41:49.991139 7553 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 18 17:41:49.991192 master-0 kubenswrapper[7553]: I0318 17:41:49.991150 7553 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 18 17:41:49.991192 master-0 kubenswrapper[7553]: I0318 17:41:49.991159 7553 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 18 17:41:49.991192 master-0 kubenswrapper[7553]: I0318 17:41:49.991169 7553 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 18 17:41:49.991192 master-0 kubenswrapper[7553]: I0318 17:41:49.991178 7553 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 18 17:41:49.991192 master-0 kubenswrapper[7553]: I0318 17:41:49.991193 7553 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 18 17:41:49.991192 master-0 kubenswrapper[7553]: I0318 17:41:49.991204 7553 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 18 17:41:49.991500 master-0 kubenswrapper[7553]: I0318 17:41:49.991219 7553 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 18 17:41:49.991500 master-0 kubenswrapper[7553]: I0318 17:41:49.991238 7553 plugins.go:603] "Loaded volume plugin"
pluginName="kubernetes.io/local-volume" Mar 18 17:41:49.991500 master-0 kubenswrapper[7553]: I0318 17:41:49.991300 7553 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 18 17:41:49.991913 master-0 kubenswrapper[7553]: I0318 17:41:49.991880 7553 server.go:1280] "Started kubelet"
Mar 18 17:41:49.992185 master-0 kubenswrapper[7553]: I0318 17:41:49.992075 7553 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 18 17:41:49.992239 master-0 kubenswrapper[7553]: I0318 17:41:49.992226 7553 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 18 17:41:49.992396 master-0 kubenswrapper[7553]: I0318 17:41:49.992268 7553 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 18 17:41:49.993449 master-0 kubenswrapper[7553]: I0318 17:41:49.993355 7553 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 18 17:41:49.994456 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 18 17:41:49.997644 master-0 kubenswrapper[7553]: I0318 17:41:49.997611 7553 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 18 17:41:50.005366 master-0 kubenswrapper[7553]: I0318 17:41:50.003609 7553 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 18 17:41:50.005366 master-0 kubenswrapper[7553]: I0318 17:41:50.004622 7553 server.go:449] "Adding debug handlers to kubelet server"
Mar 18 17:41:50.006486 master-0 kubenswrapper[7553]: I0318 17:41:50.006464 7553 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 18 17:41:50.006679 master-0 kubenswrapper[7553]: I0318 17:41:50.006651 7553 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 18 17:41:50.006740 master-0 kubenswrapper[7553]: I0318 17:41:50.006663 7553 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 17:31:47 +0000 UTC, rotation deadline is 2026-03-19 14:07:24.923400298 +0000 UTC
Mar 18 17:41:50.006807 master-0 kubenswrapper[7553]: I0318 17:41:50.006740 7553 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h25m34.916662922s for next certificate rotation
Mar 18 17:41:50.006911 master-0 kubenswrapper[7553]: I0318 17:41:50.006895 7553 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 18 17:41:50.006911 master-0 kubenswrapper[7553]: I0318 17:41:50.006909 7553 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 18 17:41:50.007106 master-0 kubenswrapper[7553]: I0318 17:41:50.007081 7553 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 18 17:41:50.011587 master-0 kubenswrapper[7553]: I0318 17:41:50.011561 7553 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 18 17:41:50.011693 master-0 kubenswrapper[7553]: I0318 17:41:50.011600 7553 factory.go:55] Registering systemd
factory Mar 18 17:41:50.011693 master-0 kubenswrapper[7553]: I0318 17:41:50.011630 7553 factory.go:221] Registration of the systemd container factory successfully
Mar 18 17:41:50.012638 master-0 kubenswrapper[7553]: I0318 17:41:50.012475 7553 factory.go:153] Registering CRI-O factory
Mar 18 17:41:50.012638 master-0 kubenswrapper[7553]: I0318 17:41:50.012510 7553 factory.go:221] Registration of the crio container factory successfully
Mar 18 17:41:50.012797 master-0 kubenswrapper[7553]: I0318 17:41:50.012668 7553 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 18 17:41:50.012797 master-0 kubenswrapper[7553]: I0318 17:41:50.012694 7553 factory.go:103] Registering Raw factory
Mar 18 17:41:50.012797 master-0 kubenswrapper[7553]: I0318 17:41:50.012713 7553 manager.go:1196] Started watching for new ooms in manager
Mar 18 17:41:50.013188 master-0 kubenswrapper[7553]: I0318 17:41:50.013164 7553 manager.go:319] Starting recovery of all containers
Mar 18 17:41:50.019586 master-0 kubenswrapper[7553]: I0318 17:41:50.019532 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce5831a6-5a8d-4cda-9299-5d86437bcab2" volumeName="kubernetes.io/configmap/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-trusted-ca" seLinuxMountContext=""
Mar 18 17:41:50.019586 master-0 kubenswrapper[7553]: I0318 17:41:50.019582 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fea7b899-fde4-4463-9520-4d433a8ebe21" volumeName="kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-binary-copy" seLinuxMountContext=""
Mar 18 17:41:50.019586 master-0 kubenswrapper[7553]: I0318 17:41:50.019593 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="26575d68-0488-4dfa-a5d0-5016e481dba6" volumeName="kubernetes.io/secret/26575d68-0488-4dfa-a5d0-5016e481dba6-serving-cert" seLinuxMountContext=""
Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019605 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f26e239-2988-4faa-bc1d-24b15b95b7f1" volumeName="kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-kube-api-access-5sl7p" seLinuxMountContext=""
Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019615 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="994fff04-c1d7-4f10-8d4b-6b49a6934829" volumeName="kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-env-overrides" seLinuxMountContext=""
Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019624 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99e215da-759d-4fff-af65-0fb64245fbd0" volumeName="kubernetes.io/projected/99e215da-759d-4fff-af65-0fb64245fbd0-kube-api-access-n8k5q" seLinuxMountContext=""
Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019634 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a02399de-859b-45b1-9b00-18a08f285f39" volumeName="kubernetes.io/configmap/a02399de-859b-45b1-9b00-18a08f285f39-service-ca" seLinuxMountContext=""
Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019643 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cb522b02-0b93-4711-9041-566daa06b95a" volumeName="kubernetes.io/projected/cb522b02-0b93-4711-9041-566daa06b95a-kube-api-access-fk59q" seLinuxMountContext=""
Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019657 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="26575d68-0488-4dfa-a5d0-5016e481dba6" volumeName="kubernetes.io/configmap/26575d68-0488-4dfa-a5d0-5016e481dba6-config" seLinuxMountContext=""
Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019667 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a3a6c2c-78e7-41f3-acff-20173cbc012a" volumeName="kubernetes.io/secret/3a3a6c2c-78e7-41f3-acff-20173cbc012a-serving-cert" seLinuxMountContext=""
Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019676 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b94e08c-7944-445e-bfb7-6c7c14880c65" volumeName="kubernetes.io/secret/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019702 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e64a377-f497-4416-8f22-d5c7f52e0b65" volumeName="kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-kube-api-access-sclm5" seLinuxMountContext=""
Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019711 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99e215da-759d-4fff-af65-0fb64245fbd0" volumeName="kubernetes.io/secret/99e215da-759d-4fff-af65-0fb64245fbd0-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019721 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a02399de-859b-45b1-9b00-18a08f285f39" volumeName="kubernetes.io/projected/a02399de-859b-45b1-9b00-18a08f285f39-kube-api-access" seLinuxMountContext=""
Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019730 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state"
pod="" podName="5a4f94f3-d63a-4869-b723-ae9637610b4b" volumeName="kubernetes.io/projected/5a4f94f3-d63a-4869-b723-ae9637610b4b-kube-api-access-hgnz6" seLinuxMountContext="" Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019739 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e64a377-f497-4416-8f22-d5c7f52e0b65" volumeName="kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-bound-sa-token" seLinuxMountContext="" Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019748 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c087ce06-a16b-41f4-ba93-8fccdee09003" volumeName="kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-config" seLinuxMountContext="" Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019756 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c355c750-ae2f-49fa-9a16-8fb4f688853e" volumeName="kubernetes.io/secret/c355c750-ae2f-49fa-9a16-8fb4f688853e-serving-cert" seLinuxMountContext="" Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019765 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cb522b02-0b93-4711-9041-566daa06b95a" volumeName="kubernetes.io/empty-dir/cb522b02-0b93-4711-9041-566daa06b95a-available-featuregates" seLinuxMountContext="" Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019773 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fea7b899-fde4-4463-9520-4d433a8ebe21" volumeName="kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-whereabouts-flatfile-configmap" seLinuxMountContext="" Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019782 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311" volumeName="kubernetes.io/configmap/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-telemetry-config" seLinuxMountContext="" Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019792 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9875ed82-813c-483d-8471-8f9b74b774ee" volumeName="kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-env-overrides" seLinuxMountContext="" Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019801 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e" volumeName="kubernetes.io/projected/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-kube-api-access-dlcnh" seLinuxMountContext="" Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019813 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c087ce06-a16b-41f4-ba93-8fccdee09003" volumeName="kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-service-ca-bundle" seLinuxMountContext="" Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019826 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cb522b02-0b93-4711-9041-566daa06b95a" volumeName="kubernetes.io/secret/cb522b02-0b93-4711-9041-566daa06b95a-serving-cert" seLinuxMountContext="" Mar 18 17:41:50.019798 master-0 kubenswrapper[7553]: I0318 17:41:50.019838 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73f2834-c56c-4cef-ac3c-2317e9a4324c" volumeName="kubernetes.io/projected/e73f2834-c56c-4cef-ac3c-2317e9a4324c-kube-api-access-qwps9" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.019852 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3a3a6c2c-78e7-41f3-acff-20173cbc012a" volumeName="kubernetes.io/projected/3a3a6c2c-78e7-41f3-acff-20173cbc012a-kube-api-access" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.019864 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="994fff04-c1d7-4f10-8d4b-6b49a6934829" volumeName="kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-script-lib" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.019874 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99e215da-759d-4fff-af65-0fb64245fbd0" volumeName="kubernetes.io/empty-dir/99e215da-759d-4fff-af65-0fb64245fbd0-operand-assets" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.019888 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1352cc7-4099-44c5-9c31-8259fb783bc7" volumeName="kubernetes.io/projected/b1352cc7-4099-44c5-9c31-8259fb783bc7-kube-api-access-9pp5f" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.019898 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c087ce06-a16b-41f4-ba93-8fccdee09003" volumeName="kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-trusted-ca-bundle" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.019908 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fea7b899-fde4-4463-9520-4d433a8ebe21" volumeName="kubernetes.io/projected/fea7b899-fde4-4463-9520-4d433a8ebe21-kube-api-access-ts9b9" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.019918 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311" volumeName="kubernetes.io/projected/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-kube-api-access-clm4b" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.019941 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9875ed82-813c-483d-8471-8f9b74b774ee" volumeName="kubernetes.io/projected/9875ed82-813c-483d-8471-8f9b74b774ee-kube-api-access-76j8w" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.019951 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9e04572-1425-440e-9869-6deef05e13e3" volumeName="kubernetes.io/projected/e9e04572-1425-440e-9869-6deef05e13e3-kube-api-access-t92bz" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.019962 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0100a259-1358-45e8-8191-4e1f9a14ec89" volumeName="kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-service-ca" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.019971 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0100a259-1358-45e8-8191-4e1f9a14ec89" volumeName="kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-serving-cert" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.019982 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a3a6c2c-78e7-41f3-acff-20173cbc012a" volumeName="kubernetes.io/configmap/3a3a6c2c-78e7-41f3-acff-20173cbc012a-config" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.019993 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="994fff04-c1d7-4f10-8d4b-6b49a6934829" volumeName="kubernetes.io/projected/994fff04-c1d7-4f10-8d4b-6b49a6934829-kube-api-access-9lwsm" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020003 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0100a259-1358-45e8-8191-4e1f9a14ec89" volumeName="kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-ca" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020012 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14a0661b-7bde-4e22-a9a9-5e3fb24df77f" volumeName="kubernetes.io/secret/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-metrics-tls" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020021 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b0e38f3-3ab5-4519-86a6-68003deb94da" volumeName="kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-daemon-config" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020031 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c087ce06-a16b-41f4-ba93-8fccdee09003" volumeName="kubernetes.io/secret/c087ce06-a16b-41f4-ba93-8fccdee09003-serving-cert" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020040 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dba5f8d7-4d25-42b5-9c58-813221bf96bb" volumeName="kubernetes.io/projected/dba5f8d7-4d25-42b5-9c58-813221bf96bb-kube-api-access-lmsm4" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020049 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37b3753f-bf4f-4a9e-a4a8-d58296bada79" volumeName="kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-images" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020058 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f26e239-2988-4faa-bc1d-24b15b95b7f1" volumeName="kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-bound-sa-token" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020068 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b424d6c-7440-4c98-ac19-2d0642c696fd" volumeName="kubernetes.io/secret/9b424d6c-7440-4c98-ac19-2d0642c696fd-serving-cert" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020077 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c355c750-ae2f-49fa-9a16-8fb4f688853e" volumeName="kubernetes.io/configmap/c355c750-ae2f-49fa-9a16-8fb4f688853e-config" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020087 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7ff61c7-32d1-4407-a792-8e22bb4d50f9" volumeName="kubernetes.io/projected/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-kube-api-access-nf82n" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020097 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37b3753f-bf4f-4a9e-a4a8-d58296bada79" volumeName="kubernetes.io/projected/37b3753f-bf4f-4a9e-a4a8-d58296bada79-kube-api-access-zwlxb" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020107 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7c6694a8-ccd0-491b-9f21-215450f6ce67" volumeName="kubernetes.io/projected/7c6694a8-ccd0-491b-9f21-215450f6ce67-kube-api-access-mrdqg" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020117 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b424d6c-7440-4c98-ac19-2d0642c696fd" volumeName="kubernetes.io/projected/9b424d6c-7440-4c98-ac19-2d0642c696fd-kube-api-access" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020130 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7ff61c7-32d1-4407-a792-8e22bb4d50f9" volumeName="kubernetes.io/secret/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-serving-cert" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020142 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a240ab7-a1d5-4e9a-96f3-4590681cc7ed" volumeName="kubernetes.io/secret/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-serving-cert" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020151 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce5831a6-5a8d-4cda-9299-5d86437bcab2" volumeName="kubernetes.io/projected/ce5831a6-5a8d-4cda-9299-5d86437bcab2-kube-api-access-756j8" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020160 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0100a259-1358-45e8-8191-4e1f9a14ec89" volumeName="kubernetes.io/projected/0100a259-1358-45e8-8191-4e1f9a14ec89-kube-api-access-tnknt" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020169 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b9ff55a-73fb-473f-b406-1f8b6cffdb89" volumeName="kubernetes.io/configmap/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-config" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020178 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f26e239-2988-4faa-bc1d-24b15b95b7f1" volumeName="kubernetes.io/configmap/6f26e239-2988-4faa-bc1d-24b15b95b7f1-trusted-ca" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020187 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c6694a8-ccd0-491b-9f21-215450f6ce67" volumeName="kubernetes.io/configmap/7c6694a8-ccd0-491b-9f21-215450f6ce67-trusted-ca" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020196 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="994fff04-c1d7-4f10-8d4b-6b49a6934829" volumeName="kubernetes.io/secret/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovn-node-metrics-cert" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020205 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a240ab7-a1d5-4e9a-96f3-4590681cc7ed" volumeName="kubernetes.io/projected/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-kube-api-access-l5tw2" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020213 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26d4515-391e-41a5-8c82-1b2b8a375662" volumeName="kubernetes.io/projected/d26d4515-391e-41a5-8c82-1b2b8a375662-kube-api-access-bm8jj" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020222 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0100a259-1358-45e8-8191-4e1f9a14ec89" volumeName="kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-config" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020230 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b9ff55a-73fb-473f-b406-1f8b6cffdb89" volumeName="kubernetes.io/projected/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-kube-api-access-2tvgq" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020239 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14a0661b-7bde-4e22-a9a9-5e3fb24df77f" volumeName="kubernetes.io/projected/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-kube-api-access-2pqww" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020247 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9875ed82-813c-483d-8471-8f9b74b774ee" volumeName="kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-ovnkube-identity-cm" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020257 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9875ed82-813c-483d-8471-8f9b74b774ee" volumeName="kubernetes.io/secret/9875ed82-813c-483d-8471-8f9b74b774ee-webhook-cert" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020266 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7ff61c7-32d1-4407-a792-8e22bb4d50f9" volumeName="kubernetes.io/configmap/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-config" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020295 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b9ff55a-73fb-473f-b406-1f8b6cffdb89" volumeName="kubernetes.io/secret/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-serving-cert" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020304 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d969530-c138-4fb7-9bfe-0825be66c009" volumeName="kubernetes.io/configmap/1d969530-c138-4fb7-9bfe-0825be66c009-iptables-alerter-script" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020313 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b94e08c-7944-445e-bfb7-6c7c14880c65" volumeName="kubernetes.io/projected/7b94e08c-7944-445e-bfb7-6c7c14880c65-kube-api-access-g4zcv" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020323 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="994fff04-c1d7-4f10-8d4b-6b49a6934829" volumeName="kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-config" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020332 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a240ab7-a1d5-4e9a-96f3-4590681cc7ed" volumeName="kubernetes.io/configmap/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-config" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020341 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b424d6c-7440-4c98-ac19-2d0642c696fd" volumeName="kubernetes.io/configmap/9b424d6c-7440-4c98-ac19-2d0642c696fd-config" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020351 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1d969530-c138-4fb7-9bfe-0825be66c009" volumeName="kubernetes.io/projected/1d969530-c138-4fb7-9bfe-0825be66c009-kube-api-access-cd868" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020360 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="26575d68-0488-4dfa-a5d0-5016e481dba6" volumeName="kubernetes.io/projected/26575d68-0488-4dfa-a5d0-5016e481dba6-kube-api-access" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020369 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37b3753f-bf4f-4a9e-a4a8-d58296bada79" volumeName="kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-config" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020378 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b0e38f3-3ab5-4519-86a6-68003deb94da" volumeName="kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-cni-binary-copy" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020387 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b0e38f3-3ab5-4519-86a6-68003deb94da" volumeName="kubernetes.io/projected/5b0e38f3-3ab5-4519-86a6-68003deb94da-kube-api-access-grnqn" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020396 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b94e08c-7944-445e-bfb7-6c7c14880c65" volumeName="kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovnkube-config" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020406 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c355c750-ae2f-49fa-9a16-8fb4f688853e" volumeName="kubernetes.io/projected/c355c750-ae2f-49fa-9a16-8fb4f688853e-kube-api-access-zfnqp" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020415 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fea7b899-fde4-4463-9520-4d433a8ebe21" volumeName="kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-sysctl-allowlist" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020425 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0100a259-1358-45e8-8191-4e1f9a14ec89" volumeName="kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-client" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020434 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b94e08c-7944-445e-bfb7-6c7c14880c65" volumeName="kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-env-overrides" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020444 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e64a377-f497-4416-8f22-d5c7f52e0b65" volumeName="kubernetes.io/configmap/7e64a377-f497-4416-8f22-d5c7f52e0b65-trusted-ca" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020455 7553 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c087ce06-a16b-41f4-ba93-8fccdee09003" volumeName="kubernetes.io/projected/c087ce06-a16b-41f4-ba93-8fccdee09003-kube-api-access-789k6" seLinuxMountContext="" Mar 18 17:41:50.020915 master-0 kubenswrapper[7553]: I0318 17:41:50.020465 7553 reconstruct.go:97] "Volume reconstruction finished" Mar 18 17:41:50.020915 master-0 
kubenswrapper[7553]: I0318 17:41:50.020474 7553 reconciler.go:26] "Reconciler: start to sync state" Mar 18 17:41:50.023574 master-0 kubenswrapper[7553]: I0318 17:41:50.023047 7553 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 18 17:41:50.047971 master-0 kubenswrapper[7553]: I0318 17:41:50.047869 7553 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 18 17:41:50.051894 master-0 kubenswrapper[7553]: I0318 17:41:50.051823 7553 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 18 17:41:50.051894 master-0 kubenswrapper[7553]: I0318 17:41:50.051896 7553 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 18 17:41:50.052075 master-0 kubenswrapper[7553]: I0318 17:41:50.051931 7553 kubelet.go:2335] "Starting kubelet main sync loop" Mar 18 17:41:50.052075 master-0 kubenswrapper[7553]: E0318 17:41:50.051992 7553 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 18 17:41:50.054235 master-0 kubenswrapper[7553]: I0318 17:41:50.054197 7553 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 17:41:50.063176 master-0 kubenswrapper[7553]: I0318 17:41:50.063074 7553 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="e43f9ea395a7c58acd7f5ae682a5f3d1676e30932b7eae1967401d8e7c98e640" exitCode=0 Mar 18 17:41:50.063176 master-0 kubenswrapper[7553]: I0318 17:41:50.063119 7553 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="49a577ee2ac2a159de0067da85450704e2357b11d86f52af06168530d5d8c67c" exitCode=0 Mar 18 17:41:50.063176 master-0 kubenswrapper[7553]: I0318 17:41:50.063128 7553 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" 
containerID="f487efac96ddc2a1600d3e4cc87d8a45b4d735699e028d3a82f0ba6a3bf9f4b3" exitCode=0 Mar 18 17:41:50.063176 master-0 kubenswrapper[7553]: I0318 17:41:50.063136 7553 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="3d5985c493f4dbc8ecc65a775668e215bdb1fee71a640074b8e4b3117da777c6" exitCode=0 Mar 18 17:41:50.063176 master-0 kubenswrapper[7553]: I0318 17:41:50.063146 7553 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="de9eecaae100670e0a012da69d0c99fbaef83817e585514383e37a63852714c7" exitCode=0 Mar 18 17:41:50.063176 master-0 kubenswrapper[7553]: I0318 17:41:50.063154 7553 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="88001466f79b98c5070d70264ed313350538e29ea013a0dee819ce0396f0e3a4" exitCode=0 Mar 18 17:41:50.077994 master-0 kubenswrapper[7553]: I0318 17:41:50.077930 7553 generic.go:334] "Generic (PLEG): container finished" podID="994fff04-c1d7-4f10-8d4b-6b49a6934829" containerID="1a93390a62f28ef65e80a805fc6b9268f2506ce23dcb2e7e0c063ca4b86c7617" exitCode=0 Mar 18 17:41:50.083717 master-0 kubenswrapper[7553]: I0318 17:41:50.083674 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 18 17:41:50.084168 master-0 kubenswrapper[7553]: I0318 17:41:50.084143 7553 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="7b9bddc18b37dc8b661d9d8aa6fa9e351c2bd5fe2e18de32692f6b5ce1bb25c8" exitCode=1 Mar 18 17:41:50.084168 master-0 kubenswrapper[7553]: I0318 17:41:50.084166 7553 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="3803a5540326c74452858cec12bf5343a5ecb670acc1d4e7c87a18dad91b712b" exitCode=0 Mar 18 17:41:50.095923 master-0 
kubenswrapper[7553]: I0318 17:41:50.095878 7553 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="9229a0847dcc4bfd99187b8d4d1c4189d57cc38cb01e1689224e1d421ed9426b" exitCode=0 Mar 18 17:41:50.115204 master-0 kubenswrapper[7553]: I0318 17:41:50.115141 7553 generic.go:334] "Generic (PLEG): container finished" podID="be6633f4-7370-49b8-a607-6a3fa52a098e" containerID="5a8c8b2dda583c7f8335b717181054066b935f797ea92e14efe72d4f776836d4" exitCode=0 Mar 18 17:41:50.120333 master-0 kubenswrapper[7553]: I0318 17:41:50.120292 7553 generic.go:334] "Generic (PLEG): container finished" podID="f2eeb961-15e7-4c19-8f37-659cc2cb6539" containerID="c94a2985fe4117cc55a54b6163c21e92395f0ed45215b4c6fffd52daf31ec16f" exitCode=0 Mar 18 17:41:50.152484 master-0 kubenswrapper[7553]: E0318 17:41:50.152339 7553 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 18 17:41:50.177527 master-0 kubenswrapper[7553]: I0318 17:41:50.177474 7553 manager.go:324] Recovery completed Mar 18 17:41:50.231848 master-0 kubenswrapper[7553]: I0318 17:41:50.231804 7553 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 18 17:41:50.231848 master-0 kubenswrapper[7553]: I0318 17:41:50.231832 7553 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 18 17:41:50.231848 master-0 kubenswrapper[7553]: I0318 17:41:50.231877 7553 state_mem.go:36] "Initialized new in-memory state store" Mar 18 17:41:50.232234 master-0 kubenswrapper[7553]: I0318 17:41:50.232126 7553 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 18 17:41:50.232234 master-0 kubenswrapper[7553]: I0318 17:41:50.232140 7553 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 18 17:41:50.232234 master-0 kubenswrapper[7553]: I0318 17:41:50.232167 7553 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 18 17:41:50.232234 master-0 kubenswrapper[7553]: I0318 17:41:50.232174 7553 
state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 18 17:41:50.232234 master-0 kubenswrapper[7553]: I0318 17:41:50.232182 7553 policy_none.go:49] "None policy: Start"
Mar 18 17:41:50.233858 master-0 kubenswrapper[7553]: I0318 17:41:50.233831 7553 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 18 17:41:50.233858 master-0 kubenswrapper[7553]: I0318 17:41:50.233860 7553 state_mem.go:35] "Initializing new in-memory state store"
Mar 18 17:41:50.234075 master-0 kubenswrapper[7553]: I0318 17:41:50.234053 7553 state_mem.go:75] "Updated machine memory state"
Mar 18 17:41:50.234075 master-0 kubenswrapper[7553]: I0318 17:41:50.234068 7553 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 18 17:41:50.243197 master-0 kubenswrapper[7553]: I0318 17:41:50.243176 7553 manager.go:334] "Starting Device Plugin manager"
Mar 18 17:41:50.243265 master-0 kubenswrapper[7553]: I0318 17:41:50.243216 7553 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 18 17:41:50.243265 master-0 kubenswrapper[7553]: I0318 17:41:50.243232 7553 server.go:79] "Starting device plugin registration server"
Mar 18 17:41:50.243815 master-0 kubenswrapper[7553]: I0318 17:41:50.243798 7553 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 18 17:41:50.243859 master-0 kubenswrapper[7553]: I0318 17:41:50.243816 7553 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 18 17:41:50.244202 master-0 kubenswrapper[7553]: I0318 17:41:50.244176 7553 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 18 17:41:50.244261 master-0 kubenswrapper[7553]: I0318 17:41:50.244254 7553 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 18 17:41:50.244261 master-0 kubenswrapper[7553]: I0318 17:41:50.244260 7553 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 18 17:41:50.344346 master-0 kubenswrapper[7553]: I0318 17:41:50.344248 7553 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 17:41:50.347250 master-0 kubenswrapper[7553]: I0318 17:41:50.347225 7553 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 17:41:50.347354 master-0 kubenswrapper[7553]: I0318 17:41:50.347292 7553 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 17:41:50.347354 master-0 kubenswrapper[7553]: I0318 17:41:50.347304 7553 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 17:41:50.347420 master-0 kubenswrapper[7553]: I0318 17:41:50.347400 7553 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 17:41:50.352604 master-0 kubenswrapper[7553]: I0318 17:41:50.352442 7553 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Mar 18 17:41:50.353510 master-0 kubenswrapper[7553]: I0318 17:41:50.353406 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"6a3212eaacddf8a633d9171d89d86f056fc2eaf17af107aa2bced9e6262d3611"}
Mar 18 17:41:50.353582 master-0 kubenswrapper[7553]: I0318 17:41:50.353511 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e"}
Mar 18 17:41:50.353582 master-0 kubenswrapper[7553]: I0318 17:41:50.353528 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"4c05ce2500dc59522a6a15e9d7a181f449cb0590dccc6ae6225c9f2d2a528378"}
Mar 18 17:41:50.353885 master-0 kubenswrapper[7553]: I0318 17:41:50.353781 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"43d0194c7af8a79987b694f6624dcbd9737a923184624c98fa52f07e27abb8b3"}
Mar 18 17:41:50.353932 master-0 kubenswrapper[7553]: I0318 17:41:50.353893 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"7b9bddc18b37dc8b661d9d8aa6fa9e351c2bd5fe2e18de32692f6b5ce1bb25c8"}
Mar 18 17:41:50.353932 master-0 kubenswrapper[7553]: I0318 17:41:50.353912 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"3803a5540326c74452858cec12bf5343a5ecb670acc1d4e7c87a18dad91b712b"}
Mar 18 17:41:50.353932 master-0 kubenswrapper[7553]: I0318 17:41:50.353923 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"e03411b7c2bf8367c968ed1adc09d9fecafd420d1122805ea765d466352e23a3"}
Mar 18 17:41:50.353932 master-0 kubenswrapper[7553]: I0318 17:41:50.353934 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"1d30b6f37f4ad53c3294bea48dd4a0769d42ea2d80a5395f6ef8c16034150f6c"}
Mar 18 17:41:50.354060 master-0 kubenswrapper[7553]: I0318 17:41:50.353945 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"99dc9cff4665f248f4ae68c96db3198a4bcd4d7b5dbfb367bdf3864e44ad29fc"}
Mar 18 17:41:50.354060 master-0 kubenswrapper[7553]: I0318 17:41:50.353956 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"6a85b3ee12aea7b46bda118fb48d0b8760d887f0c07b29fb0b4386fa0f1ccc35"}
Mar 18 17:41:50.354060 master-0 kubenswrapper[7553]: I0318 17:41:50.353972 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"b07f4eb106a117d2a3aedb26bb538e640c6545e341eb4a44bae581e10c947c17"}
Mar 18 17:41:50.354060 master-0 kubenswrapper[7553]: I0318 17:41:50.353983 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"3c6f642b736991fd20242697f9273f8f6a126bc6027f7c5ddd27e70569fd9054"}
Mar 18 17:41:50.354060 master-0 kubenswrapper[7553]: I0318 17:41:50.353991 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"9229a0847dcc4bfd99187b8d4d1c4189d57cc38cb01e1689224e1d421ed9426b"}
Mar 18 17:41:50.354060 master-0 kubenswrapper[7553]: I0318 17:41:50.354002 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"6a2b235f936941b3155c2b3d3485ca14ee2d3465fd9996759d1380c11160d84a"}
Mar 18 17:41:50.354060 master-0 kubenswrapper[7553]: I0318 17:41:50.354014 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"774c63dac090e52a2318d2a44e73b16fc328b4dc2d265dcfd10522ed7532c288"}
Mar 18 17:41:50.354060 master-0 kubenswrapper[7553]: I0318 17:41:50.354024 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"95d9ed6b43dde926cc6bff2e2109470565941e7ec534301d19b34350a3fd9914"}
Mar 18 17:41:50.354060 master-0 kubenswrapper[7553]: I0318 17:41:50.354045 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de191ef380880e41074c916544a090af370497a2183310a181d94c72cfa6a53a"
Mar 18 17:41:50.354060 master-0 kubenswrapper[7553]: I0318 17:41:50.354071 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d9e36c9c12a1291e1dc0d36bf35c4d9718af9aa6ca59ee2ad69bf2e6669af26"
Mar 18 17:41:50.354627 master-0 kubenswrapper[7553]: I0318 17:41:50.354085 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44b61e136de21d6c51f86eb4424513da867694db0dfb6fc4c6a30b8dc6efbae6"
Mar 18 17:41:50.367261 master-0 kubenswrapper[7553]: I0318 17:41:50.367227 7553 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 18 17:41:50.367493 master-0 kubenswrapper[7553]: I0318 17:41:50.367386 7553 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 18 17:41:50.377961 master-0 kubenswrapper[7553]: E0318 17:41:50.377893 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 17:41:50.383146 master-0 kubenswrapper[7553]: E0318 17:41:50.383107 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 17:41:50.383146 master-0 kubenswrapper[7553]: W0318 17:41:50.383122 7553 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 18 17:41:50.383441 master-0 kubenswrapper[7553]: E0318 17:41:50.383185 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 17:41:50.383441 master-0 kubenswrapper[7553]: E0318 17:41:50.383142 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.383441 master-0 kubenswrapper[7553]: E0318 17:41:50.383383 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.425304 master-0 kubenswrapper[7553]: I0318 17:41:50.425239 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 17:41:50.425304 master-0 kubenswrapper[7553]: I0318 17:41:50.425309 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.425670 master-0 kubenswrapper[7553]: I0318 17:41:50.425338 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.425670 master-0 kubenswrapper[7553]: I0318 17:41:50.425428 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.425670 master-0 kubenswrapper[7553]: I0318 17:41:50.425501 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.425670 master-0 kubenswrapper[7553]: I0318 17:41:50.425526 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.425670 master-0 kubenswrapper[7553]: I0318 17:41:50.425549 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 17:41:50.425670 master-0 kubenswrapper[7553]: I0318 17:41:50.425571 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 17:41:50.425670 master-0 kubenswrapper[7553]: I0318 17:41:50.425589 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 17:41:50.425670 master-0 kubenswrapper[7553]: I0318 17:41:50.425640 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.425936 master-0 kubenswrapper[7553]: I0318 17:41:50.425683 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.425936 master-0 kubenswrapper[7553]: I0318 17:41:50.425707 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.425936 master-0 kubenswrapper[7553]: I0318 17:41:50.425731 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.425936 master-0 kubenswrapper[7553]: I0318 17:41:50.425752 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.425936 master-0 kubenswrapper[7553]: I0318 17:41:50.425773 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 17:41:50.425936 master-0 kubenswrapper[7553]: I0318 17:41:50.425798 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.425936 master-0 kubenswrapper[7553]: I0318 17:41:50.425823 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 17:41:50.527120 master-0 kubenswrapper[7553]: I0318 17:41:50.527051 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 17:41:50.527120 master-0 kubenswrapper[7553]: I0318 17:41:50.527104 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527173 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527248 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527292 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527296 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527260 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527326 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527350 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527359 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527372 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527392 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527352 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527404 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527425 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527373 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527425 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527445 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527470 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527490 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527471 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.527482 master-0 kubenswrapper[7553]: I0318 17:41:50.527495 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 17:41:50.528054 master-0 kubenswrapper[7553]: I0318 17:41:50.527617 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.528054 master-0 kubenswrapper[7553]: I0318 17:41:50.527650 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 17:41:50.528054 master-0 kubenswrapper[7553]: I0318 17:41:50.527832 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.528054 master-0 kubenswrapper[7553]: I0318 17:41:50.527680 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 17:41:50.528054 master-0 kubenswrapper[7553]: I0318 17:41:50.527894 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.528054 master-0 kubenswrapper[7553]: I0318 17:41:50.527920 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.528054 master-0 kubenswrapper[7553]: I0318 17:41:50.527966 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.528054 master-0 kubenswrapper[7553]: I0318 17:41:50.527975 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.528054 master-0 kubenswrapper[7553]: I0318 17:41:50.527976 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 17:41:50.528054 master-0 kubenswrapper[7553]: I0318 17:41:50.527993 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.528054 master-0 kubenswrapper[7553]: I0318 17:41:50.528048 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 17:41:50.529363 master-0 kubenswrapper[7553]: I0318 17:41:50.527676 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 17:41:50.990692 master-0 kubenswrapper[7553]: I0318 17:41:50.990079 7553 apiserver.go:52] "Watching apiserver"
Mar 18 17:41:51.007247 master-0 kubenswrapper[7553]: I0318 17:41:51.007189 7553 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 18 17:41:51.008813 master-0 kubenswrapper[7553]: I0318 17:41:51.008749 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8","openshift-network-node-identity/network-node-identity-7s68k","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg","openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x","openshift-etcd/etcd-master-0-master-0","openshift-marketplace/marketplace-operator-89ccd998f-l5gm7","openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg","openshift-network-operator/network-operator-7bd846bfc4-dxxbl","openshift-ovn-kubernetes/ovnkube-node-5l4qp","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4","openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh","openshift-network-diagnostics/network-check-target-ctd49","openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-multus/multus-additional-cni-plugins-ttbr5","openshift-multus/network-metrics-daemon-mfn52","openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r","openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt","openshift-multus/multus-64tx9","openshift-network-operator/iptables-alerter-f7jp5","openshift-dns-operator/dns-operator-9c5679d8f-7sc7v","openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc","openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr","assisted-installer/assisted-installer-controller-trlzv","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j"]
Mar 18 17:41:51.009139 master-0 kubenswrapper[7553]: I0318 17:41:51.009092 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-trlzv"
Mar 18 17:41:51.009342 master-0 kubenswrapper[7553]: I0318 17:41:51.009301 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v"
Mar 18 17:41:51.011095 master-0 kubenswrapper[7553]: I0318 17:41:51.010416 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"
Mar 18 17:41:51.011095 master-0 kubenswrapper[7553]: I0318 17:41:51.010995 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 18 17:41:51.011503 master-0 kubenswrapper[7553]: I0318 17:41:51.011308 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 18 17:41:51.011620 master-0 kubenswrapper[7553]: I0318 17:41:51.011557 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 18 17:41:51.011969 master-0 kubenswrapper[7553]: I0318 17:41:51.011932 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr"
Mar 18 17:41:51.012082 master-0 kubenswrapper[7553]: I0318 17:41:51.012031 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8"
Mar 18 17:41:51.013460 master-0 kubenswrapper[7553]: I0318 17:41:51.013166 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4"
Mar 18 17:41:51.014097 master-0 kubenswrapper[7553]: I0318 17:41:51.013539 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"
Mar 18 17:41:51.014097 master-0 kubenswrapper[7553]: I0318 17:41:51.013596 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"
Mar 18 17:41:51.014097 master-0 kubenswrapper[7553]: I0318 17:41:51.013627 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg"
Mar 18 17:41:51.017131 master-0 kubenswrapper[7553]: I0318 17:41:51.017089 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 18 17:41:51.017316 master-0 kubenswrapper[7553]: I0318 17:41:51.013670 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49"
Mar 18 17:41:51.017487 master-0 kubenswrapper[7553]: I0318 17:41:51.013736 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52"
Mar 18 17:41:51.017599 master-0 kubenswrapper[7553]: I0318 17:41:51.014015 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7"
Mar 18 17:41:51.017768 master-0 kubenswrapper[7553]: I0318 17:41:51.014029 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz"
Mar 18 17:41:51.017933 master-0 kubenswrapper[7553]: I0318 17:41:51.014044 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 17:41:51.019170 master-0 kubenswrapper[7553]: I0318 17:41:51.019117 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 18 17:41:51.019776 master-0 kubenswrapper[7553]: I0318 17:41:51.019734 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 18 17:41:51.020145 master-0 kubenswrapper[7553]: I0318 17:41:51.020012 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 18 17:41:51.020336 master-0 kubenswrapper[7553]: I0318 17:41:51.020307 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 18 17:41:51.020422 master-0 kubenswrapper[7553]: I0318 17:41:51.013647 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:51.020463 master-0 kubenswrapper[7553]: I0318 17:41:51.020420 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 17:41:51.020868 master-0 kubenswrapper[7553]: I0318 17:41:51.020844 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 17:41:51.020934 master-0 kubenswrapper[7553]: I0318 17:41:51.020881 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 17:41:51.021151 master-0 kubenswrapper[7553]: I0318 17:41:51.021108 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 17:41:51.021151 master-0 kubenswrapper[7553]: I0318 17:41:51.021122 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 17:41:51.021151 master-0 kubenswrapper[7553]: I0318 17:41:51.021144 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 17:41:51.021260 master-0 kubenswrapper[7553]: I0318 17:41:51.021158 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 17:41:51.021260 master-0 kubenswrapper[7553]: I0318 17:41:51.021201 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 17:41:51.021380 master-0 kubenswrapper[7553]: I0318 17:41:51.021361 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 17:41:51.021534 master-0 kubenswrapper[7553]: I0318 
17:41:51.021492 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 17:41:51.021534 master-0 kubenswrapper[7553]: I0318 17:41:51.021531 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 17:41:51.021707 master-0 kubenswrapper[7553]: I0318 17:41:51.021357 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 17:41:51.021752 master-0 kubenswrapper[7553]: I0318 17:41:51.021713 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 17:41:51.021802 master-0 kubenswrapper[7553]: I0318 17:41:51.021783 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 17:41:51.021839 master-0 kubenswrapper[7553]: I0318 17:41:51.021364 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 17:41:51.021884 master-0 kubenswrapper[7553]: I0318 17:41:51.020888 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 17:41:51.021884 master-0 kubenswrapper[7553]: I0318 17:41:51.021718 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 17:41:51.021943 master-0 kubenswrapper[7553]: I0318 17:41:51.021888 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 17:41:51.021943 master-0 kubenswrapper[7553]: I0318 17:41:51.021910 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 17:41:51.022007 master-0 
kubenswrapper[7553]: I0318 17:41:51.021673 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 17:41:51.022007 master-0 kubenswrapper[7553]: I0318 17:41:51.021690 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 17:41:51.022076 master-0 kubenswrapper[7553]: I0318 17:41:51.022010 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 17:41:51.022228 master-0 kubenswrapper[7553]: I0318 17:41:51.022207 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 17:41:51.023961 master-0 kubenswrapper[7553]: I0318 17:41:51.023832 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 17:41:51.026120 master-0 kubenswrapper[7553]: I0318 17:41:51.026087 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 17:41:51.031952 master-0 kubenswrapper[7553]: I0318 17:41:51.031920 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 17:41:51.032188 master-0 kubenswrapper[7553]: I0318 17:41:51.032145 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 17:41:51.032323 master-0 kubenswrapper[7553]: I0318 17:41:51.032296 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 17:41:51.032411 master-0 kubenswrapper[7553]: I0318 17:41:51.032374 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 
17:41:51.032482 master-0 kubenswrapper[7553]: I0318 17:41:51.032452 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 17:41:51.032528 master-0 kubenswrapper[7553]: I0318 17:41:51.032478 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 17:41:51.032653 master-0 kubenswrapper[7553]: I0318 17:41:51.032621 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 17:41:51.032653 master-0 kubenswrapper[7553]: I0318 17:41:51.032626 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 17:41:51.032977 master-0 kubenswrapper[7553]: I0318 17:41:51.032909 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 17:41:51.033028 master-0 kubenswrapper[7553]: I0318 17:41:51.033013 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 17:41:51.033126 master-0 kubenswrapper[7553]: I0318 17:41:51.033100 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 18 17:41:51.033305 master-0 kubenswrapper[7553]: I0318 17:41:51.033285 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 17:41:51.033422 master-0 kubenswrapper[7553]: I0318 17:41:51.033394 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 17:41:51.033473 master-0 kubenswrapper[7553]: I0318 17:41:51.033448 7553 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 17:41:51.033653 master-0 kubenswrapper[7553]: I0318 17:41:51.033609 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 17:41:51.033934 master-0 kubenswrapper[7553]: I0318 17:41:51.033905 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 17:41:51.034102 master-0 kubenswrapper[7553]: I0318 17:41:51.034080 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 17:41:51.034160 master-0 kubenswrapper[7553]: I0318 17:41:51.034143 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 17:41:51.034329 master-0 kubenswrapper[7553]: I0318 17:41:51.034307 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 17:41:51.034528 master-0 kubenswrapper[7553]: I0318 17:41:51.034502 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 17:41:51.034823 master-0 kubenswrapper[7553]: I0318 17:41:51.034716 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 17:41:51.034974 master-0 kubenswrapper[7553]: I0318 17:41:51.034935 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 18 17:41:51.037782 master-0 kubenswrapper[7553]: I0318 17:41:51.037739 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 17:41:51.037962 master-0 kubenswrapper[7553]: I0318 17:41:51.037935 7553 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 18 17:41:51.038853 master-0 kubenswrapper[7553]: I0318 17:41:51.038820 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 17:41:51.038942 master-0 kubenswrapper[7553]: I0318 17:41:51.038916 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 17:41:51.039336 master-0 kubenswrapper[7553]: I0318 17:41:51.039297 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 17:41:51.039336 master-0 kubenswrapper[7553]: I0318 17:41:51.039298 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 17:41:51.039417 master-0 kubenswrapper[7553]: I0318 17:41:51.039338 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 17:41:51.039590 master-0 kubenswrapper[7553]: I0318 17:41:51.039567 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 18 17:41:51.039776 master-0 kubenswrapper[7553]: I0318 17:41:51.039756 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 17:41:51.039889 master-0 kubenswrapper[7553]: I0318 17:41:51.039814 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 17:41:51.039987 master-0 kubenswrapper[7553]: I0318 17:41:51.039919 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 17:41:51.040035 master-0 kubenswrapper[7553]: I0318 17:41:51.039866 7553 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 17:41:51.042031 master-0 kubenswrapper[7553]: I0318 17:41:51.041989 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 17:41:51.042194 master-0 kubenswrapper[7553]: I0318 17:41:51.042163 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 17:41:51.042238 master-0 kubenswrapper[7553]: I0318 17:41:51.042203 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 17:41:51.042238 master-0 kubenswrapper[7553]: I0318 17:41:51.042223 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 17:41:51.042312 master-0 kubenswrapper[7553]: I0318 17:41:51.042246 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 17:41:51.042374 master-0 kubenswrapper[7553]: I0318 17:41:51.042335 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 17:41:51.042419 master-0 kubenswrapper[7553]: I0318 17:41:51.042368 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 17:41:51.042600 master-0 kubenswrapper[7553]: I0318 17:41:51.042566 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 17:41:51.042698 master-0 kubenswrapper[7553]: I0318 17:41:51.042665 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 17:41:51.042698 master-0 kubenswrapper[7553]: I0318 17:41:51.042587 7553 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 17:41:51.042766 master-0 kubenswrapper[7553]: I0318 17:41:51.042735 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 17:41:51.054086 master-0 kubenswrapper[7553]: I0318 17:41:51.054060 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 18 17:41:51.061302 master-0 kubenswrapper[7553]: I0318 17:41:51.054890 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 17:41:51.062177 master-0 kubenswrapper[7553]: I0318 17:41:51.062138 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 17:41:51.062525 master-0 kubenswrapper[7553]: I0318 17:41:51.062485 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 17:41:51.063977 master-0 kubenswrapper[7553]: I0318 17:41:51.063950 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 17:41:51.064681 master-0 kubenswrapper[7553]: I0318 17:41:51.064651 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 17:41:51.065056 master-0 kubenswrapper[7553]: I0318 17:41:51.055344 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 17:41:51.065100 master-0 kubenswrapper[7553]: I0318 17:41:51.055409 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 17:41:51.065137 master-0 kubenswrapper[7553]: I0318 17:41:51.055639 7553 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 17:41:51.065268 master-0 kubenswrapper[7553]: I0318 17:41:51.055833 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 17:41:51.065450 master-0 kubenswrapper[7553]: I0318 17:41:51.056469 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 17:41:51.065531 master-0 kubenswrapper[7553]: I0318 17:41:51.056609 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 17:41:51.065896 master-0 kubenswrapper[7553]: I0318 17:41:51.065877 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 17:41:51.065971 master-0 kubenswrapper[7553]: I0318 17:41:51.065930 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 17:41:51.066025 master-0 kubenswrapper[7553]: I0318 17:41:51.065981 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 17:41:51.066064 master-0 kubenswrapper[7553]: I0318 17:41:51.065878 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 17:41:51.066103 master-0 kubenswrapper[7553]: I0318 17:41:51.066064 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 17:41:51.066131 master-0 kubenswrapper[7553]: I0318 17:41:51.066087 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 17:41:51.066305 master-0 kubenswrapper[7553]: I0318 17:41:51.066116 7553 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 17:41:51.066388 master-0 kubenswrapper[7553]: I0318 17:41:51.066354 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 17:41:51.069154 master-0 kubenswrapper[7553]: I0318 17:41:51.067480 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 17:41:51.072988 master-0 kubenswrapper[7553]: I0318 17:41:51.072948 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 17:41:51.076266 master-0 kubenswrapper[7553]: I0318 17:41:51.076203 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 17:41:51.079562 master-0 kubenswrapper[7553]: I0318 17:41:51.079517 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 17:41:51.092564 master-0 kubenswrapper[7553]: I0318 17:41:51.092519 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 17:41:51.113135 master-0 kubenswrapper[7553]: I0318 17:41:51.113080 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 17:41:51.113221 master-0 kubenswrapper[7553]: I0318 17:41:51.113151 7553 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 18 17:41:51.132588 master-0 kubenswrapper[7553]: I0318 17:41:51.132532 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-config\") pod 
\"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:51.132588 master-0 kubenswrapper[7553]: I0318 17:41:51.132591 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-netns\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.132867 master-0 kubenswrapper[7553]: I0318 17:41:51.132612 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:51.132867 master-0 kubenswrapper[7553]: I0318 17:41:51.132634 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02399de-859b-45b1-9b00-18a08f285f39-kube-api-access\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:41:51.132947 master-0 kubenswrapper[7553]: I0318 17:41:51.132892 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:51.133159 master-0 kubenswrapper[7553]: I0318 17:41:51.132966 7553 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-756j8\" (UniqueName: \"kubernetes.io/projected/ce5831a6-5a8d-4cda-9299-5d86437bcab2-kube-api-access-756j8\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:51.133159 master-0 kubenswrapper[7553]: I0318 17:41:51.133139 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlcnh\" (UniqueName: \"kubernetes.io/projected/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-kube-api-access-dlcnh\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:51.133245 master-0 kubenswrapper[7553]: I0318 17:41:51.133197 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 17:41:51.133302 master-0 kubenswrapper[7553]: I0318 17:41:51.133231 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-etc-kubernetes\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.133441 master-0 kubenswrapper[7553]: I0318 17:41:51.133364 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-log-socket\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.133610 master-0 kubenswrapper[7553]: I0318 17:41:51.133564 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9b424d6c-7440-4c98-ac19-2d0642c696fd-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" Mar 18 17:41:51.133745 master-0 kubenswrapper[7553]: I0318 17:41:51.133660 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:51.133823 master-0 kubenswrapper[7553]: I0318 17:41:51.133766 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:41:51.133863 master-0 kubenswrapper[7553]: I0318 17:41:51.133848 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm8jj\" (UniqueName: \"kubernetes.io/projected/d26d4515-391e-41a5-8c82-1b2b8a375662-kube-api-access-bm8jj\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:41:51.133920 master-0 kubenswrapper[7553]: I0318 17:41:51.133889 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d969530-c138-4fb7-9bfe-0825be66c009-host-slash\") pod \"iptables-alerter-f7jp5\" (UID: 
\"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:51.133971 master-0 kubenswrapper[7553]: I0318 17:41:51.133947 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-images\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:51.134013 master-0 kubenswrapper[7553]: I0318 17:41:51.133984 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" Mar 18 17:41:51.134052 master-0 kubenswrapper[7553]: I0318 17:41:51.134023 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:41:51.134052 master-0 kubenswrapper[7553]: I0318 17:41:51.134026 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:51.134132 master-0 kubenswrapper[7553]: I0318 17:41:51.134058 7553 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.134132 master-0 kubenswrapper[7553]: I0318 17:41:51.134096 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-socket-dir-parent\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.134198 master-0 kubenswrapper[7553]: I0318 17:41:51.134129 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-slash\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.134198 master-0 kubenswrapper[7553]: I0318 17:41:51.134171 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-config\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" Mar 18 17:41:51.134291 master-0 kubenswrapper[7553]: I0318 17:41:51.134247 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:41:51.134556 
master-0 kubenswrapper[7553]: I0318 17:41:51.134527 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts9b9\" (UniqueName: \"kubernetes.io/projected/fea7b899-fde4-4463-9520-4d433a8ebe21-kube-api-access-ts9b9\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:41:51.134604 master-0 kubenswrapper[7553]: I0318 17:41:51.134565 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf82n\" (UniqueName: \"kubernetes.io/projected/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-kube-api-access-nf82n\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" Mar 18 17:41:51.134669 master-0 kubenswrapper[7553]: I0318 17:41:51.134616 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb522b02-0b93-4711-9041-566daa06b95a-serving-cert\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:41:51.134669 master-0 kubenswrapper[7553]: I0318 17:41:51.134653 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" Mar 18 17:41:51.134730 master-0 kubenswrapper[7553]: I0318 17:41:51.134675 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-multus\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.134730 master-0 kubenswrapper[7553]: I0318 17:41:51.134695 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-netd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.134822 master-0 kubenswrapper[7553]: I0318 17:41:51.134776 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:41:51.134858 master-0 kubenswrapper[7553]: I0318 17:41:51.134827 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-config\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" Mar 18 17:41:51.134902 master-0 kubenswrapper[7553]: I0318 17:41:51.134875 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8k5q\" (UniqueName: \"kubernetes.io/projected/99e215da-759d-4fff-af65-0fb64245fbd0-kube-api-access-n8k5q\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 17:41:51.134937 master-0 kubenswrapper[7553]: I0318 17:41:51.134915 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:41:51.134971 master-0 kubenswrapper[7553]: I0318 17:41:51.134939 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a3a6c2c-78e7-41f3-acff-20173cbc012a-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" Mar 18 17:41:51.135108 master-0 kubenswrapper[7553]: I0318 17:41:51.135066 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-bound-sa-token\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:51.135171 master-0 kubenswrapper[7553]: I0318 17:41:51.135135 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd868\" (UniqueName: \"kubernetes.io/projected/1d969530-c138-4fb7-9bfe-0825be66c009-kube-api-access-cd868\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:51.135225 master-0 kubenswrapper[7553]: I0318 17:41:51.135201 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-metrics-tls\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:41:51.135260 master-0 kubenswrapper[7553]: I0318 17:41:51.135218 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-images\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:51.135260 master-0 kubenswrapper[7553]: I0318 17:41:51.135089 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb522b02-0b93-4711-9041-566daa06b95a-serving-cert\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:41:51.135441 master-0 kubenswrapper[7553]: I0318 17:41:51.135390 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-config\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:51.135519 master-0 kubenswrapper[7553]: I0318 17:41:51.135420 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a3a6c2c-78e7-41f3-acff-20173cbc012a-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" Mar 18 17:41:51.135586 master-0 
kubenswrapper[7553]: I0318 17:41:51.135568 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a3a6c2c-78e7-41f3-acff-20173cbc012a-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" Mar 18 17:41:51.135627 master-0 kubenswrapper[7553]: I0318 17:41:51.135592 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-var-lib-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.135751 master-0 kubenswrapper[7553]: I0318 17:41:51.135701 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9875ed82-813c-483d-8471-8f9b74b774ee-webhook-cert\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:51.135837 master-0 kubenswrapper[7553]: I0318 17:41:51.135798 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:51.135922 master-0 kubenswrapper[7553]: I0318 17:41:51.135894 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrdqg\" (UniqueName: 
\"kubernetes.io/projected/7c6694a8-ccd0-491b-9f21-215450f6ce67-kube-api-access-mrdqg\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:51.136040 master-0 kubenswrapper[7553]: I0318 17:41:51.136010 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:41:51.136119 master-0 kubenswrapper[7553]: I0318 17:41:51.136090 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-metrics-tls\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:41:51.136156 master-0 kubenswrapper[7553]: I0318 17:41:51.136094 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c087ce06-a16b-41f4-ba93-8fccdee09003-serving-cert\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:51.136206 master-0 kubenswrapper[7553]: I0318 17:41:51.136182 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-conf-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.136255 
master-0 kubenswrapper[7553]: I0318 17:41:51.136232 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-env-overrides\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.136350 master-0 kubenswrapper[7553]: I0318 17:41:51.136324 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovn-node-metrics-cert\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.136396 master-0 kubenswrapper[7553]: I0318 17:41:51.136373 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-script-lib\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.136439 master-0 kubenswrapper[7553]: I0318 17:41:51.136410 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:51.136593 master-0 kubenswrapper[7553]: I0318 17:41:51.136556 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-env-overrides\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.136694 master-0 kubenswrapper[7553]: I0318 17:41:51.136672 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:41:51.136730 master-0 kubenswrapper[7553]: I0318 17:41:51.136685 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/1d969530-c138-4fb7-9bfe-0825be66c009-iptables-alerter-script\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:51.136766 master-0 kubenswrapper[7553]: I0318 17:41:51.136749 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:41:51.136799 master-0 kubenswrapper[7553]: I0318 17:41:51.136778 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c087ce06-a16b-41f4-ba93-8fccdee09003-serving-cert\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:51.136830 master-0 kubenswrapper[7553]: I0318 17:41:51.136787 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/a02399de-859b-45b1-9b00-18a08f285f39-service-ca\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:41:51.136878 master-0 kubenswrapper[7553]: I0318 17:41:51.136859 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/1d969530-c138-4fb7-9bfe-0825be66c009-iptables-alerter-script\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:51.136915 master-0 kubenswrapper[7553]: I0318 17:41:51.136878 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovn-node-metrics-cert\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.137011 master-0 kubenswrapper[7553]: I0318 17:41:51.136975 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk59q\" (UniqueName: \"kubernetes.io/projected/cb522b02-0b93-4711-9041-566daa06b95a-kube-api-access-fk59q\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:41:51.137156 master-0 kubenswrapper[7553]: I0318 17:41:51.137086 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grnqn\" (UniqueName: \"kubernetes.io/projected/5b0e38f3-3ab5-4519-86a6-68003deb94da-kube-api-access-grnqn\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.137206 master-0 kubenswrapper[7553]: I0318 17:41:51.137177 7553 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:51.137315 master-0 kubenswrapper[7553]: I0318 17:41:51.137258 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-client\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:51.137401 master-0 kubenswrapper[7553]: I0318 17:41:51.137295 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-script-lib\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.137492 master-0 kubenswrapper[7553]: I0318 17:41:51.137121 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:51.137575 master-0 kubenswrapper[7553]: I0318 17:41:51.137413 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c355c750-ae2f-49fa-9a16-8fb4f688853e-serving-cert\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " 
pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 17:41:51.137689 master-0 kubenswrapper[7553]: I0318 17:41:51.137670 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:51.137805 master-0 kubenswrapper[7553]: I0318 17:41:51.137084 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a02399de-859b-45b1-9b00-18a08f285f39-service-ca\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:41:51.137866 master-0 kubenswrapper[7553]: I0318 17:41:51.137796 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c355c750-ae2f-49fa-9a16-8fb4f688853e-serving-cert\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 17:41:51.137906 master-0 kubenswrapper[7553]: I0318 17:41:51.137441 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:51.137953 master-0 kubenswrapper[7553]: I0318 17:41:51.137774 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-config\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:51.138003 master-0 kubenswrapper[7553]: I0318 17:41:51.137986 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-config\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:51.138106 master-0 kubenswrapper[7553]: I0318 17:41:51.138069 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmsm4\" (UniqueName: \"kubernetes.io/projected/dba5f8d7-4d25-42b5-9c58-813221bf96bb-kube-api-access-lmsm4\") pod \"csi-snapshot-controller-operator-5f5d689c6b-z9vvz\" (UID: \"dba5f8d7-4d25-42b5-9c58-813221bf96bb\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz" Mar 18 17:41:51.138211 master-0 kubenswrapper[7553]: I0318 17:41:51.138178 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-cnibin\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:41:51.138336 master-0 kubenswrapper[7553]: I0318 17:41:51.138261 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " 
pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:41:51.138390 master-0 kubenswrapper[7553]: I0318 17:41:51.138352 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-config\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:51.138429 master-0 kubenswrapper[7553]: I0318 17:41:51.137778 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-client\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:51.138429 master-0 kubenswrapper[7553]: I0318 17:41:51.138361 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:51.138511 master-0 kubenswrapper[7553]: I0318 17:41:51.137477 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9875ed82-813c-483d-8471-8f9b74b774ee-webhook-cert\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:51.138626 master-0 kubenswrapper[7553]: I0318 17:41:51.138593 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-env-overrides\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:41:51.138713 master-0 kubenswrapper[7553]: I0318 17:41:51.138682 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5tw2\" (UniqueName: \"kubernetes.io/projected/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-kube-api-access-l5tw2\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:51.138713 master-0 kubenswrapper[7553]: I0318 17:41:51.138689 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-config\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:51.138789 master-0 kubenswrapper[7553]: I0318 17:41:51.138744 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:41:51.138857 master-0 kubenswrapper[7553]: I0318 17:41:51.138822 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " 
pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:41:51.139097 master-0 kubenswrapper[7553]: I0318 17:41:51.139054 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:51.139229 master-0 kubenswrapper[7553]: I0318 17:41:51.139199 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f26e239-2988-4faa-bc1d-24b15b95b7f1-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:51.139298 master-0 kubenswrapper[7553]: I0318 17:41:51.139244 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pp5f\" (UniqueName: \"kubernetes.io/projected/b1352cc7-4099-44c5-9c31-8259fb783bc7-kube-api-access-9pp5f\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" Mar 18 17:41:51.139298 master-0 kubenswrapper[7553]: I0318 17:41:51.139249 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-env-overrides\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:41:51.139383 master-0 kubenswrapper[7553]: I0318 17:41:51.139335 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" Mar 18 17:41:51.139383 master-0 kubenswrapper[7553]: I0318 17:41:51.139365 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cb522b02-0b93-4711-9041-566daa06b95a-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:41:51.139462 master-0 kubenswrapper[7553]: I0318 17:41:51.139389 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-systemd-units\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.139462 master-0 kubenswrapper[7553]: I0318 17:41:51.139455 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-ovn\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.139553 master-0 kubenswrapper[7553]: I0318 17:41:51.139479 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.139553 
master-0 kubenswrapper[7553]: I0318 17:41:51.139508 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-env-overrides\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:51.139553 master-0 kubenswrapper[7553]: I0318 17:41:51.139532 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-ovnkube-identity-cm\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:51.139657 master-0 kubenswrapper[7553]: I0318 17:41:51.139555 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-ca\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:51.139657 master-0 kubenswrapper[7553]: I0318 17:41:51.139592 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4zcv\" (UniqueName: \"kubernetes.io/projected/7b94e08c-7944-445e-bfb7-6c7c14880c65-kube-api-access-g4zcv\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:41:51.139657 master-0 kubenswrapper[7553]: I0318 17:41:51.139616 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnknt\" (UniqueName: \"kubernetes.io/projected/0100a259-1358-45e8-8191-4e1f9a14ec89-kube-api-access-tnknt\") pod 
\"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:51.139657 master-0 kubenswrapper[7553]: I0318 17:41:51.139639 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-hostroot\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.139812 master-0 kubenswrapper[7553]: I0318 17:41:51.139668 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwps9\" (UniqueName: \"kubernetes.io/projected/e73f2834-c56c-4cef-ac3c-2317e9a4324c-kube-api-access-qwps9\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:41:51.139812 master-0 kubenswrapper[7553]: I0318 17:41:51.139695 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b424d6c-7440-4c98-ac19-2d0642c696fd-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" Mar 18 17:41:51.139812 master-0 kubenswrapper[7553]: I0318 17:41:51.139723 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:51.139812 master-0 kubenswrapper[7553]: I0318 17:41:51.139747 7553 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:51.139812 master-0 kubenswrapper[7553]: I0318 17:41:51.139772 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tvgq\" (UniqueName: \"kubernetes.io/projected/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-kube-api-access-2tvgq\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" Mar 18 17:41:51.139812 master-0 kubenswrapper[7553]: I0318 17:41:51.139798 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.140063 master-0 kubenswrapper[7553]: I0318 17:41:51.139822 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/99e215da-759d-4fff-af65-0fb64245fbd0-operand-assets\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 17:41:51.140063 master-0 kubenswrapper[7553]: I0318 17:41:51.139850 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c355c750-ae2f-49fa-9a16-8fb4f688853e-config\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: 
\"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 17:41:51.140063 master-0 kubenswrapper[7553]: I0318 17:41:51.139878 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfnqp\" (UniqueName: \"kubernetes.io/projected/c355c750-ae2f-49fa-9a16-8fb4f688853e-kube-api-access-zfnqp\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 17:41:51.140063 master-0 kubenswrapper[7553]: I0318 17:41:51.139993 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:51.140063 master-0 kubenswrapper[7553]: I0318 17:41:51.140012 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f26e239-2988-4faa-bc1d-24b15b95b7f1-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:51.140305 master-0 kubenswrapper[7553]: I0318 17:41:51.140025 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c6694a8-ccd0-491b-9f21-215450f6ce67-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:51.140305 master-0 kubenswrapper[7553]: 
I0318 17:41:51.140131 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b424d6c-7440-4c98-ac19-2d0642c696fd-config\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" Mar 18 17:41:51.140305 master-0 kubenswrapper[7553]: I0318 17:41:51.140211 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-os-release\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.140305 master-0 kubenswrapper[7553]: I0318 17:41:51.140255 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-multus-certs\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.140447 master-0 kubenswrapper[7553]: I0318 17:41:51.140337 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" Mar 18 17:41:51.140530 master-0 kubenswrapper[7553]: I0318 17:41:51.140504 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b424d6c-7440-4c98-ac19-2d0642c696fd-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" Mar 18 17:41:51.140577 master-0 kubenswrapper[7553]: I0318 17:41:51.140558 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-ca\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:51.140782 master-0 kubenswrapper[7553]: I0318 17:41:51.140752 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b424d6c-7440-4c98-ac19-2d0642c696fd-config\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" Mar 18 17:41:51.140829 master-0 kubenswrapper[7553]: I0318 17:41:51.140760 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-env-overrides\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:51.140829 master-0 kubenswrapper[7553]: I0318 17:41:51.140808 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-system-cni-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:41:51.140945 master-0 kubenswrapper[7553]: I0318 17:41:51.140918 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:51.141070 master-0 kubenswrapper[7553]: I0318 17:41:51.141042 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:51.141070 master-0 kubenswrapper[7553]: I0318 17:41:51.141055 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cb522b02-0b93-4711-9041-566daa06b95a-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:41:51.141159 master-0 kubenswrapper[7553]: I0318 17:41:51.141074 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/99e215da-759d-4fff-af65-0fb64245fbd0-operand-assets\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 17:41:51.141207 master-0 kubenswrapper[7553]: I0318 17:41:51.141171 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-os-release\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " 
pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:41:51.141252 master-0 kubenswrapper[7553]: I0318 17:41:51.141217 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lwsm\" (UniqueName: \"kubernetes.io/projected/994fff04-c1d7-4f10-8d4b-6b49a6934829-kube-api-access-9lwsm\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.141310 master-0 kubenswrapper[7553]: I0318 17:41:51.141269 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/99e215da-759d-4fff-af65-0fb64245fbd0-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 17:41:51.141424 master-0 kubenswrapper[7553]: I0318 17:41:51.141389 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-host-etc-kube\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:41:51.141477 master-0 kubenswrapper[7553]: I0318 17:41:51.141443 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-serving-cert\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:51.141477 master-0 kubenswrapper[7553]: I0318 17:41:51.141454 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-ovnkube-identity-cm\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:51.141562 master-0 kubenswrapper[7553]: I0318 17:41:51.141496 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c355c750-ae2f-49fa-9a16-8fb4f688853e-config\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 17:41:51.141562 master-0 kubenswrapper[7553]: I0318 17:41:51.141522 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-config\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.141633 master-0 kubenswrapper[7553]: I0318 17:41:51.141566 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:51.141733 master-0 kubenswrapper[7553]: I0318 17:41:51.141706 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pqww\" (UniqueName: \"kubernetes.io/projected/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-kube-api-access-2pqww\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:41:51.141785 master-0 kubenswrapper[7553]: I0318 
17:41:51.141733 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-config\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.141785 master-0 kubenswrapper[7553]: I0318 17:41:51.141716 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-serving-cert\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:51.141888 master-0 kubenswrapper[7553]: I0318 17:41:51.141792 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clm4b\" (UniqueName: \"kubernetes.io/projected/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-kube-api-access-clm4b\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:51.141888 master-0 kubenswrapper[7553]: I0318 17:41:51.141884 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sl7p\" (UniqueName: \"kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-kube-api-access-5sl7p\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:51.142007 master-0 kubenswrapper[7553]: I0318 17:41:51.141986 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-system-cni-dir\") pod \"multus-64tx9\" (UID: 
\"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.142050 master-0 kubenswrapper[7553]: I0318 17:41:51.142016 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-netns\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.142050 master-0 kubenswrapper[7553]: I0318 17:41:51.142042 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgnz6\" (UniqueName: \"kubernetes.io/projected/5a4f94f3-d63a-4869-b723-ae9637610b4b-kube-api-access-hgnz6\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:51.142118 master-0 kubenswrapper[7553]: I0318 17:41:51.142061 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-cni-binary-copy\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.142182 master-0 kubenswrapper[7553]: I0318 17:41:51.142150 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:41:51.142246 master-0 kubenswrapper[7553]: I0318 17:41:51.142190 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:51.142246 master-0 kubenswrapper[7553]: I0318 17:41:51.142222 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:41:51.142246 master-0 kubenswrapper[7553]: I0318 17:41:51.142234 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-cni-binary-copy\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.142381 master-0 kubenswrapper[7553]: I0318 17:41:51.142251 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwlxb\" (UniqueName: \"kubernetes.io/projected/37b3753f-bf4f-4a9e-a4a8-d58296bada79-kube-api-access-zwlxb\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:51.142381 master-0 kubenswrapper[7553]: I0318 17:41:51.142334 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-config\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " 
pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:51.142509 master-0 kubenswrapper[7553]: I0318 17:41:51.142463 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-789k6\" (UniqueName: \"kubernetes.io/projected/c087ce06-a16b-41f4-ba93-8fccdee09003-kube-api-access-789k6\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:51.142585 master-0 kubenswrapper[7553]: I0318 17:41:51.142544 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-config\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:51.142585 master-0 kubenswrapper[7553]: I0318 17:41:51.142580 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-node-log\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.142662 master-0 kubenswrapper[7553]: I0318 17:41:51.142580 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:41:51.142662 master-0 kubenswrapper[7553]: I0318 17:41:51.142613 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/26575d68-0488-4dfa-a5d0-5016e481dba6-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" Mar 18 17:41:51.142662 master-0 kubenswrapper[7553]: I0318 17:41:51.142638 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-bin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.142662 master-0 kubenswrapper[7553]: I0318 17:41:51.142657 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-systemd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.142787 master-0 kubenswrapper[7553]: I0318 17:41:51.142676 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-bin\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.142787 master-0 kubenswrapper[7553]: I0318 17:41:51.142693 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.142787 master-0 
kubenswrapper[7553]: I0318 17:41:51.142715 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26575d68-0488-4dfa-a5d0-5016e481dba6-config\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" Mar 18 17:41:51.142787 master-0 kubenswrapper[7553]: I0318 17:41:51.142736 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:51.142787 master-0 kubenswrapper[7553]: I0318 17:41:51.142760 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:41:51.142787 master-0 kubenswrapper[7553]: I0318 17:41:51.142782 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" Mar 18 17:41:51.142986 master-0 kubenswrapper[7553]: I0318 17:41:51.142830 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26575d68-0488-4dfa-a5d0-5016e481dba6-serving-cert\") 
pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" Mar 18 17:41:51.142986 master-0 kubenswrapper[7553]: I0318 17:41:51.142937 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:41:51.142986 master-0 kubenswrapper[7553]: I0318 17:41:51.142933 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sclm5\" (UniqueName: \"kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-kube-api-access-sclm5\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:51.143092 master-0 kubenswrapper[7553]: I0318 17:41:51.143018 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26575d68-0488-4dfa-a5d0-5016e481dba6-config\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" Mar 18 17:41:51.143092 master-0 kubenswrapper[7553]: I0318 17:41:51.143051 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-cnibin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.143165 master-0 kubenswrapper[7553]: I0318 17:41:51.143103 7553 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s6f5\" (UniqueName: \"kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5\") pod \"network-check-target-ctd49\" (UID: \"978dcca6-b396-463f-9614-9e24194a1aaa\") " pod="openshift-network-diagnostics/network-check-target-ctd49"
Mar 18 17:41:51.143165 master-0 kubenswrapper[7553]: I0318 17:41:51.143125 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-kubelet\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.143236 master-0 kubenswrapper[7553]: I0318 17:41:51.143175 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-etc-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.143306 master-0 kubenswrapper[7553]: I0318 17:41:51.143238 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52"
Mar 18 17:41:51.143352 master-0 kubenswrapper[7553]: I0318 17:41:51.143316 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg"
Mar 18 17:41:51.143407 master-0 kubenswrapper[7553]: I0318 17:41:51.143358 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a3a6c2c-78e7-41f3-acff-20173cbc012a-config\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4"
Mar 18 17:41:51.143407 master-0 kubenswrapper[7553]: I0318 17:41:51.143398 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e64a377-f497-4416-8f22-d5c7f52e0b65-trusted-ca\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"
Mar 18 17:41:51.143482 master-0 kubenswrapper[7553]: I0318 17:41:51.143440 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t92bz\" (UniqueName: \"kubernetes.io/projected/e9e04572-1425-440e-9869-6deef05e13e3-kube-api-access-t92bz\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz"
Mar 18 17:41:51.143521 master-0 kubenswrapper[7553]: I0318 17:41:51.143494 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg"
Mar 18 17:41:51.143557 master-0 kubenswrapper[7553]: I0318 17:41:51.143504 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7"
Mar 18 17:41:51.143600 master-0 kubenswrapper[7553]: I0318 17:41:51.143580 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"
Mar 18 17:41:51.143638 master-0 kubenswrapper[7553]: I0318 17:41:51.143619 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-k8s-cni-cncf-io\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.143680 master-0 kubenswrapper[7553]: I0318 17:41:51.143635 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a3a6c2c-78e7-41f3-acff-20173cbc012a-config\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4"
Mar 18 17:41:51.143680 master-0 kubenswrapper[7553]: I0318 17:41:51.143660 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-daemon-config\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.143806 master-0 kubenswrapper[7553]: I0318 17:41:51.143723 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26575d68-0488-4dfa-a5d0-5016e481dba6-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2"
Mar 18 17:41:51.143858 master-0 kubenswrapper[7553]: I0318 17:41:51.143834 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e64a377-f497-4416-8f22-d5c7f52e0b65-trusted-ca\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"
Mar 18 17:41:51.143898 master-0 kubenswrapper[7553]: I0318 17:41:51.143822 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 17:41:51.143898 master-0 kubenswrapper[7553]: I0318 17:41:51.143886 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-kubelet\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.144042 master-0 kubenswrapper[7553]: I0318 17:41:51.144012 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-daemon-config\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.144042 master-0 kubenswrapper[7553]: I0318 17:41:51.144020 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76j8w\" (UniqueName: \"kubernetes.io/projected/9875ed82-813c-483d-8471-8f9b74b774ee-kube-api-access-76j8w\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k"
Mar 18 17:41:51.144203 master-0 kubenswrapper[7553]: I0318 17:41:51.144182 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/99e215da-759d-4fff-af65-0fb64245fbd0-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt"
Mar 18 17:41:51.154943 master-0 kubenswrapper[7553]: I0318 17:41:51.154896 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 18 17:41:51.245074 master-0 kubenswrapper[7553]: I0318 17:41:51.244885 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 17:41:51.245443 master-0 kubenswrapper[7553]: I0318 17:41:51.245423 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-kubelet\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.245585 master-0 kubenswrapper[7553]: I0318 17:41:51.245552 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-netns\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.245707 master-0 kubenswrapper[7553]: I0318 17:41:51.245695 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-etc-kubernetes\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.245789 master-0 kubenswrapper[7553]: I0318 17:41:51.245776 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-log-socket\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.245871 master-0 kubenswrapper[7553]: I0318 17:41:51.245859 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d969530-c138-4fb7-9bfe-0825be66c009-host-slash\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5"
Mar 18 17:41:51.245947 master-0 kubenswrapper[7553]: I0318 17:41:51.245936 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 17:41:51.246020 master-0 kubenswrapper[7553]: I0318 17:41:51.246009 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr"
Mar 18 17:41:51.246087 master-0 kubenswrapper[7553]: I0318 17:41:51.246076 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.246172 master-0 kubenswrapper[7553]: I0318 17:41:51.246142 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-socket-dir-parent\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.246248 master-0 kubenswrapper[7553]: I0318 17:41:51.246237 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-slash\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.246359 master-0 kubenswrapper[7553]: I0318 17:41:51.246344 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4"
Mar 18 17:41:51.246456 master-0 kubenswrapper[7553]: I0318 17:41:51.246443 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-multus\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.246547 master-0 kubenswrapper[7553]: I0318 17:41:51.246529 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-netd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.246650 master-0 kubenswrapper[7553]: I0318 17:41:51.246637 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-var-lib-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.246727 master-0 kubenswrapper[7553]: I0318 17:41:51.246715 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-conf-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.246809 master-0 kubenswrapper[7553]: I0318 17:41:51.246796 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"
Mar 18 17:41:51.246893 master-0 kubenswrapper[7553]: I0318 17:41:51.246881 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"
Mar 18 17:41:51.246976 master-0 kubenswrapper[7553]: I0318 17:41:51.246965 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"
Mar 18 17:41:51.247060 master-0 kubenswrapper[7553]: I0318 17:41:51.247048 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-cnibin\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 17:41:51.247135 master-0 kubenswrapper[7553]: I0318 17:41:51.247121 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8"
Mar 18 17:41:51.247214 master-0 kubenswrapper[7553]: I0318 17:41:51.247203 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"
Mar 18 17:41:51.247302 master-0 kubenswrapper[7553]: I0318 17:41:51.247269 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-systemd-units\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.247375 master-0 kubenswrapper[7553]: I0318 17:41:51.247364 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-ovn\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.247450 master-0 kubenswrapper[7553]: I0318 17:41:51.247438 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.247542 master-0 kubenswrapper[7553]: I0318 17:41:51.247531 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-hostroot\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.247613 master-0 kubenswrapper[7553]: I0318 17:41:51.247599 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc"
Mar 18 17:41:51.247707 master-0 kubenswrapper[7553]: I0318 17:41:51.247692 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.247797 master-0 kubenswrapper[7553]: I0318 17:41:51.247783 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-os-release\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.247882 master-0 kubenswrapper[7553]: I0318 17:41:51.247870 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-multus-certs\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.247950 master-0 kubenswrapper[7553]: I0318 17:41:51.247939 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-system-cni-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 17:41:51.248016 master-0 kubenswrapper[7553]: I0318 17:41:51.248006 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-os-release\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 17:41:51.248089 master-0 kubenswrapper[7553]: I0318 17:41:51.248075 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 17:41:51.248175 master-0 kubenswrapper[7553]: I0318 17:41:51.248163 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-host-etc-kube\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl"
Mar 18 17:41:51.248250 master-0 kubenswrapper[7553]: I0318 17:41:51.248238 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz"
Mar 18 17:41:51.248409 master-0 kubenswrapper[7553]: I0318 17:41:51.248395 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-system-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.248487 master-0 kubenswrapper[7553]: I0318 17:41:51.248475 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-netns\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.248556 master-0 kubenswrapper[7553]: I0318 17:41:51.248544 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg"
Mar 18 17:41:51.248628 master-0 kubenswrapper[7553]: I0318 17:41:51.248614 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"
Mar 18 17:41:51.248706 master-0 kubenswrapper[7553]: I0318 17:41:51.248694 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-node-log\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.248813 master-0 kubenswrapper[7553]: I0318 17:41:51.248798 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-bin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.248901 master-0 kubenswrapper[7553]: I0318 17:41:51.248889 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-systemd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.248973 master-0 kubenswrapper[7553]: I0318 17:41:51.248961 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-bin\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.249045 master-0 kubenswrapper[7553]: I0318 17:41:51.249034 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.249114 master-0 kubenswrapper[7553]: I0318 17:41:51.249103 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-cnibin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.249179 master-0 kubenswrapper[7553]: I0318 17:41:51.249168 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s6f5\" (UniqueName: \"kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5\") pod \"network-check-target-ctd49\" (UID: \"978dcca6-b396-463f-9614-9e24194a1aaa\") " pod="openshift-network-diagnostics/network-check-target-ctd49"
Mar 18 17:41:51.249252 master-0 kubenswrapper[7553]: I0318 17:41:51.249241 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-kubelet\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.249352 master-0 kubenswrapper[7553]: I0318 17:41:51.249339 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-etc-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.249455 master-0 kubenswrapper[7553]: I0318 17:41:51.249433 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v"
Mar 18 17:41:51.249595 master-0 kubenswrapper[7553]: I0318 17:41:51.249565 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7"
Mar 18 17:41:51.249668 master-0 kubenswrapper[7553]: I0318 17:41:51.249657 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52"
Mar 18 17:41:51.249748 master-0 kubenswrapper[7553]: I0318 17:41:51.249735 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"
Mar 18 17:41:51.249826 master-0 kubenswrapper[7553]: I0318 17:41:51.249808 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-k8s-cni-cncf-io\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.249998 master-0 kubenswrapper[7553]: I0318 17:41:51.249982 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-k8s-cni-cncf-io\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.250206 master-0 kubenswrapper[7553]: I0318 17:41:51.250193 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-kubelet\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.250325 master-0 kubenswrapper[7553]: I0318 17:41:51.250306 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-netns\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.250446 master-0 kubenswrapper[7553]: I0318 17:41:51.250422 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-etc-kubernetes\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.250530 master-0 kubenswrapper[7553]: I0318 17:41:51.250517 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-log-socket\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.251082 master-0 kubenswrapper[7553]: I0318 17:41:51.251066 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d969530-c138-4fb7-9bfe-0825be66c009-host-slash\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5"
Mar 18 17:41:51.251201 master-0 kubenswrapper[7553]: I0318 17:41:51.251189 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 17:41:51.251362 master-0 kubenswrapper[7553]: E0318 17:41:51.251332 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 17:41:51.251514 master-0 kubenswrapper[7553]: E0318 17:41:51.251502 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert podName:e73f2834-c56c-4cef-ac3c-2317e9a4324c nodeName:}" failed. No retries permitted until 2026-03-18 17:41:51.751476968 +0000 UTC m=+1.897311631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert") pod "olm-operator-5c9796789-6hngr" (UID: "e73f2834-c56c-4cef-ac3c-2317e9a4324c") : secret "olm-operator-serving-cert" not found
Mar 18 17:41:51.251633 master-0 kubenswrapper[7553]: I0318 17:41:51.251620 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.251724 master-0 kubenswrapper[7553]: I0318 17:41:51.251712 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-socket-dir-parent\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.251805 master-0 kubenswrapper[7553]: I0318 17:41:51.251791 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-slash\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.251906 master-0 kubenswrapper[7553]: E0318 17:41:51.251893 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 17:41:51.251992 master-0 kubenswrapper[7553]: E0318 17:41:51.251982 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert podName:d26d4515-391e-41a5-8c82-1b2b8a375662 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:51.751968968 +0000 UTC m=+1.897803641 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6qqz4" (UID: "d26d4515-391e-41a5-8c82-1b2b8a375662") : secret "package-server-manager-serving-cert" not found
Mar 18 17:41:51.252102 master-0 kubenswrapper[7553]: I0318 17:41:51.252082 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-multus\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.252219 master-0 kubenswrapper[7553]: I0318 17:41:51.252202 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-netd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.252359 master-0 kubenswrapper[7553]: I0318 17:41:51.252341 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-var-lib-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 17:41:51.252446 master-0 kubenswrapper[7553]: I0318 17:41:51.252434 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-conf-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 17:41:51.252543 master-0 kubenswrapper[7553]: E0318 17:41:51.252533 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 18 17:41:51.252618 master-0 kubenswrapper[7553]: E0318 17:41:51.252609 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:51.752598241 +0000 UTC m=+1.898432914 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-operator-tls" not found
Mar 18 17:41:51.252731 master-0 kubenswrapper[7553]: E0318 17:41:51.252720 7553 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 17:41:51.252802 master-0 kubenswrapper[7553]: E0318 17:41:51.252793 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert podName:a02399de-859b-45b1-9b00-18a08f285f39 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:51.752784964 +0000 UTC m=+1.898619637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert") pod "cluster-version-operator-56d8475767-lqvvj" (UID: "a02399de-859b-45b1-9b00-18a08f285f39") : secret "cluster-version-operator-serving-cert" not found
Mar 18 17:41:51.252898 master-0 kubenswrapper[7553]: E0318 17:41:51.252887 7553 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 17:41:51.252980 master-0 kubenswrapper[7553]: E0318 17:41:51.252970 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls podName:7e64a377-f497-4416-8f22-d5c7f52e0b65 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:51.752959378 +0000 UTC m=+1.898794051 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls") pod "ingress-operator-66b84d69b-qb7n6" (UID: "7e64a377-f497-4416-8f22-d5c7f52e0b65") : secret "metrics-tls" not found Mar 18 17:41:51.253068 master-0 kubenswrapper[7553]: I0318 17:41:51.253056 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-cnibin\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:41:51.253170 master-0 kubenswrapper[7553]: E0318 17:41:51.253158 7553 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 17:41:51.253240 master-0 kubenswrapper[7553]: E0318 17:41:51.253231 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls podName:6f26e239-2988-4faa-bc1d-24b15b95b7f1 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:51.753222913 +0000 UTC m=+1.899057586 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-ljrq8" (UID: "6f26e239-2988-4faa-bc1d-24b15b95b7f1") : secret "image-registry-operator-tls" not found Mar 18 17:41:51.253367 master-0 kubenswrapper[7553]: I0318 17:41:51.253350 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:41:51.253451 master-0 kubenswrapper[7553]: I0318 17:41:51.253440 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-systemd-units\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.253524 master-0 kubenswrapper[7553]: I0318 17:41:51.253513 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-ovn\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.253613 master-0 kubenswrapper[7553]: I0318 17:41:51.253599 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.253707 master-0 kubenswrapper[7553]: I0318 17:41:51.253695 
7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-hostroot\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.253812 master-0 kubenswrapper[7553]: E0318 17:41:51.253794 7553 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 17:41:51.253911 master-0 kubenswrapper[7553]: E0318 17:41:51.253901 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs podName:a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e nodeName:}" failed. No retries permitted until 2026-03-18 17:41:51.753891198 +0000 UTC m=+1.899725871 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-gr8jc" (UID: "a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e") : secret "multus-admission-controller-secret" not found Mar 18 17:41:51.254007 master-0 kubenswrapper[7553]: I0318 17:41:51.253992 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.254124 master-0 kubenswrapper[7553]: I0318 17:41:51.254112 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-os-release\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.254202 master-0 kubenswrapper[7553]: I0318 
17:41:51.254190 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-multus-certs\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.254304 master-0 kubenswrapper[7553]: I0318 17:41:51.254265 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-system-cni-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:41:51.254433 master-0 kubenswrapper[7553]: I0318 17:41:51.254415 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-os-release\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:41:51.254549 master-0 kubenswrapper[7553]: E0318 17:41:51.254538 7553 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:51.254629 master-0 kubenswrapper[7553]: E0318 17:41:51.254620 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:51.754611932 +0000 UTC m=+1.900446605 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:51.254724 master-0 kubenswrapper[7553]: I0318 17:41:51.254710 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-host-etc-kube\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:41:51.254819 master-0 kubenswrapper[7553]: E0318 17:41:51.254808 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 17:41:51.254888 master-0 kubenswrapper[7553]: E0318 17:41:51.254879 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert podName:e9e04572-1425-440e-9869-6deef05e13e3 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:51.754870638 +0000 UTC m=+1.900705311 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert") pod "catalog-operator-68f85b4d6c-qpgfz" (UID: "e9e04572-1425-440e-9869-6deef05e13e3") : secret "catalog-operator-serving-cert" not found Mar 18 17:41:51.254975 master-0 kubenswrapper[7553]: I0318 17:41:51.254964 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-system-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.255175 master-0 kubenswrapper[7553]: I0318 17:41:51.255162 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-netns\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.255293 master-0 kubenswrapper[7553]: E0318 17:41:51.255266 7553 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:51.255388 master-0 kubenswrapper[7553]: E0318 17:41:51.255378 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls podName:8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:51.755368668 +0000 UTC m=+1.901203341 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-vjrjg" (UID: "8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311") : secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:51.255475 master-0 kubenswrapper[7553]: I0318 17:41:51.255459 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:41:51.255575 master-0 kubenswrapper[7553]: I0318 17:41:51.255560 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-node-log\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.255680 master-0 kubenswrapper[7553]: I0318 17:41:51.255667 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-bin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.255787 master-0 kubenswrapper[7553]: I0318 17:41:51.255775 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-systemd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.255872 master-0 kubenswrapper[7553]: I0318 17:41:51.255860 
7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-bin\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.255948 master-0 kubenswrapper[7553]: I0318 17:41:51.255936 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.256071 master-0 kubenswrapper[7553]: I0318 17:41:51.256035 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-cnibin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:51.256310 master-0 kubenswrapper[7553]: I0318 17:41:51.256261 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-kubelet\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.256401 master-0 kubenswrapper[7553]: I0318 17:41:51.256388 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-etc-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:51.256510 master-0 kubenswrapper[7553]: E0318 17:41:51.256496 7553 secret.go:189] Couldn't get secret 
openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:51.256586 master-0 kubenswrapper[7553]: E0318 17:41:51.256576 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls podName:b1352cc7-4099-44c5-9c31-8259fb783bc7 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:51.756568082 +0000 UTC m=+1.902402755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls") pod "dns-operator-9c5679d8f-7sc7v" (UID: "b1352cc7-4099-44c5-9c31-8259fb783bc7") : secret "metrics-tls" not found Mar 18 17:41:51.256880 master-0 kubenswrapper[7553]: E0318 17:41:51.256675 7553 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 17:41:51.256954 master-0 kubenswrapper[7553]: E0318 17:41:51.256945 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics podName:ce5831a6-5a8d-4cda-9299-5d86437bcab2 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:51.756936811 +0000 UTC m=+1.902771484 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-l5gm7" (UID: "ce5831a6-5a8d-4cda-9299-5d86437bcab2") : secret "marketplace-operator-metrics" not found Mar 18 17:41:51.257091 master-0 kubenswrapper[7553]: E0318 17:41:51.257078 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:51.257164 master-0 kubenswrapper[7553]: E0318 17:41:51.257155 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:51.757147995 +0000 UTC m=+1.902982668 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:51.758974 master-0 kubenswrapper[7553]: I0318 17:41:51.758853 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:51.759411 master-0 kubenswrapper[7553]: I0318 17:41:51.759392 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod 
\"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:51.759538 master-0 kubenswrapper[7553]: I0318 17:41:51.759496 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:51.759633 master-0 kubenswrapper[7553]: I0318 17:41:51.759621 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:51.759728 master-0 kubenswrapper[7553]: I0318 17:41:51.759717 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:51.759833 master-0 kubenswrapper[7553]: I0318 17:41:51.759816 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" Mar 18 17:41:51.759914 master-0 kubenswrapper[7553]: I0318 
17:41:51.759900 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:51.760040 master-0 kubenswrapper[7553]: I0318 17:41:51.760027 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:51.760161 master-0 kubenswrapper[7553]: I0318 17:41:51.760140 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:41:51.760269 master-0 kubenswrapper[7553]: I0318 17:41:51.760254 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:41:51.760391 master-0 kubenswrapper[7553]: I0318 17:41:51.760374 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:51.760496 master-0 kubenswrapper[7553]: I0318 17:41:51.760483 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:41:51.760602 master-0 kubenswrapper[7553]: I0318 17:41:51.760588 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:51.760783 master-0 kubenswrapper[7553]: E0318 17:41:51.760769 7553 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 17:41:51.760907 master-0 kubenswrapper[7553]: E0318 17:41:51.760894 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls podName:6f26e239-2988-4faa-bc1d-24b15b95b7f1 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.760874031 +0000 UTC m=+2.906708704 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-ljrq8" (UID: "6f26e239-2988-4faa-bc1d-24b15b95b7f1") : secret "image-registry-operator-tls" not found Mar 18 17:41:51.761017 master-0 kubenswrapper[7553]: E0318 17:41:51.761007 7553 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 17:41:51.761097 master-0 kubenswrapper[7553]: E0318 17:41:51.761087 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs podName:a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.761079845 +0000 UTC m=+2.906914518 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-gr8jc" (UID: "a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e") : secret "multus-admission-controller-secret" not found Mar 18 17:41:51.761190 master-0 kubenswrapper[7553]: E0318 17:41:51.761180 7553 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:51.761304 master-0 kubenswrapper[7553]: E0318 17:41:51.761291 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.761259098 +0000 UTC m=+2.907093771 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:51.761434 master-0 kubenswrapper[7553]: E0318 17:41:51.761420 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 17:41:51.761528 master-0 kubenswrapper[7553]: E0318 17:41:51.761515 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert podName:e9e04572-1425-440e-9869-6deef05e13e3 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.761504813 +0000 UTC m=+2.907339486 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert") pod "catalog-operator-68f85b4d6c-qpgfz" (UID: "e9e04572-1425-440e-9869-6deef05e13e3") : secret "catalog-operator-serving-cert" not found Mar 18 17:41:51.761690 master-0 kubenswrapper[7553]: E0318 17:41:51.761644 7553 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:51.761883 master-0 kubenswrapper[7553]: E0318 17:41:51.761868 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls podName:8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.761856211 +0000 UTC m=+2.907690884 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-vjrjg" (UID: "8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311") : secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:51.761998 master-0 kubenswrapper[7553]: E0318 17:41:51.761986 7553 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:51.762067 master-0 kubenswrapper[7553]: E0318 17:41:51.762058 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls podName:b1352cc7-4099-44c5-9c31-8259fb783bc7 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.762050734 +0000 UTC m=+2.907885407 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls") pod "dns-operator-9c5679d8f-7sc7v" (UID: "b1352cc7-4099-44c5-9c31-8259fb783bc7") : secret "metrics-tls" not found Mar 18 17:41:51.762158 master-0 kubenswrapper[7553]: E0318 17:41:51.762147 7553 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 17:41:51.762246 master-0 kubenswrapper[7553]: E0318 17:41:51.762234 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics podName:ce5831a6-5a8d-4cda-9299-5d86437bcab2 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.762224028 +0000 UTC m=+2.908058701 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-l5gm7" (UID: "ce5831a6-5a8d-4cda-9299-5d86437bcab2") : secret "marketplace-operator-metrics" not found Mar 18 17:41:51.762397 master-0 kubenswrapper[7553]: E0318 17:41:51.762384 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:51.762528 master-0 kubenswrapper[7553]: E0318 17:41:51.762514 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.762503204 +0000 UTC m=+2.908337877 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:51.762631 master-0 kubenswrapper[7553]: E0318 17:41:51.762620 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 17:41:51.762707 master-0 kubenswrapper[7553]: E0318 17:41:51.762698 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert podName:e73f2834-c56c-4cef-ac3c-2317e9a4324c nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.762690027 +0000 UTC m=+2.908524700 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert") pod "olm-operator-5c9796789-6hngr" (UID: "e73f2834-c56c-4cef-ac3c-2317e9a4324c") : secret "olm-operator-serving-cert" not found Mar 18 17:41:51.762802 master-0 kubenswrapper[7553]: E0318 17:41:51.762791 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 17:41:51.762877 master-0 kubenswrapper[7553]: E0318 17:41:51.762868 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert podName:d26d4515-391e-41a5-8c82-1b2b8a375662 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.762859281 +0000 UTC m=+2.908693954 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6qqz4" (UID: "d26d4515-391e-41a5-8c82-1b2b8a375662") : secret "package-server-manager-serving-cert" not found Mar 18 17:41:51.762979 master-0 kubenswrapper[7553]: E0318 17:41:51.762967 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:51.763051 master-0 kubenswrapper[7553]: E0318 17:41:51.763043 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.763035545 +0000 UTC m=+2.908870218 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:51.763138 master-0 kubenswrapper[7553]: E0318 17:41:51.763128 7553 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 17:41:51.763210 master-0 kubenswrapper[7553]: E0318 17:41:51.763201 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert podName:a02399de-859b-45b1-9b00-18a08f285f39 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.763193458 +0000 UTC m=+2.909028121 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert") pod "cluster-version-operator-56d8475767-lqvvj" (UID: "a02399de-859b-45b1-9b00-18a08f285f39") : secret "cluster-version-operator-serving-cert" not found Mar 18 17:41:51.763334 master-0 kubenswrapper[7553]: E0318 17:41:51.763320 7553 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:51.763412 master-0 kubenswrapper[7553]: E0318 17:41:51.763403 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls podName:7e64a377-f497-4416-8f22-d5c7f52e0b65 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.763392922 +0000 UTC m=+2.909227595 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls") pod "ingress-operator-66b84d69b-qb7n6" (UID: "7e64a377-f497-4416-8f22-d5c7f52e0b65") : secret "metrics-tls" not found Mar 18 17:41:52.141173 master-0 kubenswrapper[7553]: E0318 17:41:52.141095 7553 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 17:41:52.142449 master-0 kubenswrapper[7553]: E0318 17:41:52.142411 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c6694a8-ccd0-491b-9f21-215450f6ce67-trusted-ca podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.642371316 +0000 UTC m=+2.788206039 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/7c6694a8-ccd0-491b-9f21-215450f6ce67-trusted-ca") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : failed to sync configmap cache: timed out waiting for the condition Mar 18 17:41:52.157061 master-0 kubenswrapper[7553]: I0318 17:41:52.156217 7553 request.go:700] Waited for 1.021915619s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/serviceaccounts/default/token Mar 18 17:41:52.161320 master-0 kubenswrapper[7553]: I0318 17:41:52.160752 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 17:41:52.189715 master-0 kubenswrapper[7553]: I0318 17:41:52.183711 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 17:41:52.246299 master-0 kubenswrapper[7553]: E0318 17:41:52.224396 7553 
secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 17:41:52.246299 master-0 kubenswrapper[7553]: E0318 17:41:52.224501 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.724477365 +0000 UTC m=+2.870312058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "node-tuning-operator-tls" not found Mar 18 17:41:52.277951 master-0 kubenswrapper[7553]: I0318 17:41:52.268976 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmsm4\" (UniqueName: \"kubernetes.io/projected/dba5f8d7-4d25-42b5-9c58-813221bf96bb-kube-api-access-lmsm4\") pod \"csi-snapshot-controller-operator-5f5d689c6b-z9vvz\" (UID: \"dba5f8d7-4d25-42b5-9c58-813221bf96bb\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz" Mar 18 17:41:52.277951 master-0 kubenswrapper[7553]: I0318 17:41:52.273384 7553 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 18 17:41:52.281915 master-0 kubenswrapper[7553]: I0318 17:41:52.281886 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts9b9\" (UniqueName: \"kubernetes.io/projected/fea7b899-fde4-4463-9520-4d433a8ebe21-kube-api-access-ts9b9\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 17:41:52.282098 master-0 kubenswrapper[7553]: E0318 17:41:52.282080 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 17:41:52.283145 master-0 kubenswrapper[7553]: I0318 17:41:52.283131 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02399de-859b-45b1-9b00-18a08f285f39-kube-api-access\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:41:52.285179 master-0 kubenswrapper[7553]: I0318 17:41:52.284713 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf82n\" (UniqueName: \"kubernetes.io/projected/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-kube-api-access-nf82n\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" Mar 18 17:41:52.285179 master-0 kubenswrapper[7553]: I0318 17:41:52.279797 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5tw2\" (UniqueName: 
\"kubernetes.io/projected/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-kube-api-access-l5tw2\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 17:41:52.285401 master-0 kubenswrapper[7553]: I0318 17:41:52.285379 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pp5f\" (UniqueName: \"kubernetes.io/projected/b1352cc7-4099-44c5-9c31-8259fb783bc7-kube-api-access-9pp5f\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" Mar 18 17:41:52.285804 master-0 kubenswrapper[7553]: I0318 17:41:52.285782 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lwsm\" (UniqueName: \"kubernetes.io/projected/994fff04-c1d7-4f10-8d4b-6b49a6934829-kube-api-access-9lwsm\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:52.286378 master-0 kubenswrapper[7553]: I0318 17:41:52.286306 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26575d68-0488-4dfa-a5d0-5016e481dba6-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" Mar 18 17:41:52.296523 master-0 kubenswrapper[7553]: I0318 17:41:52.287469 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sl7p\" (UniqueName: \"kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-kube-api-access-5sl7p\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " 
pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:52.296523 master-0 kubenswrapper[7553]: I0318 17:41:52.287858 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b424d6c-7440-4c98-ac19-2d0642c696fd-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" Mar 18 17:41:52.296523 master-0 kubenswrapper[7553]: I0318 17:41:52.289139 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:52.296523 master-0 kubenswrapper[7553]: I0318 17:41:52.289914 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grnqn\" (UniqueName: \"kubernetes.io/projected/5b0e38f3-3ab5-4519-86a6-68003deb94da-kube-api-access-grnqn\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 17:41:52.296523 master-0 kubenswrapper[7553]: I0318 17:41:52.292701 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgnz6\" (UniqueName: \"kubernetes.io/projected/5a4f94f3-d63a-4869-b723-ae9637610b4b-kube-api-access-hgnz6\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:52.296523 master-0 kubenswrapper[7553]: E0318 17:41:52.294396 7553 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: failed to sync secret cache: timed out waiting for 
the condition Mar 18 17:41:52.296523 master-0 kubenswrapper[7553]: E0318 17:41:52.294930 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs podName:5a4f94f3-d63a-4869-b723-ae9637610b4b nodeName:}" failed. No retries permitted until 2026-03-18 17:41:52.794895852 +0000 UTC m=+2.940730525 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs") pod "network-metrics-daemon-mfn52" (UID: "5a4f94f3-d63a-4869-b723-ae9637610b4b") : failed to sync secret cache: timed out waiting for the condition Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: I0318 17:41:52.296895 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4zcv\" (UniqueName: \"kubernetes.io/projected/7b94e08c-7944-445e-bfb7-6c7c14880c65-kube-api-access-g4zcv\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: W0318 17:41:52.297323 7553 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to 
"RuntimeDefault" or "Localhost") Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: E0318 17:41:52.297469 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: E0318 17:41:52.297544 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: I0318 17:41:52.297545 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t92bz\" (UniqueName: \"kubernetes.io/projected/e9e04572-1425-440e-9869-6deef05e13e3-kube-api-access-t92bz\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: I0318 17:41:52.297730 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrdqg\" (UniqueName: \"kubernetes.io/projected/7c6694a8-ccd0-491b-9f21-215450f6ce67-kube-api-access-mrdqg\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: E0318 17:41:52.297787 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: I0318 17:41:52.297799 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk59q\" (UniqueName: \"kubernetes.io/projected/cb522b02-0b93-4711-9041-566daa06b95a-kube-api-access-fk59q\") pod 
\"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: I0318 17:41:52.297981 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlcnh\" (UniqueName: \"kubernetes.io/projected/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-kube-api-access-dlcnh\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: I0318 17:41:52.298119 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tvgq\" (UniqueName: \"kubernetes.io/projected/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-kube-api-access-2tvgq\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: I0318 17:41:52.298445 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwlxb\" (UniqueName: \"kubernetes.io/projected/37b3753f-bf4f-4a9e-a4a8-d58296bada79-kube-api-access-zwlxb\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: I0318 17:41:52.298643 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sclm5\" (UniqueName: \"kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-kube-api-access-sclm5\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:52.307214 master-0 
kubenswrapper[7553]: I0318 17:41:52.298867 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clm4b\" (UniqueName: \"kubernetes.io/projected/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-kube-api-access-clm4b\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: I0318 17:41:52.298913 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76j8w\" (UniqueName: \"kubernetes.io/projected/9875ed82-813c-483d-8471-8f9b74b774ee-kube-api-access-76j8w\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: I0318 17:41:52.299457 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd868\" (UniqueName: \"kubernetes.io/projected/1d969530-c138-4fb7-9bfe-0825be66c009-kube-api-access-cd868\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: I0318 17:41:52.305266 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-bound-sa-token\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: I0318 17:41:52.306320 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-756j8\" (UniqueName: \"kubernetes.io/projected/ce5831a6-5a8d-4cda-9299-5d86437bcab2-kube-api-access-756j8\") pod \"marketplace-operator-89ccd998f-l5gm7\" 
(UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: I0318 17:41:52.306555 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-789k6\" (UniqueName: \"kubernetes.io/projected/c087ce06-a16b-41f4-ba93-8fccdee09003-kube-api-access-789k6\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:41:52.307214 master-0 kubenswrapper[7553]: I0318 17:41:52.306943 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 17:41:52.333629 master-0 kubenswrapper[7553]: I0318 17:41:52.311322 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s6f5\" (UniqueName: \"kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5\") pod \"network-check-target-ctd49\" (UID: \"978dcca6-b396-463f-9614-9e24194a1aaa\") " pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:52.333629 master-0 kubenswrapper[7553]: I0318 17:41:52.315135 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8k5q\" (UniqueName: \"kubernetes.io/projected/99e215da-759d-4fff-af65-0fb64245fbd0-kube-api-access-n8k5q\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 17:41:52.333629 master-0 kubenswrapper[7553]: I0318 17:41:52.321233 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm8jj\" (UniqueName: \"kubernetes.io/projected/d26d4515-391e-41a5-8c82-1b2b8a375662-kube-api-access-bm8jj\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:41:52.347414 master-0 kubenswrapper[7553]: I0318 17:41:52.347345 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 17:41:52.352554 master-0 kubenswrapper[7553]: E0318 17:41:52.348443 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:41:52.352554 master-0 kubenswrapper[7553]: I0318 17:41:52.349046 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfnqp\" (UniqueName: \"kubernetes.io/projected/c355c750-ae2f-49fa-9a16-8fb4f688853e-kube-api-access-zfnqp\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 17:41:52.352554 master-0 kubenswrapper[7553]: I0318 17:41:52.349514 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a3a6c2c-78e7-41f3-acff-20173cbc012a-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" Mar 18 17:41:52.352554 master-0 kubenswrapper[7553]: I0318 17:41:52.349999 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pqww\" (UniqueName: \"kubernetes.io/projected/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-kube-api-access-2pqww\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 17:41:52.352554 master-0 kubenswrapper[7553]: I0318 17:41:52.350059 7553 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:41:52.355504 master-0 kubenswrapper[7553]: I0318 17:41:52.355398 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwps9\" (UniqueName: \"kubernetes.io/projected/e73f2834-c56c-4cef-ac3c-2317e9a4324c-kube-api-access-qwps9\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:41:52.360292 master-0 kubenswrapper[7553]: I0318 17:41:52.360237 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnknt\" (UniqueName: \"kubernetes.io/projected/0100a259-1358-45e8-8191-4e1f9a14ec89-kube-api-access-tnknt\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:41:52.512983 master-0 kubenswrapper[7553]: I0318 17:41:52.512931 7553 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 17:41:52.523214 master-0 kubenswrapper[7553]: I0318 17:41:52.523001 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:52.698952 master-0 kubenswrapper[7553]: I0318 17:41:52.698876 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c6694a8-ccd0-491b-9f21-215450f6ce67-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:52.699474 master-0 kubenswrapper[7553]: I0318 17:41:52.699436 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c6694a8-ccd0-491b-9f21-215450f6ce67-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:52.799602 master-0 kubenswrapper[7553]: I0318 17:41:52.799331 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:41:52.799602 master-0 kubenswrapper[7553]: I0318 17:41:52.799394 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:41:52.799602 master-0 kubenswrapper[7553]: I0318 17:41:52.799423 7553 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:52.799602 master-0 kubenswrapper[7553]: I0318 17:41:52.799444 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:41:52.799602 master-0 kubenswrapper[7553]: I0318 17:41:52.799462 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:52.799602 master-0 kubenswrapper[7553]: I0318 17:41:52.799487 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:52.799602 master-0 kubenswrapper[7553]: I0318 17:41:52.799508 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod 
\"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:52.799602 master-0 kubenswrapper[7553]: I0318 17:41:52.799527 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:52.799602 master-0 kubenswrapper[7553]: I0318 17:41:52.799543 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:52.799602 master-0 kubenswrapper[7553]: I0318 17:41:52.799562 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:52.799602 master-0 kubenswrapper[7553]: I0318 17:41:52.799584 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" Mar 18 17:41:52.799602 master-0 kubenswrapper[7553]: I0318 
17:41:52.799616 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:52.799602 master-0 kubenswrapper[7553]: I0318 17:41:52.799633 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: I0318 17:41:52.799653 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: I0318 17:41:52.799676 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.799805 7553 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.799858 
7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:53.799837952 +0000 UTC m=+3.945672625 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "node-tuning-operator-tls" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800483 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800496 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800520 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert podName:e9e04572-1425-440e-9869-6deef05e13e3 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:54.800510287 +0000 UTC m=+4.946344960 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert") pod "catalog-operator-68f85b4d6c-qpgfz" (UID: "e9e04572-1425-440e-9869-6deef05e13e3") : secret "catalog-operator-serving-cert" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800533 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:54.800527717 +0000 UTC m=+4.946362390 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800536 7553 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800571 7553 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800600 7553 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800556 7553 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800635 7553 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret 
"metrics-tls" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800601 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls podName:8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:54.800590269 +0000 UTC m=+4.946424942 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-vjrjg" (UID: "8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311") : secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800656 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800667 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:54.80065817 +0000 UTC m=+4.946492843 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800676 7553 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800680 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert podName:d26d4515-391e-41a5-8c82-1b2b8a375662 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:54.80067172 +0000 UTC m=+4.946506393 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6qqz4" (UID: "d26d4515-391e-41a5-8c82-1b2b8a375662") : secret "package-server-manager-serving-cert" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800697 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert podName:a02399de-859b-45b1-9b00-18a08f285f39 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:54.800691711 +0000 UTC m=+4.946526374 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert") pod "cluster-version-operator-56d8475767-lqvvj" (UID: "a02399de-859b-45b1-9b00-18a08f285f39") : secret "cluster-version-operator-serving-cert" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800711 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs podName:5a4f94f3-d63a-4869-b723-ae9637610b4b nodeName:}" failed. No retries permitted until 2026-03-18 17:41:53.800703851 +0000 UTC m=+3.946538524 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs") pod "network-metrics-daemon-mfn52" (UID: "5a4f94f3-d63a-4869-b723-ae9637610b4b") : secret "metrics-daemon-secret" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800710 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800725 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls podName:b1352cc7-4099-44c5-9c31-8259fb783bc7 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:54.800717021 +0000 UTC m=+4.946551694 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls") pod "dns-operator-9c5679d8f-7sc7v" (UID: "b1352cc7-4099-44c5-9c31-8259fb783bc7") : secret "metrics-tls" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800742 7553 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800768 7553 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800743 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls podName:7e64a377-f497-4416-8f22-d5c7f52e0b65 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:54.800735602 +0000 UTC m=+4.946570275 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls") pod "ingress-operator-66b84d69b-qb7n6" (UID: "7e64a377-f497-4416-8f22-d5c7f52e0b65") : secret "metrics-tls" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800781 7553 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800791 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs podName:a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e nodeName:}" failed. No retries permitted until 2026-03-18 17:41:54.800785013 +0000 UTC m=+4.946619686 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-gr8jc" (UID: "a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e") : secret "multus-admission-controller-secret" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800805 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:54.800799903 +0000 UTC m=+4.946634576 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800820 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls podName:6f26e239-2988-4faa-bc1d-24b15b95b7f1 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:54.800812653 +0000 UTC m=+4.946647326 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-ljrq8" (UID: "6f26e239-2988-4faa-bc1d-24b15b95b7f1") : secret "image-registry-operator-tls" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800820 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800831 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics podName:ce5831a6-5a8d-4cda-9299-5d86437bcab2 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:54.800826913 +0000 UTC m=+4.946661586 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-l5gm7" (UID: "ce5831a6-5a8d-4cda-9299-5d86437bcab2") : secret "marketplace-operator-metrics" not found Mar 18 17:41:52.800753 master-0 kubenswrapper[7553]: E0318 17:41:52.800848 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert podName:e73f2834-c56c-4cef-ac3c-2317e9a4324c nodeName:}" failed. No retries permitted until 2026-03-18 17:41:54.800839524 +0000 UTC m=+4.946674207 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert") pod "olm-operator-5c9796789-6hngr" (UID: "e73f2834-c56c-4cef-ac3c-2317e9a4324c") : secret "olm-operator-serving-cert" not found Mar 18 17:41:53.094164 master-0 kubenswrapper[7553]: E0318 17:41:53.094094 7553 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252" Mar 18 17:41:53.094994 master-0 kubenswrapper[7553]: E0318 17:41:53.094930 7553 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-storage-version-migrator-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252,Command:[cluster-kube-storage-version-migrator-operator start],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nf82n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg_openshift-kube-storage-version-migrator-operator(f7ff61c7-32d1-4407-a792-8e22bb4d50f9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 17:41:53.096553 master-0 kubenswrapper[7553]: E0318 17:41:53.096492 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" podUID="f7ff61c7-32d1-4407-a792-8e22bb4d50f9" Mar 18 17:41:53.425971 master-0 kubenswrapper[7553]: I0318 17:41:53.425584 7553 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:41:53.455706 master-0 kubenswrapper[7553]: I0318 17:41:53.454843 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:41:53.457055 master-0 kubenswrapper[7553]: I0318 17:41:53.457006 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-ctd49"] Mar 18 17:41:53.482660 master-0 kubenswrapper[7553]: W0318 17:41:53.482602 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod978dcca6_b396_463f_9614_9e24194a1aaa.slice/crio-a0f14825defb92b50c4747c20631ca30f9e30632027bb38a918f6a6a14b5c095 WatchSource:0}: Error finding container a0f14825defb92b50c4747c20631ca30f9e30632027bb38a918f6a6a14b5c095: Status 404 returned error can't find the container with id a0f14825defb92b50c4747c20631ca30f9e30632027bb38a918f6a6a14b5c095 Mar 18 17:41:53.828913 master-0 kubenswrapper[7553]: I0318 17:41:53.828160 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:53.828913 master-0 kubenswrapper[7553]: I0318 17:41:53.828918 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 
17:41:53.829212 master-0 kubenswrapper[7553]: E0318 17:41:53.828358 7553 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 17:41:53.829212 master-0 kubenswrapper[7553]: E0318 17:41:53.829044 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs podName:5a4f94f3-d63a-4869-b723-ae9637610b4b nodeName:}" failed. No retries permitted until 2026-03-18 17:41:55.829020943 +0000 UTC m=+5.974855606 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs") pod "network-metrics-daemon-mfn52" (UID: "5a4f94f3-d63a-4869-b723-ae9637610b4b") : secret "metrics-daemon-secret" not found Mar 18 17:41:53.829212 master-0 kubenswrapper[7553]: E0318 17:41:53.829098 7553 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 17:41:53.829212 master-0 kubenswrapper[7553]: E0318 17:41:53.829162 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:55.829142416 +0000 UTC m=+5.974977089 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "node-tuning-operator-tls" not found Mar 18 17:41:54.187550 master-0 kubenswrapper[7553]: I0318 17:41:54.187479 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" event={"ID":"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed","Type":"ContainerStarted","Data":"61d9d3ba71cb2cabd3457558bd5286207ddf90062735fd7732ad59d0bbbaa150"} Mar 18 17:41:54.189016 master-0 kubenswrapper[7553]: I0318 17:41:54.188973 7553 generic.go:334] "Generic (PLEG): container finished" podID="cb522b02-0b93-4711-9041-566daa06b95a" containerID="f7dedaead357f68edfb6b1633ceea1f3b2a9443afcc42c378f59d11efb0de8ae" exitCode=0 Mar 18 17:41:54.189082 master-0 kubenswrapper[7553]: I0318 17:41:54.189036 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" event={"ID":"cb522b02-0b93-4711-9041-566daa06b95a","Type":"ContainerDied","Data":"f7dedaead357f68edfb6b1633ceea1f3b2a9443afcc42c378f59d11efb0de8ae"} Mar 18 17:41:54.190738 master-0 kubenswrapper[7553]: I0318 17:41:54.190696 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" event={"ID":"c087ce06-a16b-41f4-ba93-8fccdee09003","Type":"ContainerStarted","Data":"d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26"} Mar 18 17:41:54.192934 master-0 kubenswrapper[7553]: I0318 17:41:54.192892 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-ctd49" 
event={"ID":"978dcca6-b396-463f-9614-9e24194a1aaa","Type":"ContainerStarted","Data":"2e108b018237775152fb7257eb59fedef9eb4f2cf8d068a30234fba03a6488da"} Mar 18 17:41:54.193055 master-0 kubenswrapper[7553]: I0318 17:41:54.193036 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-ctd49" event={"ID":"978dcca6-b396-463f-9614-9e24194a1aaa","Type":"ContainerStarted","Data":"a0f14825defb92b50c4747c20631ca30f9e30632027bb38a918f6a6a14b5c095"} Mar 18 17:41:54.193122 master-0 kubenswrapper[7553]: I0318 17:41:54.193111 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:41:54.194403 master-0 kubenswrapper[7553]: I0318 17:41:54.194364 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" event={"ID":"3a3a6c2c-78e7-41f3-acff-20173cbc012a","Type":"ContainerStarted","Data":"b999f5b1617525702f37002cc0c97c3e6d4fa7798646c662c89a3d3f27b32af1"} Mar 18 17:41:54.196908 master-0 kubenswrapper[7553]: I0318 17:41:54.196856 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" event={"ID":"0100a259-1358-45e8-8191-4e1f9a14ec89","Type":"ContainerStarted","Data":"958f393f34a137d2d87755401a3809cbe07ffd8110f371434c3fc67267936608"} Mar 18 17:41:54.203123 master-0 kubenswrapper[7553]: I0318 17:41:54.203072 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" event={"ID":"c355c750-ae2f-49fa-9a16-8fb4f688853e","Type":"ContainerStarted","Data":"0cb61f4df91a50839abfb90676637f2a5c84478782eb2749acec5427cc366219"} Mar 18 17:41:54.205004 master-0 kubenswrapper[7553]: I0318 17:41:54.204967 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" 
event={"ID":"0b9ff55a-73fb-473f-b406-1f8b6cffdb89","Type":"ContainerStarted","Data":"fa4790d4c10a7e1c45ffad9596658e2a3e44e654967b539ab7d40f5e263966e8"} Mar 18 17:41:54.206550 master-0 kubenswrapper[7553]: I0318 17:41:54.206474 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz" event={"ID":"dba5f8d7-4d25-42b5-9c58-813221bf96bb","Type":"ContainerStarted","Data":"398454ad32431a1333f76c77a1b11d599119897614da05c5c31c8fb7c4b10bc1"} Mar 18 17:41:54.208163 master-0 kubenswrapper[7553]: I0318 17:41:54.208109 7553 generic.go:334] "Generic (PLEG): container finished" podID="99e215da-759d-4fff-af65-0fb64245fbd0" containerID="836d36e41f9d465b68171473ea87c95a04be32a563d9abf3bd2beb4eacf6a497" exitCode=0 Mar 18 17:41:54.208240 master-0 kubenswrapper[7553]: I0318 17:41:54.208188 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" event={"ID":"99e215da-759d-4fff-af65-0fb64245fbd0","Type":"ContainerDied","Data":"836d36e41f9d465b68171473ea87c95a04be32a563d9abf3bd2beb4eacf6a497"} Mar 18 17:41:54.210045 master-0 kubenswrapper[7553]: I0318 17:41:54.210007 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" event={"ID":"9b424d6c-7440-4c98-ac19-2d0642c696fd","Type":"ContainerStarted","Data":"5027239b16a33eaa242303aa483c7e285890e090e452f1d81a0bd3e82446b39c"} Mar 18 17:41:54.215551 master-0 kubenswrapper[7553]: I0318 17:41:54.215511 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 17:41:54.589350 master-0 kubenswrapper[7553]: I0318 17:41:54.581600 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:41:54.589350 master-0 
kubenswrapper[7553]: I0318 17:41:54.589198 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:41:54.691482 master-0 kubenswrapper[7553]: I0318 17:41:54.691395 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:54.694344 master-0 kubenswrapper[7553]: I0318 17:41:54.693872 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:54.725573 master-0 kubenswrapper[7553]: I0318 17:41:54.722188 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:54.767321 master-0 kubenswrapper[7553]: I0318 17:41:54.765961 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:41:54.824030 master-0 kubenswrapper[7553]: I0318 17:41:54.823958 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp"] Mar 18 17:41:54.824336 master-0 kubenswrapper[7553]: E0318 17:41:54.824134 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2eeb961-15e7-4c19-8f37-659cc2cb6539" containerName="prober" Mar 18 17:41:54.824336 master-0 kubenswrapper[7553]: I0318 17:41:54.824147 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2eeb961-15e7-4c19-8f37-659cc2cb6539" containerName="prober" Mar 18 17:41:54.824336 master-0 kubenswrapper[7553]: E0318 17:41:54.824156 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6633f4-7370-49b8-a607-6a3fa52a098e" containerName="assisted-installer-controller" Mar 18 17:41:54.824336 master-0 kubenswrapper[7553]: I0318 17:41:54.824163 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6633f4-7370-49b8-a607-6a3fa52a098e" 
containerName="assisted-installer-controller" Mar 18 17:41:54.824336 master-0 kubenswrapper[7553]: I0318 17:41:54.824227 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2eeb961-15e7-4c19-8f37-659cc2cb6539" containerName="prober" Mar 18 17:41:54.824336 master-0 kubenswrapper[7553]: I0318 17:41:54.824236 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="be6633f4-7370-49b8-a607-6a3fa52a098e" containerName="assisted-installer-controller" Mar 18 17:41:54.824711 master-0 kubenswrapper[7553]: I0318 17:41:54.824534 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" Mar 18 17:41:54.837990 master-0 kubenswrapper[7553]: I0318 17:41:54.837924 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp"] Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: I0318 17:41:54.850920 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: I0318 17:41:54.850962 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: I0318 17:41:54.850995 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: I0318 17:41:54.851015 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: I0318 17:41:54.851044 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: I0318 17:41:54.851074 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: I0318 17:41:54.851096 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: I0318 17:41:54.851115 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: I0318 17:41:54.851133 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: I0318 17:41:54.851156 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: I0318 17:41:54.851177 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: I0318 17:41:54.851201 7553 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkcx9\" (UniqueName: \"kubernetes.io/projected/7d39d93e-9be3-47e1-a44e-be2d18b55446-kube-api-access-vkcx9\") pod \"csi-snapshot-controller-64854d9cff-vpjmp\" (UID: \"7d39d93e-9be3-47e1-a44e-be2d18b55446\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: I0318 17:41:54.851224 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: I0318 17:41:54.851242 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: E0318 17:41:54.851380 7553 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:54.852035 master-0 kubenswrapper[7553]: E0318 17:41:54.851433 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.851418223 +0000 UTC m=+8.997252896 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852142 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852168 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert podName:e9e04572-1425-440e-9869-6deef05e13e3 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.852160178 +0000 UTC m=+8.997994851 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert") pod "catalog-operator-68f85b4d6c-qpgfz" (UID: "e9e04572-1425-440e-9869-6deef05e13e3") : secret "catalog-operator-serving-cert" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852203 7553 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852222 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls podName:8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.852216479 +0000 UTC m=+8.998051152 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-vjrjg" (UID: "8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311") : secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852254 7553 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852269 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls podName:b1352cc7-4099-44c5-9c31-8259fb783bc7 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.85226485 +0000 UTC m=+8.998099523 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls") pod "dns-operator-9c5679d8f-7sc7v" (UID: "b1352cc7-4099-44c5-9c31-8259fb783bc7") : secret "metrics-tls" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852320 7553 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852338 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics podName:ce5831a6-5a8d-4cda-9299-5d86437bcab2 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.852333322 +0000 UTC m=+8.998167995 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-l5gm7" (UID: "ce5831a6-5a8d-4cda-9299-5d86437bcab2") : secret "marketplace-operator-metrics" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852370 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852386 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.852380773 +0000 UTC m=+8.998215446 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852418 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852432 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert podName:e73f2834-c56c-4cef-ac3c-2317e9a4324c nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.852427684 +0000 UTC m=+8.998262357 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert") pod "olm-operator-5c9796789-6hngr" (UID: "e73f2834-c56c-4cef-ac3c-2317e9a4324c") : secret "olm-operator-serving-cert" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852464 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852480 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert podName:d26d4515-391e-41a5-8c82-1b2b8a375662 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.852474964 +0000 UTC m=+8.998309637 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6qqz4" (UID: "d26d4515-391e-41a5-8c82-1b2b8a375662") : secret "package-server-manager-serving-cert" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852508 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852525 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.852519385 +0000 UTC m=+8.998354058 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852558 7553 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852574 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert podName:a02399de-859b-45b1-9b00-18a08f285f39 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.852569416 +0000 UTC m=+8.998404089 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert") pod "cluster-version-operator-56d8475767-lqvvj" (UID: "a02399de-859b-45b1-9b00-18a08f285f39") : secret "cluster-version-operator-serving-cert" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852606 7553 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:54.852605 master-0 kubenswrapper[7553]: E0318 17:41:54.852623 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls podName:7e64a377-f497-4416-8f22-d5c7f52e0b65 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.852618047 +0000 UTC m=+8.998452720 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls") pod "ingress-operator-66b84d69b-qb7n6" (UID: "7e64a377-f497-4416-8f22-d5c7f52e0b65") : secret "metrics-tls" not found Mar 18 17:41:54.853123 master-0 kubenswrapper[7553]: E0318 17:41:54.852653 7553 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 17:41:54.853123 master-0 kubenswrapper[7553]: E0318 17:41:54.852669 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls podName:6f26e239-2988-4faa-bc1d-24b15b95b7f1 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.852664258 +0000 UTC m=+8.998498931 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-ljrq8" (UID: "6f26e239-2988-4faa-bc1d-24b15b95b7f1") : secret "image-registry-operator-tls" not found Mar 18 17:41:54.853123 master-0 kubenswrapper[7553]: E0318 17:41:54.852721 7553 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 17:41:54.853123 master-0 kubenswrapper[7553]: E0318 17:41:54.852739 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs podName:a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.85273287 +0000 UTC m=+8.998567533 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-gr8jc" (UID: "a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e") : secret "multus-admission-controller-secret" not found Mar 18 17:41:54.954307 master-0 kubenswrapper[7553]: I0318 17:41:54.952173 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkcx9\" (UniqueName: \"kubernetes.io/projected/7d39d93e-9be3-47e1-a44e-be2d18b55446-kube-api-access-vkcx9\") pod \"csi-snapshot-controller-64854d9cff-vpjmp\" (UID: \"7d39d93e-9be3-47e1-a44e-be2d18b55446\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" Mar 18 17:41:55.007323 master-0 kubenswrapper[7553]: I0318 17:41:55.005891 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkcx9\" (UniqueName: \"kubernetes.io/projected/7d39d93e-9be3-47e1-a44e-be2d18b55446-kube-api-access-vkcx9\") pod \"csi-snapshot-controller-64854d9cff-vpjmp\" (UID: \"7d39d93e-9be3-47e1-a44e-be2d18b55446\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" Mar 18 17:41:55.143962 master-0 kubenswrapper[7553]: I0318 17:41:55.143837 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" Mar 18 17:41:55.184837 master-0 kubenswrapper[7553]: I0318 17:41:55.184779 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:41:55.192981 master-0 kubenswrapper[7553]: I0318 17:41:55.192749 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:41:55.219668 master-0 kubenswrapper[7553]: I0318 17:41:55.217623 7553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 17:41:55.470929 master-0 kubenswrapper[7553]: I0318 17:41:55.470837 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp"] Mar 18 17:41:55.651017 master-0 kubenswrapper[7553]: W0318 17:41:55.650694 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d39d93e_9be3_47e1_a44e_be2d18b55446.slice/crio-3b679ceb3c60d8555810f42293ecb4e72f346293b26bbcc64d5cc427efca2bcd WatchSource:0}: Error finding container 3b679ceb3c60d8555810f42293ecb4e72f346293b26bbcc64d5cc427efca2bcd: Status 404 returned error can't find the container with id 3b679ceb3c60d8555810f42293ecb4e72f346293b26bbcc64d5cc427efca2bcd Mar 18 17:41:55.876694 master-0 kubenswrapper[7553]: I0318 17:41:55.876635 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:55.876963 master-0 kubenswrapper[7553]: I0318 
17:41:55.876791 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:55.876963 master-0 kubenswrapper[7553]: E0318 17:41:55.876943 7553 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 17:41:55.877030 master-0 kubenswrapper[7553]: E0318 17:41:55.877000 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs podName:5a4f94f3-d63a-4869-b723-ae9637610b4b nodeName:}" failed. No retries permitted until 2026-03-18 17:41:59.876982888 +0000 UTC m=+10.022817561 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs") pod "network-metrics-daemon-mfn52" (UID: "5a4f94f3-d63a-4869-b723-ae9637610b4b") : secret "metrics-daemon-secret" not found Mar 18 17:41:55.877405 master-0 kubenswrapper[7553]: E0318 17:41:55.877385 7553 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 17:41:55.877458 master-0 kubenswrapper[7553]: E0318 17:41:55.877416 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:59.877408577 +0000 UTC m=+10.023243250 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "node-tuning-operator-tls" not found Mar 18 17:41:56.222930 master-0 kubenswrapper[7553]: I0318 17:41:56.222890 7553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 17:41:56.223102 master-0 kubenswrapper[7553]: I0318 17:41:56.222887 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" event={"ID":"7d39d93e-9be3-47e1-a44e-be2d18b55446","Type":"ContainerStarted","Data":"3b679ceb3c60d8555810f42293ecb4e72f346293b26bbcc64d5cc427efca2bcd"} Mar 18 17:41:56.223198 master-0 kubenswrapper[7553]: I0318 17:41:56.223143 7553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 17:41:56.223401 master-0 kubenswrapper[7553]: I0318 17:41:56.223355 7553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 17:41:56.846345 master-0 kubenswrapper[7553]: I0318 17:41:56.844084 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-8nhkn"] Mar 18 17:41:56.846345 master-0 kubenswrapper[7553]: I0318 17:41:56.844645 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:56.846908 master-0 kubenswrapper[7553]: I0318 17:41:56.846890 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 17:41:56.848555 master-0 kubenswrapper[7553]: I0318 17:41:56.847544 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 17:41:56.848555 master-0 kubenswrapper[7553]: I0318 17:41:56.847647 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 17:41:56.854595 master-0 kubenswrapper[7553]: I0318 17:41:56.854561 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 17:41:56.856578 master-0 kubenswrapper[7553]: I0318 17:41:56.855738 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 17:41:56.856578 master-0 kubenswrapper[7553]: I0318 17:41:56.856347 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 17:41:56.858103 master-0 kubenswrapper[7553]: I0318 17:41:56.857853 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-8nhkn"] Mar 18 17:41:56.943807 master-0 kubenswrapper[7553]: I0318 17:41:56.943751 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4"] Mar 18 17:41:56.946350 master-0 kubenswrapper[7553]: I0318 17:41:56.944432 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:56.949552 master-0 kubenswrapper[7553]: I0318 17:41:56.949507 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 17:41:56.949779 master-0 kubenswrapper[7553]: I0318 17:41:56.949746 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 17:41:56.949902 master-0 kubenswrapper[7553]: I0318 17:41:56.949869 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 17:41:56.949942 master-0 kubenswrapper[7553]: I0318 17:41:56.949911 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 17:41:56.950297 master-0 kubenswrapper[7553]: I0318 17:41:56.950056 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 17:41:56.960170 master-0 kubenswrapper[7553]: I0318 17:41:56.960124 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4"] Mar 18 17:41:56.993630 master-0 kubenswrapper[7553]: I0318 17:41:56.993565 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-config\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:56.993821 master-0 kubenswrapper[7553]: I0318 17:41:56.993783 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqhdk\" (UniqueName: 
\"kubernetes.io/projected/d4ec93a3-fdfb-400d-86c3-932df6200fe4-kube-api-access-xqhdk\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:56.993929 master-0 kubenswrapper[7553]: I0318 17:41:56.993843 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-client-ca\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:56.993961 master-0 kubenswrapper[7553]: I0318 17:41:56.993931 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:56.994032 master-0 kubenswrapper[7553]: I0318 17:41:56.993998 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ec93a3-fdfb-400d-86c3-932df6200fe4-serving-cert\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:57.095488 master-0 kubenswrapper[7553]: I0318 17:41:57.094905 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ec93a3-fdfb-400d-86c3-932df6200fe4-serving-cert\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " 
pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:57.095488 master-0 kubenswrapper[7553]: I0318 17:41:57.095476 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ktgn\" (UniqueName: \"kubernetes.io/projected/6897138d-43c5-4502-83a5-64ac783886a0-kube-api-access-4ktgn\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:57.095488 master-0 kubenswrapper[7553]: I0318 17:41:57.095501 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-client-ca\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:57.095973 master-0 kubenswrapper[7553]: I0318 17:41:57.095526 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-config\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:57.095973 master-0 kubenswrapper[7553]: E0318 17:41:57.095117 7553 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:41:57.095973 master-0 kubenswrapper[7553]: I0318 17:41:57.095571 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqhdk\" (UniqueName: \"kubernetes.io/projected/d4ec93a3-fdfb-400d-86c3-932df6200fe4-kube-api-access-xqhdk\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " 
pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:57.095973 master-0 kubenswrapper[7553]: E0318 17:41:57.095647 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4ec93a3-fdfb-400d-86c3-932df6200fe4-serving-cert podName:d4ec93a3-fdfb-400d-86c3-932df6200fe4 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:57.595621409 +0000 UTC m=+7.741456082 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d4ec93a3-fdfb-400d-86c3-932df6200fe4-serving-cert") pod "controller-manager-f5df8899c-8nhkn" (UID: "d4ec93a3-fdfb-400d-86c3-932df6200fe4") : secret "serving-cert" not found Mar 18 17:41:57.095973 master-0 kubenswrapper[7553]: E0318 17:41:57.095688 7553 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 18 17:41:57.095973 master-0 kubenswrapper[7553]: I0318 17:41:57.095705 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6897138d-43c5-4502-83a5-64ac783886a0-serving-cert\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:57.095973 master-0 kubenswrapper[7553]: E0318 17:41:57.095765 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-config podName:d4ec93a3-fdfb-400d-86c3-932df6200fe4 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:57.595743892 +0000 UTC m=+7.741578665 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-config") pod "controller-manager-f5df8899c-8nhkn" (UID: "d4ec93a3-fdfb-400d-86c3-932df6200fe4") : configmap "config" not found Mar 18 17:41:57.095973 master-0 kubenswrapper[7553]: I0318 17:41:57.095792 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-config\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:57.095973 master-0 kubenswrapper[7553]: I0318 17:41:57.095834 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-client-ca\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:57.095973 master-0 kubenswrapper[7553]: E0318 17:41:57.095918 7553 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 17:41:57.095973 master-0 kubenswrapper[7553]: E0318 17:41:57.095954 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-client-ca podName:d4ec93a3-fdfb-400d-86c3-932df6200fe4 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:57.595943916 +0000 UTC m=+7.741778589 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-client-ca") pod "controller-manager-f5df8899c-8nhkn" (UID: "d4ec93a3-fdfb-400d-86c3-932df6200fe4") : configmap "client-ca" not found Mar 18 17:41:57.095973 master-0 kubenswrapper[7553]: E0318 17:41:57.095965 7553 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 18 17:41:57.095973 master-0 kubenswrapper[7553]: E0318 17:41:57.096005 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-proxy-ca-bundles podName:d4ec93a3-fdfb-400d-86c3-932df6200fe4 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:57.595992307 +0000 UTC m=+7.741826980 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-proxy-ca-bundles") pod "controller-manager-f5df8899c-8nhkn" (UID: "d4ec93a3-fdfb-400d-86c3-932df6200fe4") : configmap "openshift-global-ca" not found Mar 18 17:41:57.095973 master-0 kubenswrapper[7553]: I0318 17:41:57.095922 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:57.122928 master-0 kubenswrapper[7553]: I0318 17:41:57.122762 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqhdk\" (UniqueName: \"kubernetes.io/projected/d4ec93a3-fdfb-400d-86c3-932df6200fe4-kube-api-access-xqhdk\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " 
pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:57.197085 master-0 kubenswrapper[7553]: I0318 17:41:57.196994 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ktgn\" (UniqueName: \"kubernetes.io/projected/6897138d-43c5-4502-83a5-64ac783886a0-kube-api-access-4ktgn\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:57.197393 master-0 kubenswrapper[7553]: I0318 17:41:57.197049 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-client-ca\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:57.197393 master-0 kubenswrapper[7553]: I0318 17:41:57.197303 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6897138d-43c5-4502-83a5-64ac783886a0-serving-cert\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:57.197393 master-0 kubenswrapper[7553]: I0318 17:41:57.197343 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-config\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:57.197612 master-0 kubenswrapper[7553]: E0318 17:41:57.197550 7553 configmap.go:193] Couldn't get configMap 
openshift-route-controller-manager/config: configmap "config" not found Mar 18 17:41:57.197673 master-0 kubenswrapper[7553]: E0318 17:41:57.197618 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-config podName:6897138d-43c5-4502-83a5-64ac783886a0 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:57.69760007 +0000 UTC m=+7.843434743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-config") pod "route-controller-manager-6cd6978d68-zdcm4" (UID: "6897138d-43c5-4502-83a5-64ac783886a0") : configmap "config" not found Mar 18 17:41:57.197827 master-0 kubenswrapper[7553]: E0318 17:41:57.197778 7553 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 17:41:57.197827 master-0 kubenswrapper[7553]: E0318 17:41:57.197821 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-client-ca podName:6897138d-43c5-4502-83a5-64ac783886a0 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:57.697809484 +0000 UTC m=+7.843644157 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-client-ca") pod "route-controller-manager-6cd6978d68-zdcm4" (UID: "6897138d-43c5-4502-83a5-64ac783886a0") : configmap "client-ca" not found Mar 18 17:41:57.197966 master-0 kubenswrapper[7553]: E0318 17:41:57.197890 7553 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:41:57.197966 master-0 kubenswrapper[7553]: E0318 17:41:57.197913 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6897138d-43c5-4502-83a5-64ac783886a0-serving-cert podName:6897138d-43c5-4502-83a5-64ac783886a0 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:57.697907686 +0000 UTC m=+7.843742359 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6897138d-43c5-4502-83a5-64ac783886a0-serving-cert") pod "route-controller-manager-6cd6978d68-zdcm4" (UID: "6897138d-43c5-4502-83a5-64ac783886a0") : secret "serving-cert" not found Mar 18 17:41:57.215398 master-0 kubenswrapper[7553]: I0318 17:41:57.215356 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ktgn\" (UniqueName: \"kubernetes.io/projected/6897138d-43c5-4502-83a5-64ac783886a0-kube-api-access-4ktgn\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:57.233873 master-0 kubenswrapper[7553]: I0318 17:41:57.233798 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-f7jp5" event={"ID":"1d969530-c138-4fb7-9bfe-0825be66c009","Type":"ContainerStarted","Data":"595eec8ce574f73083f1e2371c96407b89cd94b2370674847bb36ec121b703b2"} Mar 18 17:41:57.327472 master-0 kubenswrapper[7553]: I0318 17:41:57.326246 
7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-g5brm"] Mar 18 17:41:57.327472 master-0 kubenswrapper[7553]: I0318 17:41:57.327013 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 17:41:57.330552 master-0 kubenswrapper[7553]: I0318 17:41:57.330496 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 17:41:57.330777 master-0 kubenswrapper[7553]: I0318 17:41:57.330757 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 17:41:57.332251 master-0 kubenswrapper[7553]: I0318 17:41:57.330893 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 17:41:57.332251 master-0 kubenswrapper[7553]: I0318 17:41:57.331034 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 17:41:57.335260 master-0 kubenswrapper[7553]: I0318 17:41:57.335148 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-g5brm"] Mar 18 17:41:57.432339 master-0 kubenswrapper[7553]: I0318 17:41:57.432200 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:41:57.432510 master-0 kubenswrapper[7553]: I0318 17:41:57.432375 7553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 17:41:57.432510 master-0 kubenswrapper[7553]: I0318 17:41:57.432387 7553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 17:41:57.438587 master-0 kubenswrapper[7553]: I0318 17:41:57.438540 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:41:57.500977 
master-0 kubenswrapper[7553]: I0318 17:41:57.500902 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf2qx\" (UniqueName: \"kubernetes.io/projected/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-kube-api-access-rf2qx\") pod \"service-ca-79bc6b8d76-g5brm\" (UID: \"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab\") " pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 17:41:57.501371 master-0 kubenswrapper[7553]: I0318 17:41:57.501004 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-signing-key\") pod \"service-ca-79bc6b8d76-g5brm\" (UID: \"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab\") " pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 17:41:57.501371 master-0 kubenswrapper[7553]: I0318 17:41:57.501058 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-signing-cabundle\") pod \"service-ca-79bc6b8d76-g5brm\" (UID: \"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab\") " pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 17:41:57.603084 master-0 kubenswrapper[7553]: I0318 17:41:57.602993 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ec93a3-fdfb-400d-86c3-932df6200fe4-serving-cert\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:57.603084 master-0 kubenswrapper[7553]: I0318 17:41:57.603072 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-signing-key\") pod \"service-ca-79bc6b8d76-g5brm\" (UID: 
\"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab\") " pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 17:41:57.603475 master-0 kubenswrapper[7553]: I0318 17:41:57.603112 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-config\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:57.603475 master-0 kubenswrapper[7553]: I0318 17:41:57.603148 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-signing-cabundle\") pod \"service-ca-79bc6b8d76-g5brm\" (UID: \"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab\") " pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 17:41:57.603475 master-0 kubenswrapper[7553]: I0318 17:41:57.603238 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-client-ca\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:57.603475 master-0 kubenswrapper[7553]: I0318 17:41:57.603326 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:57.603475 master-0 kubenswrapper[7553]: I0318 17:41:57.603367 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf2qx\" (UniqueName: 
\"kubernetes.io/projected/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-kube-api-access-rf2qx\") pod \"service-ca-79bc6b8d76-g5brm\" (UID: \"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab\") " pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 17:41:57.606290 master-0 kubenswrapper[7553]: E0318 17:41:57.603803 7553 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 18 17:41:57.606290 master-0 kubenswrapper[7553]: E0318 17:41:57.603921 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-config podName:d4ec93a3-fdfb-400d-86c3-932df6200fe4 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.603892302 +0000 UTC m=+8.749726975 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-config") pod "controller-manager-f5df8899c-8nhkn" (UID: "d4ec93a3-fdfb-400d-86c3-932df6200fe4") : configmap "config" not found Mar 18 17:41:57.606290 master-0 kubenswrapper[7553]: E0318 17:41:57.604023 7553 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:41:57.606290 master-0 kubenswrapper[7553]: E0318 17:41:57.604055 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4ec93a3-fdfb-400d-86c3-932df6200fe4-serving-cert podName:d4ec93a3-fdfb-400d-86c3-932df6200fe4 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.604045885 +0000 UTC m=+8.749880628 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d4ec93a3-fdfb-400d-86c3-932df6200fe4-serving-cert") pod "controller-manager-f5df8899c-8nhkn" (UID: "d4ec93a3-fdfb-400d-86c3-932df6200fe4") : secret "serving-cert" not found Mar 18 17:41:57.606290 master-0 kubenswrapper[7553]: E0318 17:41:57.606090 7553 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 17:41:57.606290 master-0 kubenswrapper[7553]: E0318 17:41:57.606229 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-client-ca podName:d4ec93a3-fdfb-400d-86c3-932df6200fe4 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.606193873 +0000 UTC m=+8.752028596 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-client-ca") pod "controller-manager-f5df8899c-8nhkn" (UID: "d4ec93a3-fdfb-400d-86c3-932df6200fe4") : configmap "client-ca" not found Mar 18 17:41:57.606521 master-0 kubenswrapper[7553]: E0318 17:41:57.606312 7553 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 18 17:41:57.606521 master-0 kubenswrapper[7553]: E0318 17:41:57.606343 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-proxy-ca-bundles podName:d4ec93a3-fdfb-400d-86c3-932df6200fe4 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.606333516 +0000 UTC m=+8.752168259 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-proxy-ca-bundles") pod "controller-manager-f5df8899c-8nhkn" (UID: "d4ec93a3-fdfb-400d-86c3-932df6200fe4") : configmap "openshift-global-ca" not found Mar 18 17:41:57.608029 master-0 kubenswrapper[7553]: I0318 17:41:57.607644 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-signing-cabundle\") pod \"service-ca-79bc6b8d76-g5brm\" (UID: \"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab\") " pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 17:41:57.611980 master-0 kubenswrapper[7553]: I0318 17:41:57.611934 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-signing-key\") pod \"service-ca-79bc6b8d76-g5brm\" (UID: \"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab\") " pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 17:41:57.623670 master-0 kubenswrapper[7553]: I0318 17:41:57.623496 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf2qx\" (UniqueName: \"kubernetes.io/projected/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-kube-api-access-rf2qx\") pod \"service-ca-79bc6b8d76-g5brm\" (UID: \"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab\") " pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 17:41:57.662823 master-0 kubenswrapper[7553]: I0318 17:41:57.662753 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 17:41:57.704963 master-0 kubenswrapper[7553]: I0318 17:41:57.704385 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-client-ca\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:57.706490 master-0 kubenswrapper[7553]: I0318 17:41:57.705727 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6897138d-43c5-4502-83a5-64ac783886a0-serving-cert\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:57.706490 master-0 kubenswrapper[7553]: I0318 17:41:57.705841 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-config\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:57.706490 master-0 kubenswrapper[7553]: E0318 17:41:57.705936 7553 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: configmap "config" not found Mar 18 17:41:57.706490 master-0 kubenswrapper[7553]: E0318 17:41:57.705955 7553 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 17:41:57.706490 master-0 kubenswrapper[7553]: E0318 17:41:57.706077 7553 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-config podName:6897138d-43c5-4502-83a5-64ac783886a0 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.706038701 +0000 UTC m=+8.851873424 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-config") pod "route-controller-manager-6cd6978d68-zdcm4" (UID: "6897138d-43c5-4502-83a5-64ac783886a0") : configmap "config" not found Mar 18 17:41:57.706490 master-0 kubenswrapper[7553]: E0318 17:41:57.706098 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-client-ca podName:6897138d-43c5-4502-83a5-64ac783886a0 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.706090552 +0000 UTC m=+8.851925325 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-client-ca") pod "route-controller-manager-6cd6978d68-zdcm4" (UID: "6897138d-43c5-4502-83a5-64ac783886a0") : configmap "client-ca" not found Mar 18 17:41:57.706843 master-0 kubenswrapper[7553]: E0318 17:41:57.706636 7553 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:41:57.706843 master-0 kubenswrapper[7553]: E0318 17:41:57.706680 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6897138d-43c5-4502-83a5-64ac783886a0-serving-cert podName:6897138d-43c5-4502-83a5-64ac783886a0 nodeName:}" failed. No retries permitted until 2026-03-18 17:41:58.706668084 +0000 UTC m=+8.852502977 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6897138d-43c5-4502-83a5-64ac783886a0-serving-cert") pod "route-controller-manager-6cd6978d68-zdcm4" (UID: "6897138d-43c5-4502-83a5-64ac783886a0") : secret "serving-cert" not found Mar 18 17:41:58.239024 master-0 kubenswrapper[7553]: I0318 17:41:58.238881 7553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 17:41:58.498345 master-0 kubenswrapper[7553]: I0318 17:41:58.498192 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:41:58.502138 master-0 kubenswrapper[7553]: I0318 17:41:58.502096 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:41:58.621037 master-0 kubenswrapper[7553]: I0318 17:41:58.620986 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:58.621231 master-0 kubenswrapper[7553]: I0318 17:41:58.621147 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ec93a3-fdfb-400d-86c3-932df6200fe4-serving-cert\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:58.621359 master-0 kubenswrapper[7553]: E0318 17:41:58.621308 7553 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 18 17:41:58.621455 master-0 kubenswrapper[7553]: E0318 17:41:58.621434 7553 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-proxy-ca-bundles podName:d4ec93a3-fdfb-400d-86c3-932df6200fe4 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:00.621403147 +0000 UTC m=+10.767237840 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-proxy-ca-bundles") pod "controller-manager-f5df8899c-8nhkn" (UID: "d4ec93a3-fdfb-400d-86c3-932df6200fe4") : configmap "openshift-global-ca" not found Mar 18 17:41:58.621563 master-0 kubenswrapper[7553]: I0318 17:41:58.621535 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-config\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:58.621686 master-0 kubenswrapper[7553]: E0318 17:41:58.621635 7553 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:41:58.621759 master-0 kubenswrapper[7553]: E0318 17:41:58.621737 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4ec93a3-fdfb-400d-86c3-932df6200fe4-serving-cert podName:d4ec93a3-fdfb-400d-86c3-932df6200fe4 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:00.621714514 +0000 UTC m=+10.767549197 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d4ec93a3-fdfb-400d-86c3-932df6200fe4-serving-cert") pod "controller-manager-f5df8899c-8nhkn" (UID: "d4ec93a3-fdfb-400d-86c3-932df6200fe4") : secret "serving-cert" not found Mar 18 17:41:58.621935 master-0 kubenswrapper[7553]: I0318 17:41:58.621909 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-client-ca\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:58.623126 master-0 kubenswrapper[7553]: I0318 17:41:58.623106 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-config\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:58.623531 master-0 kubenswrapper[7553]: I0318 17:41:58.623507 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-client-ca\") pod \"controller-manager-f5df8899c-8nhkn\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:58.722862 master-0 kubenswrapper[7553]: I0318 17:41:58.722807 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-client-ca\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:58.723172 master-0 
kubenswrapper[7553]: E0318 17:41:58.723013 7553 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 17:41:58.723172 master-0 kubenswrapper[7553]: E0318 17:41:58.723141 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-client-ca podName:6897138d-43c5-4502-83a5-64ac783886a0 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:00.723111625 +0000 UTC m=+10.868946308 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-client-ca") pod "route-controller-manager-6cd6978d68-zdcm4" (UID: "6897138d-43c5-4502-83a5-64ac783886a0") : configmap "client-ca" not found Mar 18 17:41:58.723372 master-0 kubenswrapper[7553]: I0318 17:41:58.723302 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6897138d-43c5-4502-83a5-64ac783886a0-serving-cert\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:58.723561 master-0 kubenswrapper[7553]: I0318 17:41:58.723523 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-config\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:58.723746 master-0 kubenswrapper[7553]: E0318 17:41:58.723527 7553 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:41:58.723838 master-0 kubenswrapper[7553]: E0318 17:41:58.723765 7553 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6897138d-43c5-4502-83a5-64ac783886a0-serving-cert podName:6897138d-43c5-4502-83a5-64ac783886a0 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:00.723754229 +0000 UTC m=+10.869588912 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6897138d-43c5-4502-83a5-64ac783886a0-serving-cert") pod "route-controller-manager-6cd6978d68-zdcm4" (UID: "6897138d-43c5-4502-83a5-64ac783886a0") : secret "serving-cert" not found Mar 18 17:41:58.724710 master-0 kubenswrapper[7553]: I0318 17:41:58.724665 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-config\") pod \"route-controller-manager-6cd6978d68-zdcm4\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: I0318 17:41:58.936599 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: I0318 17:41:58.936660 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: I0318 17:41:58.936683 7553 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: I0318 17:41:58.936701 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: I0318 17:41:58.936728 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: I0318 17:41:58.936750 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: I0318 17:41:58.936774 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod 
\"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: I0318 17:41:58.936794 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: I0318 17:41:58.936828 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: I0318 17:41:58.936848 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: I0318 17:41:58.936868 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:41:58.941300 master-0 
kubenswrapper[7553]: I0318 17:41:58.936888 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: I0318 17:41:58.936910 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937039 7553 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937092 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls podName:7e64a377-f497-4416-8f22-d5c7f52e0b65 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:06.937077124 +0000 UTC m=+17.082911797 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls") pod "ingress-operator-66b84d69b-qb7n6" (UID: "7e64a377-f497-4416-8f22-d5c7f52e0b65") : secret "metrics-tls" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937133 7553 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937151 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls podName:6f26e239-2988-4faa-bc1d-24b15b95b7f1 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:06.937144106 +0000 UTC m=+17.082978779 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-ljrq8" (UID: "6f26e239-2988-4faa-bc1d-24b15b95b7f1") : secret "image-registry-operator-tls" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937185 7553 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937200 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs podName:a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e nodeName:}" failed. No retries permitted until 2026-03-18 17:42:06.937195637 +0000 UTC m=+17.083030310 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-gr8jc" (UID: "a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e") : secret "multus-admission-controller-secret" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937235 7553 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937252 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:06.937246848 +0000 UTC m=+17.083081521 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "performance-addon-operator-webhook-cert" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937299 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937316 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert podName:e9e04572-1425-440e-9869-6deef05e13e3 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:06.937310449 +0000 UTC m=+17.083145122 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert") pod "catalog-operator-68f85b4d6c-qpgfz" (UID: "e9e04572-1425-440e-9869-6deef05e13e3") : secret "catalog-operator-serving-cert" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937351 7553 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937371 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls podName:8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:06.937365181 +0000 UTC m=+17.083199854 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-vjrjg" (UID: "8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311") : secret "cluster-monitoring-operator-tls" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937406 7553 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937424 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls podName:b1352cc7-4099-44c5-9c31-8259fb783bc7 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:06.937418912 +0000 UTC m=+17.083253585 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls") pod "dns-operator-9c5679d8f-7sc7v" (UID: "b1352cc7-4099-44c5-9c31-8259fb783bc7") : secret "metrics-tls" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937457 7553 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937473 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics podName:ce5831a6-5a8d-4cda-9299-5d86437bcab2 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:06.937468023 +0000 UTC m=+17.083302696 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-l5gm7" (UID: "ce5831a6-5a8d-4cda-9299-5d86437bcab2") : secret "marketplace-operator-metrics" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937510 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937526 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:06.937520854 +0000 UTC m=+17.083355527 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-webhook-server-cert" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937558 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937574 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert podName:e73f2834-c56c-4cef-ac3c-2317e9a4324c nodeName:}" failed. No retries permitted until 2026-03-18 17:42:06.937569555 +0000 UTC m=+17.083404228 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert") pod "olm-operator-5c9796789-6hngr" (UID: "e73f2834-c56c-4cef-ac3c-2317e9a4324c") : secret "olm-operator-serving-cert" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937626 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937645 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert podName:d26d4515-391e-41a5-8c82-1b2b8a375662 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:06.937639626 +0000 UTC m=+17.083474309 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6qqz4" (UID: "d26d4515-391e-41a5-8c82-1b2b8a375662") : secret "package-server-manager-serving-cert" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937675 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937693 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:06.937686727 +0000 UTC m=+17.083521400 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-operator-tls" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937725 7553 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 17:41:58.941300 master-0 kubenswrapper[7553]: E0318 17:41:58.937743 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert podName:a02399de-859b-45b1-9b00-18a08f285f39 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:06.937737809 +0000 UTC m=+17.083572482 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert") pod "cluster-version-operator-56d8475767-lqvvj" (UID: "a02399de-859b-45b1-9b00-18a08f285f39") : secret "cluster-version-operator-serving-cert" not found Mar 18 17:41:58.984526 master-0 kubenswrapper[7553]: I0318 17:41:58.982877 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-8nhkn"] Mar 18 17:41:58.984526 master-0 kubenswrapper[7553]: E0318 17:41:58.983210 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" podUID="d4ec93a3-fdfb-400d-86c3-932df6200fe4" Mar 18 17:41:58.998171 master-0 kubenswrapper[7553]: I0318 17:41:58.998122 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4"] Mar 18 17:41:58.999238 master-0 kubenswrapper[7553]: E0318 17:41:58.999193 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" podUID="6897138d-43c5-4502-83a5-64ac783886a0" Mar 18 17:41:59.135967 master-0 kubenswrapper[7553]: I0318 17:41:59.135128 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-g5brm"] Mar 18 17:41:59.157538 master-0 kubenswrapper[7553]: W0318 17:41:59.157459 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd4c81e2_699b_4fdf_ac7d_1607cde6a8ab.slice/crio-93f149a1ecb7aaccb9bdce489447440893c003702d0a6409833391c55955f7eb WatchSource:0}: Error 
finding container 93f149a1ecb7aaccb9bdce489447440893c003702d0a6409833391c55955f7eb: Status 404 returned error can't find the container with id 93f149a1ecb7aaccb9bdce489447440893c003702d0a6409833391c55955f7eb Mar 18 17:41:59.246325 master-0 kubenswrapper[7553]: I0318 17:41:59.246220 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" event={"ID":"7d39d93e-9be3-47e1-a44e-be2d18b55446","Type":"ContainerStarted","Data":"579cc4ec2812b3f1711dac655f16b18685227d170cb9b02fdd9aad6faa3c3865"} Mar 18 17:41:59.249549 master-0 kubenswrapper[7553]: I0318 17:41:59.249481 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" event={"ID":"99e215da-759d-4fff-af65-0fb64245fbd0","Type":"ContainerDied","Data":"345b9877bce66c031277690013e8db931d86b5ac05fc33b7cbd7c55a24998003"} Mar 18 17:41:59.249661 master-0 kubenswrapper[7553]: I0318 17:41:59.249437 7553 generic.go:334] "Generic (PLEG): container finished" podID="99e215da-759d-4fff-af65-0fb64245fbd0" containerID="345b9877bce66c031277690013e8db931d86b5ac05fc33b7cbd7c55a24998003" exitCode=0 Mar 18 17:41:59.254767 master-0 kubenswrapper[7553]: I0318 17:41:59.254723 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" event={"ID":"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab","Type":"ContainerStarted","Data":"93f149a1ecb7aaccb9bdce489447440893c003702d0a6409833391c55955f7eb"} Mar 18 17:41:59.258390 master-0 kubenswrapper[7553]: I0318 17:41:59.257819 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" event={"ID":"cb522b02-0b93-4711-9041-566daa06b95a","Type":"ContainerStarted","Data":"8ec96d66f498df1f17ff1b07f364e893b390b96c326cc03f6199600b04196d04"} Mar 18 17:41:59.258390 master-0 kubenswrapper[7553]: I0318 17:41:59.257822 7553 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:59.258390 master-0 kubenswrapper[7553]: I0318 17:41:59.258063 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:59.264296 master-0 kubenswrapper[7553]: I0318 17:41:59.263558 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" podStartSLOduration=1.962713296 podStartE2EDuration="5.26354119s" podCreationTimestamp="2026-03-18 17:41:54 +0000 UTC" firstStartedPulling="2026-03-18 17:41:55.654612145 +0000 UTC m=+5.800446818" lastFinishedPulling="2026-03-18 17:41:58.955440039 +0000 UTC m=+9.101274712" observedRunningTime="2026-03-18 17:41:59.262973928 +0000 UTC m=+9.408808601" watchObservedRunningTime="2026-03-18 17:41:59.26354119 +0000 UTC m=+9.409375863" Mar 18 17:41:59.274954 master-0 kubenswrapper[7553]: I0318 17:41:59.274851 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:41:59.278245 master-0 kubenswrapper[7553]: I0318 17:41:59.277774 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:41:59.344064 master-0 kubenswrapper[7553]: I0318 17:41:59.343977 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ktgn\" (UniqueName: \"kubernetes.io/projected/6897138d-43c5-4502-83a5-64ac783886a0-kube-api-access-4ktgn\") pod \"6897138d-43c5-4502-83a5-64ac783886a0\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " Mar 18 17:41:59.344337 master-0 kubenswrapper[7553]: I0318 17:41:59.344083 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqhdk\" (UniqueName: \"kubernetes.io/projected/d4ec93a3-fdfb-400d-86c3-932df6200fe4-kube-api-access-xqhdk\") pod \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " Mar 18 17:41:59.344337 master-0 kubenswrapper[7553]: I0318 17:41:59.344156 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-config\") pod \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " Mar 18 17:41:59.344337 master-0 kubenswrapper[7553]: I0318 17:41:59.344204 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-config\") pod \"6897138d-43c5-4502-83a5-64ac783886a0\" (UID: \"6897138d-43c5-4502-83a5-64ac783886a0\") " Mar 18 17:41:59.344337 master-0 kubenswrapper[7553]: I0318 17:41:59.344243 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-client-ca\") pod \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\" (UID: \"d4ec93a3-fdfb-400d-86c3-932df6200fe4\") " Mar 18 17:41:59.344931 master-0 kubenswrapper[7553]: I0318 17:41:59.344887 
7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-config" (OuterVolumeSpecName: "config") pod "d4ec93a3-fdfb-400d-86c3-932df6200fe4" (UID: "d4ec93a3-fdfb-400d-86c3-932df6200fe4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:41:59.345010 master-0 kubenswrapper[7553]: I0318 17:41:59.344892 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-config" (OuterVolumeSpecName: "config") pod "6897138d-43c5-4502-83a5-64ac783886a0" (UID: "6897138d-43c5-4502-83a5-64ac783886a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:41:59.345093 master-0 kubenswrapper[7553]: I0318 17:41:59.345032 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-client-ca" (OuterVolumeSpecName: "client-ca") pod "d4ec93a3-fdfb-400d-86c3-932df6200fe4" (UID: "d4ec93a3-fdfb-400d-86c3-932df6200fe4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:41:59.345415 master-0 kubenswrapper[7553]: I0318 17:41:59.345369 7553 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-config\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:59.345415 master-0 kubenswrapper[7553]: I0318 17:41:59.345401 7553 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-config\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:59.345513 master-0 kubenswrapper[7553]: I0318 17:41:59.345420 7553 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:59.350290 master-0 kubenswrapper[7553]: I0318 17:41:59.349362 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4ec93a3-fdfb-400d-86c3-932df6200fe4-kube-api-access-xqhdk" (OuterVolumeSpecName: "kube-api-access-xqhdk") pod "d4ec93a3-fdfb-400d-86c3-932df6200fe4" (UID: "d4ec93a3-fdfb-400d-86c3-932df6200fe4"). InnerVolumeSpecName "kube-api-access-xqhdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:41:59.350290 master-0 kubenswrapper[7553]: I0318 17:41:59.349369 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6897138d-43c5-4502-83a5-64ac783886a0-kube-api-access-4ktgn" (OuterVolumeSpecName: "kube-api-access-4ktgn") pod "6897138d-43c5-4502-83a5-64ac783886a0" (UID: "6897138d-43c5-4502-83a5-64ac783886a0"). InnerVolumeSpecName "kube-api-access-4ktgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:41:59.447133 master-0 kubenswrapper[7553]: I0318 17:41:59.446973 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ktgn\" (UniqueName: \"kubernetes.io/projected/6897138d-43c5-4502-83a5-64ac783886a0-kube-api-access-4ktgn\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:59.447133 master-0 kubenswrapper[7553]: I0318 17:41:59.447037 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqhdk\" (UniqueName: \"kubernetes.io/projected/d4ec93a3-fdfb-400d-86c3-932df6200fe4-kube-api-access-xqhdk\") on node \"master-0\" DevicePath \"\"" Mar 18 17:41:59.952536 master-0 kubenswrapper[7553]: I0318 17:41:59.952418 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:41:59.952536 master-0 kubenswrapper[7553]: I0318 17:41:59.952495 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 17:41:59.953019 master-0 kubenswrapper[7553]: E0318 17:41:59.952709 7553 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 17:41:59.953019 master-0 kubenswrapper[7553]: E0318 17:41:59.952818 7553 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 17:41:59.953019 master-0 
kubenswrapper[7553]: E0318 17:41:59.952886 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs podName:5a4f94f3-d63a-4869-b723-ae9637610b4b nodeName:}" failed. No retries permitted until 2026-03-18 17:42:07.952842971 +0000 UTC m=+18.098677644 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs") pod "network-metrics-daemon-mfn52" (UID: "5a4f94f3-d63a-4869-b723-ae9637610b4b") : secret "metrics-daemon-secret" not found Mar 18 17:41:59.953019 master-0 kubenswrapper[7553]: E0318 17:41:59.952909 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls podName:7c6694a8-ccd0-491b-9f21-215450f6ce67 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:07.952901672 +0000 UTC m=+18.098736345 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-7qwxn" (UID: "7c6694a8-ccd0-491b-9f21-215450f6ce67") : secret "node-tuning-operator-tls" not found Mar 18 17:42:00.274751 master-0 kubenswrapper[7553]: I0318 17:42:00.269253 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" event={"ID":"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab","Type":"ContainerStarted","Data":"a8a00810d795e748f7416b26291bd5e824cc9027054e6c1fabd83a4ff999def0"} Mar 18 17:42:00.274751 master-0 kubenswrapper[7553]: I0318 17:42:00.269507 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-8nhkn" Mar 18 17:42:00.274751 master-0 kubenswrapper[7553]: I0318 17:42:00.270785 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4" Mar 18 17:42:00.295454 master-0 kubenswrapper[7553]: I0318 17:42:00.290856 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" podStartSLOduration=3.290824029 podStartE2EDuration="3.290824029s" podCreationTimestamp="2026-03-18 17:41:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:00.28995751 +0000 UTC m=+10.435792223" watchObservedRunningTime="2026-03-18 17:42:00.290824029 +0000 UTC m=+10.436658742" Mar 18 17:42:00.333961 master-0 kubenswrapper[7553]: I0318 17:42:00.333901 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9"] Mar 18 17:42:00.335078 master-0 kubenswrapper[7553]: I0318 17:42:00.334930 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:00.339144 master-0 kubenswrapper[7553]: I0318 17:42:00.337868 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4"] Mar 18 17:42:00.340015 master-0 kubenswrapper[7553]: I0318 17:42:00.339976 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 17:42:00.340749 master-0 kubenswrapper[7553]: I0318 17:42:00.340718 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 17:42:00.341564 master-0 kubenswrapper[7553]: I0318 17:42:00.341542 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 17:42:00.341683 master-0 kubenswrapper[7553]: I0318 17:42:00.341584 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 17:42:00.341808 master-0 kubenswrapper[7553]: I0318 17:42:00.341784 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 17:42:00.342244 master-0 kubenswrapper[7553]: I0318 17:42:00.342181 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd6978d68-zdcm4"] Mar 18 17:42:00.348901 master-0 kubenswrapper[7553]: I0318 17:42:00.348842 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9"] Mar 18 17:42:00.367069 master-0 kubenswrapper[7553]: I0318 17:42:00.359236 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-client-ca\") pod \"route-controller-manager-67ffc948fb-bpqs9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:00.367069 master-0 kubenswrapper[7553]: I0318 17:42:00.359291 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-serving-cert\") pod \"route-controller-manager-67ffc948fb-bpqs9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:00.367069 master-0 kubenswrapper[7553]: I0318 17:42:00.359343 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwpzm\" (UniqueName: \"kubernetes.io/projected/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-kube-api-access-qwpzm\") pod \"route-controller-manager-67ffc948fb-bpqs9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:00.367069 master-0 kubenswrapper[7553]: I0318 17:42:00.359411 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-config\") pod \"route-controller-manager-67ffc948fb-bpqs9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:00.367069 master-0 kubenswrapper[7553]: I0318 17:42:00.359474 7553 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6897138d-43c5-4502-83a5-64ac783886a0-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:00.367069 master-0 kubenswrapper[7553]: I0318 
17:42:00.359488 7553 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6897138d-43c5-4502-83a5-64ac783886a0-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:00.375641 master-0 kubenswrapper[7553]: I0318 17:42:00.375503 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-8nhkn"] Mar 18 17:42:00.378906 master-0 kubenswrapper[7553]: I0318 17:42:00.378839 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-8nhkn"] Mar 18 17:42:00.460658 master-0 kubenswrapper[7553]: I0318 17:42:00.460596 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-client-ca\") pod \"route-controller-manager-67ffc948fb-bpqs9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:00.460658 master-0 kubenswrapper[7553]: I0318 17:42:00.460645 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-serving-cert\") pod \"route-controller-manager-67ffc948fb-bpqs9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:00.460897 master-0 kubenswrapper[7553]: I0318 17:42:00.460693 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwpzm\" (UniqueName: \"kubernetes.io/projected/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-kube-api-access-qwpzm\") pod \"route-controller-manager-67ffc948fb-bpqs9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:00.460897 
master-0 kubenswrapper[7553]: I0318 17:42:00.460754 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-config\") pod \"route-controller-manager-67ffc948fb-bpqs9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:00.460897 master-0 kubenswrapper[7553]: I0318 17:42:00.460842 7553 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4ec93a3-fdfb-400d-86c3-932df6200fe4-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:00.460897 master-0 kubenswrapper[7553]: I0318 17:42:00.460854 7553 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4ec93a3-fdfb-400d-86c3-932df6200fe4-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:00.461167 master-0 kubenswrapper[7553]: E0318 17:42:00.461149 7553 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:42:00.461316 master-0 kubenswrapper[7553]: E0318 17:42:00.461303 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-serving-cert podName:1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:00.96126888 +0000 UTC m=+11.107103553 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-serving-cert") pod "route-controller-manager-67ffc948fb-bpqs9" (UID: "1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9") : secret "serving-cert" not found Mar 18 17:42:00.461789 master-0 kubenswrapper[7553]: I0318 17:42:00.461765 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-config\") pod \"route-controller-manager-67ffc948fb-bpqs9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:00.462195 master-0 kubenswrapper[7553]: I0318 17:42:00.462176 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-client-ca\") pod \"route-controller-manager-67ffc948fb-bpqs9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:00.485526 master-0 kubenswrapper[7553]: I0318 17:42:00.485488 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwpzm\" (UniqueName: \"kubernetes.io/projected/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-kube-api-access-qwpzm\") pod \"route-controller-manager-67ffc948fb-bpqs9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:00.830415 master-0 kubenswrapper[7553]: I0318 17:42:00.829872 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9"] Mar 18 17:42:00.830415 master-0 kubenswrapper[7553]: E0318 17:42:00.830346 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[serving-cert], unattached volumes=[], 
failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" podUID="1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9" Mar 18 17:42:00.890079 master-0 kubenswrapper[7553]: I0318 17:42:00.888527 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f57667fcd-x6jtn"] Mar 18 17:42:00.890079 master-0 kubenswrapper[7553]: I0318 17:42:00.889126 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:00.895420 master-0 kubenswrapper[7553]: I0318 17:42:00.894916 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 17:42:00.895420 master-0 kubenswrapper[7553]: I0318 17:42:00.895140 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 17:42:00.895420 master-0 kubenswrapper[7553]: I0318 17:42:00.895326 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 17:42:00.907215 master-0 kubenswrapper[7553]: I0318 17:42:00.907175 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 17:42:00.908078 master-0 kubenswrapper[7553]: I0318 17:42:00.908051 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 17:42:00.910522 master-0 kubenswrapper[7553]: I0318 17:42:00.910496 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 17:42:00.924618 master-0 kubenswrapper[7553]: I0318 17:42:00.924569 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f57667fcd-x6jtn"] Mar 18 17:42:00.966370 master-0 
kubenswrapper[7553]: I0318 17:42:00.966318 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-client-ca\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:00.966638 master-0 kubenswrapper[7553]: I0318 17:42:00.966612 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-proxy-ca-bundles\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:00.966773 master-0 kubenswrapper[7553]: I0318 17:42:00.966759 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5066fcb0-4af6-4606-b7e0-6a49915d74f9-serving-cert\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:00.966846 master-0 kubenswrapper[7553]: I0318 17:42:00.966833 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h7vt\" (UniqueName: \"kubernetes.io/projected/5066fcb0-4af6-4606-b7e0-6a49915d74f9-kube-api-access-4h7vt\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:00.966943 master-0 kubenswrapper[7553]: I0318 17:42:00.966929 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-config\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:00.967086 master-0 kubenswrapper[7553]: I0318 17:42:00.967073 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-serving-cert\") pod \"route-controller-manager-67ffc948fb-bpqs9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:00.967340 master-0 kubenswrapper[7553]: E0318 17:42:00.967317 7553 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:42:00.967877 master-0 kubenswrapper[7553]: E0318 17:42:00.967815 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-serving-cert podName:1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:01.967770558 +0000 UTC m=+12.113605421 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-serving-cert") pod "route-controller-manager-67ffc948fb-bpqs9" (UID: "1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9") : secret "serving-cert" not found Mar 18 17:42:01.067644 master-0 kubenswrapper[7553]: I0318 17:42:01.067579 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5066fcb0-4af6-4606-b7e0-6a49915d74f9-serving-cert\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:01.067644 master-0 kubenswrapper[7553]: I0318 17:42:01.067629 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h7vt\" (UniqueName: \"kubernetes.io/projected/5066fcb0-4af6-4606-b7e0-6a49915d74f9-kube-api-access-4h7vt\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:01.067644 master-0 kubenswrapper[7553]: I0318 17:42:01.067661 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-config\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:01.067974 master-0 kubenswrapper[7553]: I0318 17:42:01.067802 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-client-ca\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 
17:42:01.067974 master-0 kubenswrapper[7553]: I0318 17:42:01.067827 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-proxy-ca-bundles\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:01.069424 master-0 kubenswrapper[7553]: E0318 17:42:01.068488 7553 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:42:01.069424 master-0 kubenswrapper[7553]: E0318 17:42:01.068568 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5066fcb0-4af6-4606-b7e0-6a49915d74f9-serving-cert podName:5066fcb0-4af6-4606-b7e0-6a49915d74f9 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:01.568545216 +0000 UTC m=+11.714380059 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5066fcb0-4af6-4606-b7e0-6a49915d74f9-serving-cert") pod "controller-manager-6f57667fcd-x6jtn" (UID: "5066fcb0-4af6-4606-b7e0-6a49915d74f9") : secret "serving-cert" not found Mar 18 17:42:01.069424 master-0 kubenswrapper[7553]: I0318 17:42:01.069290 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-proxy-ca-bundles\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:01.069424 master-0 kubenswrapper[7553]: I0318 17:42:01.069372 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-config\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:01.069609 master-0 kubenswrapper[7553]: I0318 17:42:01.069532 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-client-ca\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:01.090549 master-0 kubenswrapper[7553]: I0318 17:42:01.090443 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h7vt\" (UniqueName: \"kubernetes.io/projected/5066fcb0-4af6-4606-b7e0-6a49915d74f9-kube-api-access-4h7vt\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:01.098649 master-0 
kubenswrapper[7553]: I0318 17:42:01.098575 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:42:01.098826 master-0 kubenswrapper[7553]: I0318 17:42:01.098791 7553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 17:42:01.127263 master-0 kubenswrapper[7553]: I0318 17:42:01.127204 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 17:42:01.181687 master-0 kubenswrapper[7553]: I0318 17:42:01.181629 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:42:01.298330 master-0 kubenswrapper[7553]: I0318 17:42:01.298300 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:01.323596 master-0 kubenswrapper[7553]: I0318 17:42:01.323203 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:01.474650 master-0 kubenswrapper[7553]: I0318 17:42:01.474524 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwpzm\" (UniqueName: \"kubernetes.io/projected/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-kube-api-access-qwpzm\") pod \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " Mar 18 17:42:01.474650 master-0 kubenswrapper[7553]: I0318 17:42:01.474621 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-config\") pod \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " Mar 18 17:42:01.474857 master-0 kubenswrapper[7553]: I0318 17:42:01.474709 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-client-ca\") pod \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " Mar 18 17:42:01.475655 master-0 kubenswrapper[7553]: I0318 17:42:01.475380 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-config" (OuterVolumeSpecName: "config") pod "1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9" (UID: "1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:01.475655 master-0 kubenswrapper[7553]: I0318 17:42:01.475570 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-client-ca" (OuterVolumeSpecName: "client-ca") pod "1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9" (UID: "1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:01.476390 master-0 kubenswrapper[7553]: I0318 17:42:01.476104 7553 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-config\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:01.476390 master-0 kubenswrapper[7553]: I0318 17:42:01.476134 7553 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:01.478884 master-0 kubenswrapper[7553]: I0318 17:42:01.478837 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-kube-api-access-qwpzm" (OuterVolumeSpecName: "kube-api-access-qwpzm") pod "1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9" (UID: "1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9"). InnerVolumeSpecName "kube-api-access-qwpzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:42:01.577738 master-0 kubenswrapper[7553]: I0318 17:42:01.576925 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5066fcb0-4af6-4606-b7e0-6a49915d74f9-serving-cert\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:01.577738 master-0 kubenswrapper[7553]: I0318 17:42:01.577082 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwpzm\" (UniqueName: \"kubernetes.io/projected/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-kube-api-access-qwpzm\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:01.577738 master-0 kubenswrapper[7553]: E0318 17:42:01.577203 7553 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:42:01.577738 master-0 kubenswrapper[7553]: E0318 17:42:01.577309 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5066fcb0-4af6-4606-b7e0-6a49915d74f9-serving-cert podName:5066fcb0-4af6-4606-b7e0-6a49915d74f9 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:02.577256803 +0000 UTC m=+12.723091476 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5066fcb0-4af6-4606-b7e0-6a49915d74f9-serving-cert") pod "controller-manager-6f57667fcd-x6jtn" (UID: "5066fcb0-4af6-4606-b7e0-6a49915d74f9") : secret "serving-cert" not found Mar 18 17:42:01.981551 master-0 kubenswrapper[7553]: I0318 17:42:01.980995 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-serving-cert\") pod \"route-controller-manager-67ffc948fb-bpqs9\" (UID: \"1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9\") " pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:01.981904 master-0 kubenswrapper[7553]: E0318 17:42:01.981262 7553 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:42:01.981904 master-0 kubenswrapper[7553]: E0318 17:42:01.981843 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-serving-cert podName:1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:03.981814316 +0000 UTC m=+14.127648999 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-serving-cert") pod "route-controller-manager-67ffc948fb-bpqs9" (UID: "1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9") : secret "serving-cert" not found Mar 18 17:42:02.125550 master-0 kubenswrapper[7553]: I0318 17:42:02.125466 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6897138d-43c5-4502-83a5-64ac783886a0" path="/var/lib/kubelet/pods/6897138d-43c5-4502-83a5-64ac783886a0/volumes" Mar 18 17:42:02.125892 master-0 kubenswrapper[7553]: I0318 17:42:02.125804 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4ec93a3-fdfb-400d-86c3-932df6200fe4" path="/var/lib/kubelet/pods/d4ec93a3-fdfb-400d-86c3-932df6200fe4/volumes" Mar 18 17:42:02.300877 master-0 kubenswrapper[7553]: I0318 17:42:02.300798 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9" Mar 18 17:42:02.362630 master-0 kubenswrapper[7553]: I0318 17:42:02.362561 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b"] Mar 18 17:42:02.363410 master-0 kubenswrapper[7553]: I0318 17:42:02.363372 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:02.366099 master-0 kubenswrapper[7553]: I0318 17:42:02.366043 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 17:42:02.368024 master-0 kubenswrapper[7553]: I0318 17:42:02.367968 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 17:42:02.368168 master-0 kubenswrapper[7553]: I0318 17:42:02.368050 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 17:42:02.368370 master-0 kubenswrapper[7553]: I0318 17:42:02.368336 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 17:42:02.374000 master-0 kubenswrapper[7553]: I0318 17:42:02.369968 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 17:42:02.374000 master-0 kubenswrapper[7553]: I0318 17:42:02.371467 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9"] Mar 18 17:42:02.375039 master-0 kubenswrapper[7553]: I0318 17:42:02.374979 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b"] Mar 18 17:42:02.376117 master-0 kubenswrapper[7553]: I0318 17:42:02.376069 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67ffc948fb-bpqs9"] Mar 18 17:42:02.476086 master-0 kubenswrapper[7553]: I0318 17:42:02.474613 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f57667fcd-x6jtn"] Mar 18 17:42:02.476679 master-0 kubenswrapper[7553]: E0318 
17:42:02.476603 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" podUID="5066fcb0-4af6-4606-b7e0-6a49915d74f9" Mar 18 17:42:02.524361 master-0 kubenswrapper[7553]: I0318 17:42:02.524292 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/414430ec-af84-4826-b5db-c920c7653c7a-client-ca\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:02.524603 master-0 kubenswrapper[7553]: I0318 17:42:02.524426 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:02.524603 master-0 kubenswrapper[7553]: I0318 17:42:02.524482 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414430ec-af84-4826-b5db-c920c7653c7a-config\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:02.524603 master-0 kubenswrapper[7553]: I0318 17:42:02.524589 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k877\" (UniqueName: \"kubernetes.io/projected/414430ec-af84-4826-b5db-c920c7653c7a-kube-api-access-9k877\") pod 
\"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:02.524742 master-0 kubenswrapper[7553]: I0318 17:42:02.524651 7553 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:02.625896 master-0 kubenswrapper[7553]: I0318 17:42:02.625756 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/414430ec-af84-4826-b5db-c920c7653c7a-client-ca\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:02.626114 master-0 kubenswrapper[7553]: I0318 17:42:02.626070 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:02.626204 master-0 kubenswrapper[7553]: I0318 17:42:02.626185 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414430ec-af84-4826-b5db-c920c7653c7a-config\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:02.626317 master-0 kubenswrapper[7553]: I0318 17:42:02.626301 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k877\" (UniqueName: 
\"kubernetes.io/projected/414430ec-af84-4826-b5db-c920c7653c7a-kube-api-access-9k877\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:02.626355 master-0 kubenswrapper[7553]: I0318 17:42:02.626341 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5066fcb0-4af6-4606-b7e0-6a49915d74f9-serving-cert\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:02.626448 master-0 kubenswrapper[7553]: E0318 17:42:02.626396 7553 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:42:02.626562 master-0 kubenswrapper[7553]: E0318 17:42:02.626530 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert podName:414430ec-af84-4826-b5db-c920c7653c7a nodeName:}" failed. No retries permitted until 2026-03-18 17:42:03.126497485 +0000 UTC m=+13.272332198 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert") pod "route-controller-manager-cb78c4f4b-7s77b" (UID: "414430ec-af84-4826-b5db-c920c7653c7a") : secret "serving-cert" not found Mar 18 17:42:02.627040 master-0 kubenswrapper[7553]: I0318 17:42:02.627014 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/414430ec-af84-4826-b5db-c920c7653c7a-client-ca\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:02.631244 master-0 kubenswrapper[7553]: I0318 17:42:02.630873 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5066fcb0-4af6-4606-b7e0-6a49915d74f9-serving-cert\") pod \"controller-manager-6f57667fcd-x6jtn\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:02.632260 master-0 kubenswrapper[7553]: I0318 17:42:02.632213 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414430ec-af84-4826-b5db-c920c7653c7a-config\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:02.654876 master-0 kubenswrapper[7553]: I0318 17:42:02.654826 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k877\" (UniqueName: \"kubernetes.io/projected/414430ec-af84-4826-b5db-c920c7653c7a-kube-api-access-9k877\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " 
pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:03.131505 master-0 kubenswrapper[7553]: I0318 17:42:03.131458 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:03.131779 master-0 kubenswrapper[7553]: E0318 17:42:03.131738 7553 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:42:03.131840 master-0 kubenswrapper[7553]: E0318 17:42:03.131799 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert podName:414430ec-af84-4826-b5db-c920c7653c7a nodeName:}" failed. No retries permitted until 2026-03-18 17:42:04.131782056 +0000 UTC m=+14.277616729 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert") pod "route-controller-manager-cb78c4f4b-7s77b" (UID: "414430ec-af84-4826-b5db-c920c7653c7a") : secret "serving-cert" not found Mar 18 17:42:03.306299 master-0 kubenswrapper[7553]: I0318 17:42:03.303502 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:03.329530 master-0 kubenswrapper[7553]: I0318 17:42:03.320622 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:03.436363 master-0 kubenswrapper[7553]: I0318 17:42:03.434505 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-proxy-ca-bundles\") pod \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " Mar 18 17:42:03.436363 master-0 kubenswrapper[7553]: I0318 17:42:03.434578 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-config\") pod \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " Mar 18 17:42:03.436363 master-0 kubenswrapper[7553]: I0318 17:42:03.434629 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-client-ca\") pod \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " Mar 18 17:42:03.436363 master-0 kubenswrapper[7553]: I0318 17:42:03.434656 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h7vt\" (UniqueName: \"kubernetes.io/projected/5066fcb0-4af6-4606-b7e0-6a49915d74f9-kube-api-access-4h7vt\") pod \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " Mar 18 17:42:03.436363 master-0 kubenswrapper[7553]: I0318 17:42:03.434680 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5066fcb0-4af6-4606-b7e0-6a49915d74f9-serving-cert\") pod \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\" (UID: \"5066fcb0-4af6-4606-b7e0-6a49915d74f9\") " Mar 18 17:42:03.436363 master-0 kubenswrapper[7553]: I0318 17:42:03.435097 7553 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5066fcb0-4af6-4606-b7e0-6a49915d74f9" (UID: "5066fcb0-4af6-4606-b7e0-6a49915d74f9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:03.436363 master-0 kubenswrapper[7553]: I0318 17:42:03.435499 7553 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:03.436363 master-0 kubenswrapper[7553]: I0318 17:42:03.435573 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5066fcb0-4af6-4606-b7e0-6a49915d74f9" (UID: "5066fcb0-4af6-4606-b7e0-6a49915d74f9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:03.436363 master-0 kubenswrapper[7553]: I0318 17:42:03.435640 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-config" (OuterVolumeSpecName: "config") pod "5066fcb0-4af6-4606-b7e0-6a49915d74f9" (UID: "5066fcb0-4af6-4606-b7e0-6a49915d74f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:03.477610 master-0 kubenswrapper[7553]: I0318 17:42:03.477467 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5066fcb0-4af6-4606-b7e0-6a49915d74f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5066fcb0-4af6-4606-b7e0-6a49915d74f9" (UID: "5066fcb0-4af6-4606-b7e0-6a49915d74f9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 17:42:03.483084 master-0 kubenswrapper[7553]: I0318 17:42:03.482996 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5066fcb0-4af6-4606-b7e0-6a49915d74f9-kube-api-access-4h7vt" (OuterVolumeSpecName: "kube-api-access-4h7vt") pod "5066fcb0-4af6-4606-b7e0-6a49915d74f9" (UID: "5066fcb0-4af6-4606-b7e0-6a49915d74f9"). InnerVolumeSpecName "kube-api-access-4h7vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:42:03.536525 master-0 kubenswrapper[7553]: I0318 17:42:03.536471 7553 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5066fcb0-4af6-4606-b7e0-6a49915d74f9-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:03.536833 master-0 kubenswrapper[7553]: I0318 17:42:03.536538 7553 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-config\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:03.536833 master-0 kubenswrapper[7553]: I0318 17:42:03.536554 7553 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5066fcb0-4af6-4606-b7e0-6a49915d74f9-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:03.536833 master-0 kubenswrapper[7553]: I0318 17:42:03.536567 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h7vt\" (UniqueName: \"kubernetes.io/projected/5066fcb0-4af6-4606-b7e0-6a49915d74f9-kube-api-access-4h7vt\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:04.063209 master-0 kubenswrapper[7553]: I0318 17:42:04.062791 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9" path="/var/lib/kubelet/pods/1d02eb9e-e5e3-49fc-9265-b0ee8aa14fb9/volumes" Mar 18 17:42:04.144015 master-0 kubenswrapper[7553]: I0318 17:42:04.143959 7553 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:04.144418 master-0 kubenswrapper[7553]: E0318 17:42:04.144184 7553 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:42:04.144637 master-0 kubenswrapper[7553]: E0318 17:42:04.144617 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert podName:414430ec-af84-4826-b5db-c920c7653c7a nodeName:}" failed. No retries permitted until 2026-03-18 17:42:06.144587867 +0000 UTC m=+16.290422550 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert") pod "route-controller-manager-cb78c4f4b-7s77b" (UID: "414430ec-af84-4826-b5db-c920c7653c7a") : secret "serving-cert" not found Mar 18 17:42:04.188789 master-0 kubenswrapper[7553]: I0318 17:42:04.188725 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:42:04.309950 master-0 kubenswrapper[7553]: I0318 17:42:04.309889 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f57667fcd-x6jtn" Mar 18 17:42:04.311000 master-0 kubenswrapper[7553]: I0318 17:42:04.310074 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" event={"ID":"99e215da-759d-4fff-af65-0fb64245fbd0","Type":"ContainerStarted","Data":"526fb1f5737ab88a407bf2b841c814ad5e5c2b858476030b2e358c55fa03c304"} Mar 18 17:42:04.346033 master-0 kubenswrapper[7553]: I0318 17:42:04.345353 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c846c589b-4cpj2"] Mar 18 17:42:04.347129 master-0 kubenswrapper[7553]: I0318 17:42:04.347071 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.350606 master-0 kubenswrapper[7553]: I0318 17:42:04.350565 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f57667fcd-x6jtn"] Mar 18 17:42:04.353624 master-0 kubenswrapper[7553]: I0318 17:42:04.353595 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 17:42:04.354134 master-0 kubenswrapper[7553]: I0318 17:42:04.354109 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 17:42:04.355034 master-0 kubenswrapper[7553]: I0318 17:42:04.355013 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 17:42:04.359230 master-0 kubenswrapper[7553]: I0318 17:42:04.359168 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 17:42:04.362551 master-0 kubenswrapper[7553]: I0318 17:42:04.361143 7553 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Mar 18 17:42:04.365075 master-0 kubenswrapper[7553]: I0318 17:42:04.365034 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 17:42:04.366585 master-0 kubenswrapper[7553]: I0318 17:42:04.366516 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c846c589b-4cpj2"] Mar 18 17:42:04.367953 master-0 kubenswrapper[7553]: I0318 17:42:04.367917 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f57667fcd-x6jtn"] Mar 18 17:42:04.450890 master-0 kubenswrapper[7553]: I0318 17:42:04.450813 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-proxy-ca-bundles\") pod \"controller-manager-7c846c589b-4cpj2\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.450890 master-0 kubenswrapper[7553]: I0318 17:42:04.450888 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq4hv\" (UniqueName: \"kubernetes.io/projected/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-kube-api-access-fq4hv\") pod \"controller-manager-7c846c589b-4cpj2\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.451198 master-0 kubenswrapper[7553]: I0318 17:42:04.450941 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-serving-cert\") pod \"controller-manager-7c846c589b-4cpj2\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " 
pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.451198 master-0 kubenswrapper[7553]: I0318 17:42:04.450980 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-client-ca\") pod \"controller-manager-7c846c589b-4cpj2\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.451310 master-0 kubenswrapper[7553]: I0318 17:42:04.451222 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-config\") pod \"controller-manager-7c846c589b-4cpj2\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.552965 master-0 kubenswrapper[7553]: I0318 17:42:04.552916 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-config\") pod \"controller-manager-7c846c589b-4cpj2\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.553364 master-0 kubenswrapper[7553]: I0318 17:42:04.553339 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-proxy-ca-bundles\") pod \"controller-manager-7c846c589b-4cpj2\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.553530 master-0 kubenswrapper[7553]: I0318 17:42:04.553511 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq4hv\" 
(UniqueName: \"kubernetes.io/projected/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-kube-api-access-fq4hv\") pod \"controller-manager-7c846c589b-4cpj2\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.554118 master-0 kubenswrapper[7553]: I0318 17:42:04.554101 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-serving-cert\") pod \"controller-manager-7c846c589b-4cpj2\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.554232 master-0 kubenswrapper[7553]: I0318 17:42:04.554218 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-client-ca\") pod \"controller-manager-7c846c589b-4cpj2\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.555470 master-0 kubenswrapper[7553]: I0318 17:42:04.555451 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-client-ca\") pod \"controller-manager-7c846c589b-4cpj2\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.556647 master-0 kubenswrapper[7553]: I0318 17:42:04.556628 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-config\") pod \"controller-manager-7c846c589b-4cpj2\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.558826 master-0 kubenswrapper[7553]: 
I0318 17:42:04.558806 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-proxy-ca-bundles\") pod \"controller-manager-7c846c589b-4cpj2\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.563523 master-0 kubenswrapper[7553]: I0318 17:42:04.563497 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-serving-cert\") pod \"controller-manager-7c846c589b-4cpj2\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.584878 master-0 kubenswrapper[7553]: I0318 17:42:04.584817 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq4hv\" (UniqueName: \"kubernetes.io/projected/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-kube-api-access-fq4hv\") pod \"controller-manager-7c846c589b-4cpj2\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.684627 master-0 kubenswrapper[7553]: I0318 17:42:04.684458 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:04.917339 master-0 kubenswrapper[7553]: I0318 17:42:04.916915 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c846c589b-4cpj2"] Mar 18 17:42:04.936123 master-0 kubenswrapper[7553]: W0318 17:42:04.936031 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddedeb921_f1f2_4fa4_8d16_8740b1c0cd14.slice/crio-cc2910a0cd567315922fb83de14c3f15ace2cb8fa5a09873d2b88ea103feb4a5 WatchSource:0}: Error finding container cc2910a0cd567315922fb83de14c3f15ace2cb8fa5a09873d2b88ea103feb4a5: Status 404 returned error can't find the container with id cc2910a0cd567315922fb83de14c3f15ace2cb8fa5a09873d2b88ea103feb4a5 Mar 18 17:42:05.351012 master-0 kubenswrapper[7553]: I0318 17:42:05.350941 7553 generic.go:334] "Generic (PLEG): container finished" podID="cb522b02-0b93-4711-9041-566daa06b95a" containerID="8ec96d66f498df1f17ff1b07f364e893b390b96c326cc03f6199600b04196d04" exitCode=0 Mar 18 17:42:05.352136 master-0 kubenswrapper[7553]: I0318 17:42:05.351051 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" event={"ID":"cb522b02-0b93-4711-9041-566daa06b95a","Type":"ContainerDied","Data":"8ec96d66f498df1f17ff1b07f364e893b390b96c326cc03f6199600b04196d04"} Mar 18 17:42:05.352136 master-0 kubenswrapper[7553]: I0318 17:42:05.351737 7553 scope.go:117] "RemoveContainer" containerID="8ec96d66f498df1f17ff1b07f364e893b390b96c326cc03f6199600b04196d04" Mar 18 17:42:05.361194 master-0 kubenswrapper[7553]: I0318 17:42:05.361122 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" 
event={"ID":"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14","Type":"ContainerStarted","Data":"cc2910a0cd567315922fb83de14c3f15ace2cb8fa5a09873d2b88ea103feb4a5"} Mar 18 17:42:06.064584 master-0 kubenswrapper[7553]: I0318 17:42:06.063067 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5066fcb0-4af6-4606-b7e0-6a49915d74f9" path="/var/lib/kubelet/pods/5066fcb0-4af6-4606-b7e0-6a49915d74f9/volumes" Mar 18 17:42:06.181928 master-0 kubenswrapper[7553]: I0318 17:42:06.181837 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:06.182251 master-0 kubenswrapper[7553]: E0318 17:42:06.182188 7553 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:42:06.182396 master-0 kubenswrapper[7553]: E0318 17:42:06.182362 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert podName:414430ec-af84-4826-b5db-c920c7653c7a nodeName:}" failed. No retries permitted until 2026-03-18 17:42:10.182328106 +0000 UTC m=+20.328162819 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert") pod "route-controller-manager-cb78c4f4b-7s77b" (UID: "414430ec-af84-4826-b5db-c920c7653c7a") : secret "serving-cert" not found Mar 18 17:42:06.367808 master-0 kubenswrapper[7553]: I0318 17:42:06.367634 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" event={"ID":"cb522b02-0b93-4711-9041-566daa06b95a","Type":"ContainerStarted","Data":"399bf3be19e41993ba7e873949068ec6c32cf9d08ee1196692654605dc3ddd51"} Mar 18 17:42:06.368515 master-0 kubenswrapper[7553]: I0318 17:42:06.367968 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:42:06.764003 master-0 kubenswrapper[7553]: I0318 17:42:06.763306 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-967479477-gwn76"] Mar 18 17:42:06.768768 master-0 kubenswrapper[7553]: I0318 17:42:06.764459 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-967479477-gwn76" Mar 18 17:42:06.777170 master-0 kubenswrapper[7553]: I0318 17:42:06.776804 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 17:42:06.777170 master-0 kubenswrapper[7553]: I0318 17:42:06.776951 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 17:42:06.777170 master-0 kubenswrapper[7553]: I0318 17:42:06.776964 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 17:42:06.777170 master-0 kubenswrapper[7553]: I0318 17:42:06.777045 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 17:42:06.777422 master-0 kubenswrapper[7553]: I0318 17:42:06.777255 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 17:42:06.777422 master-0 kubenswrapper[7553]: I0318 17:42:06.777322 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 17:42:06.777486 master-0 kubenswrapper[7553]: I0318 17:42:06.777436 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Mar 18 17:42:06.777730 master-0 kubenswrapper[7553]: I0318 17:42:06.777611 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 17:42:06.777730 master-0 kubenswrapper[7553]: I0318 17:42:06.777636 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Mar 18 17:42:06.780487 master-0 kubenswrapper[7553]: I0318 17:42:06.780456 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 17:42:06.791379 master-0 kubenswrapper[7553]: I0318 17:42:06.791333 7553 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-image-import-ca\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76" Mar 18 17:42:06.791379 master-0 kubenswrapper[7553]: I0318 17:42:06.791382 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-trusted-ca-bundle\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76" Mar 18 17:42:06.791722 master-0 kubenswrapper[7553]: I0318 17:42:06.791402 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-etcd-client\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76" Mar 18 17:42:06.791722 master-0 kubenswrapper[7553]: I0318 17:42:06.791492 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-node-pullsecrets\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76" Mar 18 17:42:06.791722 master-0 kubenswrapper[7553]: I0318 17:42:06.791519 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit-dir\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " 
pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.791722 master-0 kubenswrapper[7553]: I0318 17:42:06.791627 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-config\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.791722 master-0 kubenswrapper[7553]: I0318 17:42:06.791690 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-etcd-serving-ca\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.791988 master-0 kubenswrapper[7553]: I0318 17:42:06.791771 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.791988 master-0 kubenswrapper[7553]: I0318 17:42:06.791794 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pzvp\" (UniqueName: \"kubernetes.io/projected/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-kube-api-access-4pzvp\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.791988 master-0 kubenswrapper[7553]: I0318 17:42:06.791917 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-serving-cert\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.791988 master-0 kubenswrapper[7553]: I0318 17:42:06.791941 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-encryption-config\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.810776 master-0 kubenswrapper[7553]: I0318 17:42:06.807815 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-967479477-gwn76"]
Mar 18 17:42:06.893206 master-0 kubenswrapper[7553]: I0318 17:42:06.893141 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-etcd-client\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.893420 master-0 kubenswrapper[7553]: I0318 17:42:06.893226 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-node-pullsecrets\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.893420 master-0 kubenswrapper[7553]: I0318 17:42:06.893339 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-node-pullsecrets\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.893519 master-0 kubenswrapper[7553]: E0318 17:42:06.893436 7553 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found
Mar 18 17:42:06.893519 master-0 kubenswrapper[7553]: I0318 17:42:06.893463 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit-dir\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.893519 master-0 kubenswrapper[7553]: I0318 17:42:06.893492 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-config\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.893644 master-0 kubenswrapper[7553]: E0318 17:42:06.893523 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-etcd-client podName:9d0c4a10-8e58-45a4-813b-efd3ef8353d3 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:07.393500327 +0000 UTC m=+17.539335000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-etcd-client") pod "apiserver-967479477-gwn76" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3") : secret "etcd-client" not found
Mar 18 17:42:06.893644 master-0 kubenswrapper[7553]: I0318 17:42:06.893571 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-etcd-serving-ca\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.893644 master-0 kubenswrapper[7553]: I0318 17:42:06.893625 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.893772 master-0 kubenswrapper[7553]: I0318 17:42:06.893646 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pzvp\" (UniqueName: \"kubernetes.io/projected/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-kube-api-access-4pzvp\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.893772 master-0 kubenswrapper[7553]: I0318 17:42:06.893722 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-serving-cert\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.893772 master-0 kubenswrapper[7553]: I0318 17:42:06.893753 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-encryption-config\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.893886 master-0 kubenswrapper[7553]: I0318 17:42:06.893797 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-image-import-ca\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.893886 master-0 kubenswrapper[7553]: I0318 17:42:06.893828 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-trusted-ca-bundle\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.894241 master-0 kubenswrapper[7553]: I0318 17:42:06.894206 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit-dir\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.894241 master-0 kubenswrapper[7553]: I0318 17:42:06.894218 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-config\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.894982 master-0 kubenswrapper[7553]: E0318 17:42:06.894944 7553 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 18 17:42:06.895063 master-0 kubenswrapper[7553]: E0318 17:42:06.895038 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit podName:9d0c4a10-8e58-45a4-813b-efd3ef8353d3 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:07.395014081 +0000 UTC m=+17.540848754 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit") pod "apiserver-967479477-gwn76" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3") : configmap "audit-0" not found
Mar 18 17:42:06.895140 master-0 kubenswrapper[7553]: E0318 17:42:06.895119 7553 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 18 17:42:06.895199 master-0 kubenswrapper[7553]: E0318 17:42:06.895150 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-serving-cert podName:9d0c4a10-8e58-45a4-813b-efd3ef8353d3 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:07.395142844 +0000 UTC m=+17.540977517 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-serving-cert") pod "apiserver-967479477-gwn76" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3") : secret "serving-cert" not found
Mar 18 17:42:06.895199 master-0 kubenswrapper[7553]: I0318 17:42:06.895155 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-etcd-serving-ca\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.895307 master-0 kubenswrapper[7553]: I0318 17:42:06.895248 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-image-import-ca\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.895382 master-0 kubenswrapper[7553]: I0318 17:42:06.895354 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-trusted-ca-bundle\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.904181 master-0 kubenswrapper[7553]: I0318 17:42:06.904148 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-encryption-config\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.912628 master-0 kubenswrapper[7553]: I0318 17:42:06.912596 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pzvp\" (UniqueName: \"kubernetes.io/projected/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-kube-api-access-4pzvp\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:06.994747 master-0 kubenswrapper[7553]: I0318 17:42:06.994682 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr"
Mar 18 17:42:06.995008 master-0 kubenswrapper[7553]: I0318 17:42:06.994779 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4"
Mar 18 17:42:06.995008 master-0 kubenswrapper[7553]: I0318 17:42:06.994815 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"
Mar 18 17:42:06.995008 master-0 kubenswrapper[7553]: I0318 17:42:06.994853 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"
Mar 18 17:42:06.995008 master-0 kubenswrapper[7553]: I0318 17:42:06.994893 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"
Mar 18 17:42:06.995008 master-0 kubenswrapper[7553]: I0318 17:42:06.994915 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8"
Mar 18 17:42:06.995008 master-0 kubenswrapper[7553]: I0318 17:42:06.994937 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc"
Mar 18 17:42:06.995008 master-0 kubenswrapper[7553]: I0318 17:42:06.994957 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 17:42:06.995008 master-0 kubenswrapper[7553]: I0318 17:42:06.994974 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz"
Mar 18 17:42:06.995008 master-0 kubenswrapper[7553]: I0318 17:42:06.994992 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg"
Mar 18 17:42:06.995008 master-0 kubenswrapper[7553]: I0318 17:42:06.995018 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v"
Mar 18 17:42:06.995445 master-0 kubenswrapper[7553]: I0318 17:42:06.995045 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7"
Mar 18 17:42:06.995445 master-0 kubenswrapper[7553]: I0318 17:42:06.995062 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"
Mar 18 17:42:06.995445 master-0 kubenswrapper[7553]: E0318 17:42:06.995212 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 17:42:06.995445 master-0 kubenswrapper[7553]: E0318 17:42:06.995288 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:22.995254448 +0000 UTC m=+33.141089121 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 18 17:42:06.995716 master-0 kubenswrapper[7553]: E0318 17:42:06.995695 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 17:42:06.995760 master-0 kubenswrapper[7553]: E0318 17:42:06.995723 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert podName:e73f2834-c56c-4cef-ac3c-2317e9a4324c nodeName:}" failed. No retries permitted until 2026-03-18 17:42:22.995716018 +0000 UTC m=+33.141550691 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert") pod "olm-operator-5c9796789-6hngr" (UID: "e73f2834-c56c-4cef-ac3c-2317e9a4324c") : secret "olm-operator-serving-cert" not found
Mar 18 17:42:06.995760 master-0 kubenswrapper[7553]: E0318 17:42:06.995758 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 17:42:06.995853 master-0 kubenswrapper[7553]: E0318 17:42:06.995777 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert podName:d26d4515-391e-41a5-8c82-1b2b8a375662 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:22.995772239 +0000 UTC m=+33.141606912 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-6qqz4" (UID: "d26d4515-391e-41a5-8c82-1b2b8a375662") : secret "package-server-manager-serving-cert" not found
Mar 18 17:42:06.995853 master-0 kubenswrapper[7553]: E0318 17:42:06.995817 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 18 17:42:06.995853 master-0 kubenswrapper[7553]: E0318 17:42:06.995834 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls podName:37b3753f-bf4f-4a9e-a4a8-d58296bada79 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:22.99582849 +0000 UTC m=+33.141663163 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-dh5zl" (UID: "37b3753f-bf4f-4a9e-a4a8-d58296bada79") : secret "cluster-baremetal-operator-tls" not found
Mar 18 17:42:06.996704 master-0 kubenswrapper[7553]: E0318 17:42:06.996666 7553 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 17:42:06.996893 master-0 kubenswrapper[7553]: E0318 17:42:06.996872 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs podName:a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e nodeName:}" failed. No retries permitted until 2026-03-18 17:42:22.996840052 +0000 UTC m=+33.142674745 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-gr8jc" (UID: "a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e") : secret "multus-admission-controller-secret" not found
Mar 18 17:42:06.998709 master-0 kubenswrapper[7553]: E0318 17:42:06.998669 7553 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 17:42:06.998803 master-0 kubenswrapper[7553]: E0318 17:42:06.998749 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics podName:ce5831a6-5a8d-4cda-9299-5d86437bcab2 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:22.998727914 +0000 UTC m=+33.144562587 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-l5gm7" (UID: "ce5831a6-5a8d-4cda-9299-5d86437bcab2") : secret "marketplace-operator-metrics" not found
Mar 18 17:42:06.998870 master-0 kubenswrapper[7553]: E0318 17:42:06.998815 7553 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 17:42:06.998870 master-0 kubenswrapper[7553]: E0318 17:42:06.998855 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls podName:8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:22.998848476 +0000 UTC m=+33.144683149 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-vjrjg" (UID: "8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311") : secret "cluster-monitoring-operator-tls" not found
Mar 18 17:42:06.999062 master-0 kubenswrapper[7553]: E0318 17:42:06.999032 7553 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 18 17:42:06.999118 master-0 kubenswrapper[7553]: E0318 17:42:06.999065 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert podName:e9e04572-1425-440e-9869-6deef05e13e3 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:22.999057861 +0000 UTC m=+33.144892524 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert") pod "catalog-operator-68f85b4d6c-qpgfz" (UID: "e9e04572-1425-440e-9869-6deef05e13e3") : secret "catalog-operator-serving-cert" not found
Mar 18 17:42:07.001734 master-0 kubenswrapper[7553]: I0318 17:42:07.001701 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v"
Mar 18 17:42:07.008293 master-0 kubenswrapper[7553]: I0318 17:42:07.007567 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"cluster-version-operator-56d8475767-lqvvj\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"
Mar 18 17:42:07.009843 master-0 kubenswrapper[7553]: I0318 17:42:07.009807 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8"
Mar 18 17:42:07.010096 master-0 kubenswrapper[7553]: I0318 17:42:07.010075 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 17:42:07.017028 master-0 kubenswrapper[7553]: I0318 17:42:07.016150 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"
Mar 18 17:42:07.211338 master-0 kubenswrapper[7553]: I0318 17:42:07.211253 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v"
Mar 18 17:42:07.211556 master-0 kubenswrapper[7553]: I0318 17:42:07.211367 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"
Mar 18 17:42:07.213917 master-0 kubenswrapper[7553]: I0318 17:42:07.213875 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8"
Mar 18 17:42:07.222987 master-0 kubenswrapper[7553]: I0318 17:42:07.222908 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"
Mar 18 17:42:07.255968 master-0 kubenswrapper[7553]: W0318 17:42:07.255764 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda02399de_859b_45b1_9b00_18a08f285f39.slice/crio-b910fcd86d2c6a577227001de82fb055189643becfc32f71187a0e36a182af53 WatchSource:0}: Error finding container b910fcd86d2c6a577227001de82fb055189643becfc32f71187a0e36a182af53: Status 404 returned error can't find the container with id b910fcd86d2c6a577227001de82fb055189643becfc32f71187a0e36a182af53
Mar 18 17:42:07.374292 master-0 kubenswrapper[7553]: I0318 17:42:07.372798 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" event={"ID":"a02399de-859b-45b1-9b00-18a08f285f39","Type":"ContainerStarted","Data":"b910fcd86d2c6a577227001de82fb055189643becfc32f71187a0e36a182af53"}
Mar 18 17:42:07.400358 master-0 kubenswrapper[7553]: I0318 17:42:07.400313 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:07.400486 master-0 kubenswrapper[7553]: I0318 17:42:07.400391 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-serving-cert\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:07.400486 master-0 kubenswrapper[7553]: E0318 17:42:07.400465 7553 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 18 17:42:07.400486 master-0 kubenswrapper[7553]: E0318 17:42:07.400483 7553 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 18 17:42:07.400631 master-0 kubenswrapper[7553]: E0318 17:42:07.400527 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-serving-cert podName:9d0c4a10-8e58-45a4-813b-efd3ef8353d3 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:08.400512906 +0000 UTC m=+18.546347579 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-serving-cert") pod "apiserver-967479477-gwn76" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3") : secret "serving-cert" not found
Mar 18 17:42:07.400631 master-0 kubenswrapper[7553]: E0318 17:42:07.400539 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit podName:9d0c4a10-8e58-45a4-813b-efd3ef8353d3 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:08.400533507 +0000 UTC m=+18.546368180 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit") pod "apiserver-967479477-gwn76" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3") : configmap "audit-0" not found
Mar 18 17:42:07.403088 master-0 kubenswrapper[7553]: I0318 17:42:07.400708 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-etcd-client\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:07.409215 master-0 kubenswrapper[7553]: I0318 17:42:07.409004 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-etcd-client\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:07.420160 master-0 kubenswrapper[7553]: I0318 17:42:07.419832 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-7sc7v"]
Mar 18 17:42:07.433669 master-0 kubenswrapper[7553]: W0318 17:42:07.433246 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1352cc7_4099_44c5_9c31_8259fb783bc7.slice/crio-cc45ef13b745a7538de0764bc9063fe610d54078c6f17e39280d0e2b21ebeeb0 WatchSource:0}: Error finding container cc45ef13b745a7538de0764bc9063fe610d54078c6f17e39280d0e2b21ebeeb0: Status 404 returned error can't find the container with id cc45ef13b745a7538de0764bc9063fe610d54078c6f17e39280d0e2b21ebeeb0
Mar 18 17:42:07.478377 master-0 kubenswrapper[7553]: I0318 17:42:07.478301 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8"]
Mar 18 17:42:07.486774 master-0 kubenswrapper[7553]: W0318 17:42:07.486710 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f26e239_2988_4faa_bc1d_24b15b95b7f1.slice/crio-016383ed2ea822809808dec1c74c3db939646679d52a777698739d705adae757 WatchSource:0}: Error finding container 016383ed2ea822809808dec1c74c3db939646679d52a777698739d705adae757: Status 404 returned error can't find the container with id 016383ed2ea822809808dec1c74c3db939646679d52a777698739d705adae757
Mar 18 17:42:07.487299 master-0 kubenswrapper[7553]: I0318 17:42:07.487156 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"]
Mar 18 17:42:07.495384 master-0 kubenswrapper[7553]: W0318 17:42:07.495347 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e64a377_f497_4416_8f22_d5c7f52e0b65.slice/crio-fecfc938509f77a7c6b0246891b9f62fa9cb5c8d24c6ae113e36e04682301649 WatchSource:0}: Error finding container fecfc938509f77a7c6b0246891b9f62fa9cb5c8d24c6ae113e36e04682301649: Status 404 returned error can't find the container with id fecfc938509f77a7c6b0246891b9f62fa9cb5c8d24c6ae113e36e04682301649
Mar 18 17:42:08.012506 master-0 kubenswrapper[7553]: I0318 17:42:08.008184 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52"
Mar 18 17:42:08.012506 master-0 kubenswrapper[7553]: I0318 17:42:08.008263 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 17:42:08.012506 master-0 kubenswrapper[7553]: E0318 17:42:08.009017 7553 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 18 17:42:08.012506 master-0 kubenswrapper[7553]: E0318 17:42:08.009084 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs podName:5a4f94f3-d63a-4869-b723-ae9637610b4b nodeName:}" failed. No retries permitted until 2026-03-18 17:42:24.009064381 +0000 UTC m=+34.154899054 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs") pod "network-metrics-daemon-mfn52" (UID: "5a4f94f3-d63a-4869-b723-ae9637610b4b") : secret "metrics-daemon-secret" not found
Mar 18 17:42:08.023409 master-0 kubenswrapper[7553]: I0318 17:42:08.015813 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 17:42:08.126267 master-0 kubenswrapper[7553]: I0318 17:42:08.126204 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 17:42:08.378140 master-0 kubenswrapper[7553]: I0318 17:42:08.377810 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" event={"ID":"6f26e239-2988-4faa-bc1d-24b15b95b7f1","Type":"ContainerStarted","Data":"016383ed2ea822809808dec1c74c3db939646679d52a777698739d705adae757"}
Mar 18 17:42:08.381860 master-0 kubenswrapper[7553]: I0318 17:42:08.381812 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" event={"ID":"f7ff61c7-32d1-4407-a792-8e22bb4d50f9","Type":"ContainerStarted","Data":"24610a985db5ce85023cf9747ca14df30c98ba89aeb22c58ca49f5ef21707a5f"}
Mar 18 17:42:08.385513 master-0 kubenswrapper[7553]: I0318 17:42:08.385398 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" event={"ID":"b1352cc7-4099-44c5-9c31-8259fb783bc7","Type":"ContainerStarted","Data":"cc45ef13b745a7538de0764bc9063fe610d54078c6f17e39280d0e2b21ebeeb0"}
Mar 18 17:42:08.386824 master-0 kubenswrapper[7553]: I0318 17:42:08.386774 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" event={"ID":"7e64a377-f497-4416-8f22-d5c7f52e0b65","Type":"ContainerStarted","Data":"fecfc938509f77a7c6b0246891b9f62fa9cb5c8d24c6ae113e36e04682301649"}
Mar 18 17:42:08.413720 master-0 kubenswrapper[7553]: I0318 17:42:08.413664 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76"
Mar 18 17:42:08.413921 master-0 kubenswrapper[7553]:
E0318 17:42:08.413827 7553 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 18 17:42:08.413921 master-0 kubenswrapper[7553]: E0318 17:42:08.413897 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit podName:9d0c4a10-8e58-45a4-813b-efd3ef8353d3 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:10.41387659 +0000 UTC m=+20.559711263 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit") pod "apiserver-967479477-gwn76" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3") : configmap "audit-0" not found Mar 18 17:42:08.414036 master-0 kubenswrapper[7553]: I0318 17:42:08.413968 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-serving-cert\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76" Mar 18 17:42:08.414238 master-0 kubenswrapper[7553]: E0318 17:42:08.414215 7553 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 18 17:42:08.414409 master-0 kubenswrapper[7553]: E0318 17:42:08.414378 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-serving-cert podName:9d0c4a10-8e58-45a4-813b-efd3ef8353d3 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:10.4143511 +0000 UTC m=+20.560185783 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-serving-cert") pod "apiserver-967479477-gwn76" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3") : secret "serving-cert" not found Mar 18 17:42:08.501899 master-0 kubenswrapper[7553]: I0318 17:42:08.500948 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2"] Mar 18 17:42:08.502135 master-0 kubenswrapper[7553]: I0318 17:42:08.501919 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2" Mar 18 17:42:08.502668 master-0 kubenswrapper[7553]: I0318 17:42:08.502623 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2"] Mar 18 17:42:08.507700 master-0 kubenswrapper[7553]: I0318 17:42:08.504549 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 17:42:08.508342 master-0 kubenswrapper[7553]: I0318 17:42:08.508315 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 17:42:08.617684 master-0 kubenswrapper[7553]: I0318 17:42:08.617622 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g42g\" (UniqueName: \"kubernetes.io/projected/7047a862-8cbe-46fb-9af3-06ba224cbe26-kube-api-access-4g42g\") pod \"migrator-8487694857-8dsx2\" (UID: \"7047a862-8cbe-46fb-9af3-06ba224cbe26\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2" Mar 18 17:42:08.719508 master-0 kubenswrapper[7553]: I0318 17:42:08.719387 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g42g\" (UniqueName: 
\"kubernetes.io/projected/7047a862-8cbe-46fb-9af3-06ba224cbe26-kube-api-access-4g42g\") pod \"migrator-8487694857-8dsx2\" (UID: \"7047a862-8cbe-46fb-9af3-06ba224cbe26\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2" Mar 18 17:42:08.775975 master-0 kubenswrapper[7553]: I0318 17:42:08.775933 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g42g\" (UniqueName: \"kubernetes.io/projected/7047a862-8cbe-46fb-9af3-06ba224cbe26-kube-api-access-4g42g\") pod \"migrator-8487694857-8dsx2\" (UID: \"7047a862-8cbe-46fb-9af3-06ba224cbe26\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2" Mar 18 17:42:08.836365 master-0 kubenswrapper[7553]: I0318 17:42:08.836312 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2" Mar 18 17:42:10.190741 master-0 kubenswrapper[7553]: I0318 17:42:10.190667 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:42:10.238414 master-0 kubenswrapper[7553]: I0318 17:42:10.238334 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:10.238723 master-0 kubenswrapper[7553]: E0318 17:42:10.238654 7553 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 17:42:10.238809 master-0 kubenswrapper[7553]: E0318 17:42:10.238780 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert 
podName:414430ec-af84-4826-b5db-c920c7653c7a nodeName:}" failed. No retries permitted until 2026-03-18 17:42:18.238749624 +0000 UTC m=+28.384584297 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert") pod "route-controller-manager-cb78c4f4b-7s77b" (UID: "414430ec-af84-4826-b5db-c920c7653c7a") : secret "serving-cert" not found Mar 18 17:42:10.400567 master-0 kubenswrapper[7553]: I0318 17:42:10.400438 7553 generic.go:334] "Generic (PLEG): container finished" podID="99e215da-759d-4fff-af65-0fb64245fbd0" containerID="526fb1f5737ab88a407bf2b841c814ad5e5c2b858476030b2e358c55fa03c304" exitCode=0 Mar 18 17:42:10.400567 master-0 kubenswrapper[7553]: I0318 17:42:10.400516 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" event={"ID":"99e215da-759d-4fff-af65-0fb64245fbd0","Type":"ContainerDied","Data":"526fb1f5737ab88a407bf2b841c814ad5e5c2b858476030b2e358c55fa03c304"} Mar 18 17:42:10.401669 master-0 kubenswrapper[7553]: I0318 17:42:10.401301 7553 scope.go:117] "RemoveContainer" containerID="526fb1f5737ab88a407bf2b841c814ad5e5c2b858476030b2e358c55fa03c304" Mar 18 17:42:10.441101 master-0 kubenswrapper[7553]: I0318 17:42:10.441020 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " pod="openshift-apiserver/apiserver-967479477-gwn76" Mar 18 17:42:10.441314 master-0 kubenswrapper[7553]: I0318 17:42:10.441126 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-serving-cert\") pod \"apiserver-967479477-gwn76\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " 
pod="openshift-apiserver/apiserver-967479477-gwn76" Mar 18 17:42:10.441893 master-0 kubenswrapper[7553]: E0318 17:42:10.441672 7553 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 18 17:42:10.441893 master-0 kubenswrapper[7553]: E0318 17:42:10.441746 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-serving-cert podName:9d0c4a10-8e58-45a4-813b-efd3ef8353d3 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:14.441724831 +0000 UTC m=+24.587559504 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-serving-cert") pod "apiserver-967479477-gwn76" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3") : secret "serving-cert" not found Mar 18 17:42:10.443288 master-0 kubenswrapper[7553]: E0318 17:42:10.442213 7553 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 18 17:42:10.443288 master-0 kubenswrapper[7553]: E0318 17:42:10.442309 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit podName:9d0c4a10-8e58-45a4-813b-efd3ef8353d3 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:14.442294333 +0000 UTC m=+24.588129006 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit") pod "apiserver-967479477-gwn76" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3") : configmap "audit-0" not found Mar 18 17:42:10.567512 master-0 kubenswrapper[7553]: I0318 17:42:10.567456 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-967479477-gwn76"] Mar 18 17:42:10.568046 master-0 kubenswrapper[7553]: E0318 17:42:10.567964 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-967479477-gwn76" podUID="9d0c4a10-8e58-45a4-813b-efd3ef8353d3" Mar 18 17:42:11.406032 master-0 kubenswrapper[7553]: I0318 17:42:11.405572 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-967479477-gwn76" Mar 18 17:42:11.417925 master-0 kubenswrapper[7553]: I0318 17:42:11.417893 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-967479477-gwn76" Mar 18 17:42:11.456567 master-0 kubenswrapper[7553]: I0318 17:42:11.456509 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit-dir\") pod \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " Mar 18 17:42:11.456700 master-0 kubenswrapper[7553]: I0318 17:42:11.456588 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-node-pullsecrets\") pod \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " Mar 18 17:42:11.456700 master-0 kubenswrapper[7553]: I0318 17:42:11.456650 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pzvp\" (UniqueName: \"kubernetes.io/projected/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-kube-api-access-4pzvp\") pod \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " Mar 18 17:42:11.456700 master-0 kubenswrapper[7553]: I0318 17:42:11.456689 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-trusted-ca-bundle\") pod \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " Mar 18 17:42:11.456782 master-0 kubenswrapper[7553]: I0318 17:42:11.456751 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-encryption-config\") pod \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " Mar 18 17:42:11.456810 master-0 kubenswrapper[7553]: I0318 17:42:11.456796 
7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-etcd-serving-ca\") pod \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " Mar 18 17:42:11.457816 master-0 kubenswrapper[7553]: I0318 17:42:11.456839 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-config\") pod \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " Mar 18 17:42:11.457816 master-0 kubenswrapper[7553]: I0318 17:42:11.456886 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-image-import-ca\") pod \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " Mar 18 17:42:11.457816 master-0 kubenswrapper[7553]: I0318 17:42:11.457066 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-etcd-client\") pod \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\" (UID: \"9d0c4a10-8e58-45a4-813b-efd3ef8353d3\") " Mar 18 17:42:11.458062 master-0 kubenswrapper[7553]: I0318 17:42:11.457369 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9d0c4a10-8e58-45a4-813b-efd3ef8353d3" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:42:11.458062 master-0 kubenswrapper[7553]: I0318 17:42:11.457414 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9d0c4a10-8e58-45a4-813b-efd3ef8353d3" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:42:11.458062 master-0 kubenswrapper[7553]: I0318 17:42:11.457815 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "9d0c4a10-8e58-45a4-813b-efd3ef8353d3" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:11.458062 master-0 kubenswrapper[7553]: I0318 17:42:11.457895 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9d0c4a10-8e58-45a4-813b-efd3ef8353d3" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:11.458062 master-0 kubenswrapper[7553]: I0318 17:42:11.457894 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-config" (OuterVolumeSpecName: "config") pod "9d0c4a10-8e58-45a4-813b-efd3ef8353d3" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:11.458062 master-0 kubenswrapper[7553]: I0318 17:42:11.457945 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "9d0c4a10-8e58-45a4-813b-efd3ef8353d3" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:11.458586 master-0 kubenswrapper[7553]: I0318 17:42:11.458521 7553 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:11.458586 master-0 kubenswrapper[7553]: I0318 17:42:11.458566 7553 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:11.458777 master-0 kubenswrapper[7553]: I0318 17:42:11.458587 7553 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-config\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:11.458777 master-0 kubenswrapper[7553]: I0318 17:42:11.458605 7553 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:11.458777 master-0 kubenswrapper[7553]: I0318 17:42:11.458623 7553 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:11.458777 master-0 kubenswrapper[7553]: I0318 17:42:11.458640 
7553 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:11.462044 master-0 kubenswrapper[7553]: I0318 17:42:11.461995 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "9d0c4a10-8e58-45a4-813b-efd3ef8353d3" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 17:42:11.462213 master-0 kubenswrapper[7553]: I0318 17:42:11.462185 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-kube-api-access-4pzvp" (OuterVolumeSpecName: "kube-api-access-4pzvp") pod "9d0c4a10-8e58-45a4-813b-efd3ef8353d3" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3"). InnerVolumeSpecName "kube-api-access-4pzvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:42:11.465851 master-0 kubenswrapper[7553]: I0318 17:42:11.465779 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "9d0c4a10-8e58-45a4-813b-efd3ef8353d3" (UID: "9d0c4a10-8e58-45a4-813b-efd3ef8353d3"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 17:42:11.559671 master-0 kubenswrapper[7553]: I0318 17:42:11.559602 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pzvp\" (UniqueName: \"kubernetes.io/projected/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-kube-api-access-4pzvp\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:11.559671 master-0 kubenswrapper[7553]: I0318 17:42:11.559654 7553 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:11.559671 master-0 kubenswrapper[7553]: I0318 17:42:11.559668 7553 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:12.240214 master-0 kubenswrapper[7553]: I0318 17:42:12.239643 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2"] Mar 18 17:42:12.277491 master-0 kubenswrapper[7553]: I0318 17:42:12.276902 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"] Mar 18 17:42:12.304685 master-0 kubenswrapper[7553]: W0318 17:42:12.304603 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c6694a8_ccd0_491b_9f21_215450f6ce67.slice/crio-c9ef5e66c74bafc259dc619a6d19d1eda5f874894c689b2f23043bfdee6a39c1 WatchSource:0}: Error finding container c9ef5e66c74bafc259dc619a6d19d1eda5f874894c689b2f23043bfdee6a39c1: Status 404 returned error can't find the container with id c9ef5e66c74bafc259dc619a6d19d1eda5f874894c689b2f23043bfdee6a39c1 Mar 18 17:42:12.426202 master-0 kubenswrapper[7553]: I0318 17:42:12.426140 7553 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" event={"ID":"6f26e239-2988-4faa-bc1d-24b15b95b7f1","Type":"ContainerStarted","Data":"d4e55edde3b012389f45dd8d1909f3ff7e569bfb5c590f0e8e7e8c080c91f4b0"} Mar 18 17:42:12.434223 master-0 kubenswrapper[7553]: I0318 17:42:12.433550 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2" event={"ID":"7047a862-8cbe-46fb-9af3-06ba224cbe26","Type":"ContainerStarted","Data":"22b260c86b95c080bc9989f63b5311a346d5ef3d9e462e33577fe76c4fe05c6d"} Mar 18 17:42:12.438933 master-0 kubenswrapper[7553]: I0318 17:42:12.438417 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" event={"ID":"b1352cc7-4099-44c5-9c31-8259fb783bc7","Type":"ContainerStarted","Data":"be2ab5162c19fe6afdcd220e7e3b6eefacf0015ee39e1b1196ea473e58d6e066"} Mar 18 17:42:12.440116 master-0 kubenswrapper[7553]: I0318 17:42:12.439680 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" event={"ID":"7e64a377-f497-4416-8f22-d5c7f52e0b65","Type":"ContainerStarted","Data":"e9fc20131fe3301f3975c8ec80aa5b69c08756ea86094187749bed1e0e04517c"} Mar 18 17:42:12.440116 master-0 kubenswrapper[7553]: I0318 17:42:12.439707 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" event={"ID":"7e64a377-f497-4416-8f22-d5c7f52e0b65","Type":"ContainerStarted","Data":"e7ad342e7df9192dcebc5d4c70aab2c6f8db5ac6b23cbc811c6ae00a013dee5b"} Mar 18 17:42:12.453089 master-0 kubenswrapper[7553]: I0318 17:42:12.453009 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" 
event={"ID":"99e215da-759d-4fff-af65-0fb64245fbd0","Type":"ContainerStarted","Data":"991a1bf80cc5f91f8bda7e5c2511f88f98023ee76020f581b2ef2e76ff7bcf29"} Mar 18 17:42:12.468616 master-0 kubenswrapper[7553]: I0318 17:42:12.468477 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" event={"ID":"7c6694a8-ccd0-491b-9f21-215450f6ce67","Type":"ContainerStarted","Data":"c9ef5e66c74bafc259dc619a6d19d1eda5f874894c689b2f23043bfdee6a39c1"} Mar 18 17:42:12.477718 master-0 kubenswrapper[7553]: I0318 17:42:12.474663 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-967479477-gwn76" Mar 18 17:42:12.477718 master-0 kubenswrapper[7553]: I0318 17:42:12.474730 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" event={"ID":"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14","Type":"ContainerStarted","Data":"5b6eed714222e25752fdf63e3f8f6cfb66e7b124c5f70e15ae2f2054a7693438"} Mar 18 17:42:12.477718 master-0 kubenswrapper[7553]: I0318 17:42:12.475123 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:12.490880 master-0 kubenswrapper[7553]: I0318 17:42:12.489934 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:12.526865 master-0 kubenswrapper[7553]: I0318 17:42:12.526768 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" podStartSLOduration=3.46513832 podStartE2EDuration="10.52673558s" podCreationTimestamp="2026-03-18 17:42:02 +0000 UTC" firstStartedPulling="2026-03-18 17:42:04.938352756 +0000 UTC m=+15.084187439" lastFinishedPulling="2026-03-18 17:42:11.999950026 +0000 UTC 
m=+22.145784699" observedRunningTime="2026-03-18 17:42:12.525760659 +0000 UTC m=+22.671595332" watchObservedRunningTime="2026-03-18 17:42:12.52673558 +0000 UTC m=+22.672570243" Mar 18 17:42:12.631258 master-0 kubenswrapper[7553]: I0318 17:42:12.628635 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-897b458c6-vsss9"] Mar 18 17:42:12.631258 master-0 kubenswrapper[7553]: I0318 17:42:12.629932 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.637302 master-0 kubenswrapper[7553]: I0318 17:42:12.634255 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-967479477-gwn76"] Mar 18 17:42:12.659303 master-0 kubenswrapper[7553]: I0318 17:42:12.654301 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 17:42:12.659303 master-0 kubenswrapper[7553]: I0318 17:42:12.654643 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 17:42:12.659303 master-0 kubenswrapper[7553]: I0318 17:42:12.654788 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 17:42:12.659303 master-0 kubenswrapper[7553]: I0318 17:42:12.654938 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 18 17:42:12.659303 master-0 kubenswrapper[7553]: I0318 17:42:12.655156 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 17:42:12.659303 master-0 kubenswrapper[7553]: I0318 17:42:12.655174 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 17:42:12.659303 master-0 kubenswrapper[7553]: I0318 17:42:12.655303 7553 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 17:42:12.659303 master-0 kubenswrapper[7553]: I0318 17:42:12.655563 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 17:42:12.659303 master-0 kubenswrapper[7553]: I0318 17:42:12.656054 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-967479477-gwn76"] Mar 18 17:42:12.659303 master-0 kubenswrapper[7553]: I0318 17:42:12.656810 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-897b458c6-vsss9"] Mar 18 17:42:12.670307 master-0 kubenswrapper[7553]: I0318 17:42:12.663025 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 17:42:12.670307 master-0 kubenswrapper[7553]: I0318 17:42:12.664911 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 17:42:12.696322 master-0 kubenswrapper[7553]: I0318 17:42:12.687723 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-trusted-ca-bundle\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.696322 master-0 kubenswrapper[7553]: I0318 17:42:12.687792 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-serving-cert\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.696322 master-0 kubenswrapper[7553]: I0318 17:42:12.687827 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-x47z7\" (UniqueName: \"kubernetes.io/projected/30d77a7c-222e-41c7-8a98-219854aa3da2-kube-api-access-x47z7\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.696322 master-0 kubenswrapper[7553]: I0318 17:42:12.687878 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-etcd-serving-ca\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.696322 master-0 kubenswrapper[7553]: I0318 17:42:12.687905 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-encryption-config\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.696322 master-0 kubenswrapper[7553]: I0318 17:42:12.687959 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-etcd-client\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.696322 master-0 kubenswrapper[7553]: I0318 17:42:12.688009 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-config\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.696322 master-0 kubenswrapper[7553]: I0318 
17:42:12.688034 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/30d77a7c-222e-41c7-8a98-219854aa3da2-node-pullsecrets\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.696322 master-0 kubenswrapper[7553]: I0318 17:42:12.688136 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-image-import-ca\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.696322 master-0 kubenswrapper[7553]: I0318 17:42:12.688189 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-audit\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.696322 master-0 kubenswrapper[7553]: I0318 17:42:12.688225 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30d77a7c-222e-41c7-8a98-219854aa3da2-audit-dir\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.696322 master-0 kubenswrapper[7553]: I0318 17:42:12.688353 7553 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-audit\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:12.696322 master-0 kubenswrapper[7553]: I0318 17:42:12.688372 7553 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d0c4a10-8e58-45a4-813b-efd3ef8353d3-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:12.789376 master-0 kubenswrapper[7553]: I0318 17:42:12.789054 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-image-import-ca\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.789376 master-0 kubenswrapper[7553]: I0318 17:42:12.789132 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-audit\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.789376 master-0 kubenswrapper[7553]: I0318 17:42:12.789200 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30d77a7c-222e-41c7-8a98-219854aa3da2-audit-dir\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.789376 master-0 kubenswrapper[7553]: I0318 17:42:12.789243 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-trusted-ca-bundle\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.789376 master-0 kubenswrapper[7553]: I0318 17:42:12.789267 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-serving-cert\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.789376 master-0 kubenswrapper[7553]: I0318 17:42:12.789309 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x47z7\" (UniqueName: \"kubernetes.io/projected/30d77a7c-222e-41c7-8a98-219854aa3da2-kube-api-access-x47z7\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.789376 master-0 kubenswrapper[7553]: I0318 17:42:12.789339 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-etcd-serving-ca\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.789376 master-0 kubenswrapper[7553]: I0318 17:42:12.789361 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-encryption-config\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.789376 master-0 kubenswrapper[7553]: I0318 17:42:12.789390 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-etcd-client\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.789823 master-0 kubenswrapper[7553]: I0318 17:42:12.789419 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-config\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.789823 master-0 kubenswrapper[7553]: I0318 17:42:12.789437 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/30d77a7c-222e-41c7-8a98-219854aa3da2-node-pullsecrets\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.789823 master-0 kubenswrapper[7553]: I0318 17:42:12.789573 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/30d77a7c-222e-41c7-8a98-219854aa3da2-node-pullsecrets\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.789823 master-0 kubenswrapper[7553]: E0318 17:42:12.789700 7553 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 18 17:42:12.797292 master-0 kubenswrapper[7553]: I0318 17:42:12.794159 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-config\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.797292 master-0 kubenswrapper[7553]: I0318 17:42:12.794504 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30d77a7c-222e-41c7-8a98-219854aa3da2-audit-dir\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 
17:42:12.797292 master-0 kubenswrapper[7553]: I0318 17:42:12.795029 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-image-import-ca\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.797292 master-0 kubenswrapper[7553]: E0318 17:42:12.795372 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-serving-cert podName:30d77a7c-222e-41c7-8a98-219854aa3da2 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:13.29525121 +0000 UTC m=+23.441085883 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-serving-cert") pod "apiserver-897b458c6-vsss9" (UID: "30d77a7c-222e-41c7-8a98-219854aa3da2") : secret "serving-cert" not found Mar 18 17:42:12.797292 master-0 kubenswrapper[7553]: I0318 17:42:12.795504 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-audit\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.797292 master-0 kubenswrapper[7553]: I0318 17:42:12.795966 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-trusted-ca-bundle\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.797292 master-0 kubenswrapper[7553]: I0318 17:42:12.796123 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-etcd-serving-ca\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.806998 master-0 kubenswrapper[7553]: I0318 17:42:12.806951 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-encryption-config\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.813855 master-0 kubenswrapper[7553]: I0318 17:42:12.813789 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-etcd-client\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:12.908963 master-0 kubenswrapper[7553]: I0318 17:42:12.907683 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x47z7\" (UniqueName: \"kubernetes.io/projected/30d77a7c-222e-41c7-8a98-219854aa3da2-kube-api-access-x47z7\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:13.305441 master-0 kubenswrapper[7553]: I0318 17:42:13.305253 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-serving-cert\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:13.305710 master-0 kubenswrapper[7553]: E0318 17:42:13.305675 7553 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 
18 17:42:13.306206 master-0 kubenswrapper[7553]: E0318 17:42:13.305944 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-serving-cert podName:30d77a7c-222e-41c7-8a98-219854aa3da2 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:14.305724975 +0000 UTC m=+24.451559648 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-serving-cert") pod "apiserver-897b458c6-vsss9" (UID: "30d77a7c-222e-41c7-8a98-219854aa3da2") : secret "serving-cert" not found Mar 18 17:42:13.332723 master-0 kubenswrapper[7553]: I0318 17:42:13.331127 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-lf9xl"] Mar 18 17:42:13.332723 master-0 kubenswrapper[7553]: I0318 17:42:13.331844 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:13.336373 master-0 kubenswrapper[7553]: I0318 17:42:13.335741 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 17:42:13.336373 master-0 kubenswrapper[7553]: I0318 17:42:13.335899 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 17:42:13.336373 master-0 kubenswrapper[7553]: I0318 17:42:13.336007 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 17:42:13.336373 master-0 kubenswrapper[7553]: I0318 17:42:13.336083 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 17:42:13.350548 master-0 kubenswrapper[7553]: I0318 17:42:13.349622 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lf9xl"] Mar 18 17:42:13.411343 master-0 kubenswrapper[7553]: I0318 17:42:13.411208 7553 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59407fdf-b1e9-4992-a3c8-54b4e26f496c-config-volume\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:13.411343 master-0 kubenswrapper[7553]: I0318 17:42:13.411306 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:13.411622 master-0 kubenswrapper[7553]: I0318 17:42:13.411512 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dt8f\" (UniqueName: \"kubernetes.io/projected/59407fdf-b1e9-4992-a3c8-54b4e26f496c-kube-api-access-9dt8f\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:13.484797 master-0 kubenswrapper[7553]: I0318 17:42:13.484724 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v" event={"ID":"b1352cc7-4099-44c5-9c31-8259fb783bc7","Type":"ContainerStarted","Data":"612e6a08d25d368e232c571369a7a1327ad6f1ef6d2c4485496a2910db77f28e"} Mar 18 17:42:13.517692 master-0 kubenswrapper[7553]: I0318 17:42:13.517634 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:13.518111 master-0 kubenswrapper[7553]: I0318 17:42:13.518021 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dt8f\" 
(UniqueName: \"kubernetes.io/projected/59407fdf-b1e9-4992-a3c8-54b4e26f496c-kube-api-access-9dt8f\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:13.519053 master-0 kubenswrapper[7553]: I0318 17:42:13.519011 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59407fdf-b1e9-4992-a3c8-54b4e26f496c-config-volume\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:13.519113 master-0 kubenswrapper[7553]: E0318 17:42:13.519049 7553 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 18 17:42:13.519209 master-0 kubenswrapper[7553]: E0318 17:42:13.519168 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls podName:59407fdf-b1e9-4992-a3c8-54b4e26f496c nodeName:}" failed. No retries permitted until 2026-03-18 17:42:14.019143743 +0000 UTC m=+24.164978416 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls") pod "dns-default-lf9xl" (UID: "59407fdf-b1e9-4992-a3c8-54b4e26f496c") : secret "dns-default-metrics-tls" not found Mar 18 17:42:13.519764 master-0 kubenswrapper[7553]: I0318 17:42:13.519738 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59407fdf-b1e9-4992-a3c8-54b4e26f496c-config-volume\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:13.563715 master-0 kubenswrapper[7553]: I0318 17:42:13.563356 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dt8f\" (UniqueName: \"kubernetes.io/projected/59407fdf-b1e9-4992-a3c8-54b4e26f496c-kube-api-access-9dt8f\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:13.643606 master-0 kubenswrapper[7553]: I0318 17:42:13.643408 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 17:42:13.644465 master-0 kubenswrapper[7553]: I0318 17:42:13.644443 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 17:42:13.649182 master-0 kubenswrapper[7553]: I0318 17:42:13.648220 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 17:42:13.654096 master-0 kubenswrapper[7553]: I0318 17:42:13.654035 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 17:42:13.737301 master-0 kubenswrapper[7553]: I0318 17:42:13.735098 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22e8652f-ee18-4cff-bccb-ef413456685f-var-lock\") pod \"installer-1-master-0\" (UID: \"22e8652f-ee18-4cff-bccb-ef413456685f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 17:42:13.737301 master-0 kubenswrapper[7553]: I0318 17:42:13.735181 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22e8652f-ee18-4cff-bccb-ef413456685f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"22e8652f-ee18-4cff-bccb-ef413456685f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 17:42:13.737301 master-0 kubenswrapper[7553]: I0318 17:42:13.735235 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22e8652f-ee18-4cff-bccb-ef413456685f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"22e8652f-ee18-4cff-bccb-ef413456685f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 17:42:13.737301 master-0 kubenswrapper[7553]: I0318 17:42:13.736461 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bwcgq"] Mar 18 17:42:13.737301 master-0 kubenswrapper[7553]: I0318 17:42:13.737211 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bwcgq" Mar 18 17:42:13.836665 master-0 kubenswrapper[7553]: I0318 17:42:13.836507 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22e8652f-ee18-4cff-bccb-ef413456685f-var-lock\") pod \"installer-1-master-0\" (UID: \"22e8652f-ee18-4cff-bccb-ef413456685f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 17:42:13.836665 master-0 kubenswrapper[7553]: I0318 17:42:13.836573 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wkqk\" (UniqueName: \"kubernetes.io/projected/efd0d6b1-652c-44b2-b918-5c7ced5d15c3-kube-api-access-5wkqk\") pod \"node-resolver-bwcgq\" (UID: \"efd0d6b1-652c-44b2-b918-5c7ced5d15c3\") " pod="openshift-dns/node-resolver-bwcgq" Mar 18 17:42:13.836665 master-0 kubenswrapper[7553]: I0318 17:42:13.836600 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22e8652f-ee18-4cff-bccb-ef413456685f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"22e8652f-ee18-4cff-bccb-ef413456685f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 17:42:13.836665 master-0 kubenswrapper[7553]: I0318 17:42:13.836657 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22e8652f-ee18-4cff-bccb-ef413456685f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"22e8652f-ee18-4cff-bccb-ef413456685f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 17:42:13.837044 master-0 kubenswrapper[7553]: I0318 17:42:13.836715 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/efd0d6b1-652c-44b2-b918-5c7ced5d15c3-hosts-file\") pod \"node-resolver-bwcgq\" (UID: 
\"efd0d6b1-652c-44b2-b918-5c7ced5d15c3\") " pod="openshift-dns/node-resolver-bwcgq" Mar 18 17:42:13.837044 master-0 kubenswrapper[7553]: I0318 17:42:13.836793 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22e8652f-ee18-4cff-bccb-ef413456685f-var-lock\") pod \"installer-1-master-0\" (UID: \"22e8652f-ee18-4cff-bccb-ef413456685f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 17:42:13.837044 master-0 kubenswrapper[7553]: I0318 17:42:13.836898 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22e8652f-ee18-4cff-bccb-ef413456685f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"22e8652f-ee18-4cff-bccb-ef413456685f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 17:42:13.937756 master-0 kubenswrapper[7553]: I0318 17:42:13.937676 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/efd0d6b1-652c-44b2-b918-5c7ced5d15c3-hosts-file\") pod \"node-resolver-bwcgq\" (UID: \"efd0d6b1-652c-44b2-b918-5c7ced5d15c3\") " pod="openshift-dns/node-resolver-bwcgq" Mar 18 17:42:13.938016 master-0 kubenswrapper[7553]: I0318 17:42:13.937811 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wkqk\" (UniqueName: \"kubernetes.io/projected/efd0d6b1-652c-44b2-b918-5c7ced5d15c3-kube-api-access-5wkqk\") pod \"node-resolver-bwcgq\" (UID: \"efd0d6b1-652c-44b2-b918-5c7ced5d15c3\") " pod="openshift-dns/node-resolver-bwcgq" Mar 18 17:42:13.938158 master-0 kubenswrapper[7553]: I0318 17:42:13.938088 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/efd0d6b1-652c-44b2-b918-5c7ced5d15c3-hosts-file\") pod \"node-resolver-bwcgq\" (UID: \"efd0d6b1-652c-44b2-b918-5c7ced5d15c3\") " 
pod="openshift-dns/node-resolver-bwcgq" Mar 18 17:42:14.040075 master-0 kubenswrapper[7553]: I0318 17:42:14.039980 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:14.040409 master-0 kubenswrapper[7553]: E0318 17:42:14.040208 7553 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 18 17:42:14.040409 master-0 kubenswrapper[7553]: E0318 17:42:14.040381 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls podName:59407fdf-b1e9-4992-a3c8-54b4e26f496c nodeName:}" failed. No retries permitted until 2026-03-18 17:42:15.040344194 +0000 UTC m=+25.186178897 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls") pod "dns-default-lf9xl" (UID: "59407fdf-b1e9-4992-a3c8-54b4e26f496c") : secret "dns-default-metrics-tls" not found Mar 18 17:42:14.067495 master-0 kubenswrapper[7553]: I0318 17:42:14.067417 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d0c4a10-8e58-45a4-813b-efd3ef8353d3" path="/var/lib/kubelet/pods/9d0c4a10-8e58-45a4-813b-efd3ef8353d3/volumes" Mar 18 17:42:14.245013 master-0 kubenswrapper[7553]: I0318 17:42:14.244759 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22e8652f-ee18-4cff-bccb-ef413456685f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"22e8652f-ee18-4cff-bccb-ef413456685f\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 17:42:14.254184 master-0 kubenswrapper[7553]: I0318 17:42:14.254096 7553 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wkqk\" (UniqueName: \"kubernetes.io/projected/efd0d6b1-652c-44b2-b918-5c7ced5d15c3-kube-api-access-5wkqk\") pod \"node-resolver-bwcgq\" (UID: \"efd0d6b1-652c-44b2-b918-5c7ced5d15c3\") " pod="openshift-dns/node-resolver-bwcgq" Mar 18 17:42:14.273185 master-0 kubenswrapper[7553]: I0318 17:42:14.272349 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 17:42:14.346332 master-0 kubenswrapper[7553]: I0318 17:42:14.345823 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-serving-cert\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:14.346332 master-0 kubenswrapper[7553]: E0318 17:42:14.346171 7553 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 18 17:42:14.346332 master-0 kubenswrapper[7553]: E0318 17:42:14.346237 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-serving-cert podName:30d77a7c-222e-41c7-8a98-219854aa3da2 nodeName:}" failed. No retries permitted until 2026-03-18 17:42:16.346217595 +0000 UTC m=+26.492052268 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-serving-cert") pod "apiserver-897b458c6-vsss9" (UID: "30d77a7c-222e-41c7-8a98-219854aa3da2") : secret "serving-cert" not found Mar 18 17:42:14.362759 master-0 kubenswrapper[7553]: I0318 17:42:14.362651 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bwcgq" Mar 18 17:42:15.101176 master-0 kubenswrapper[7553]: I0318 17:42:15.101098 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:15.101900 master-0 kubenswrapper[7553]: E0318 17:42:15.101343 7553 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 18 17:42:15.101900 master-0 kubenswrapper[7553]: E0318 17:42:15.101606 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls podName:59407fdf-b1e9-4992-a3c8-54b4e26f496c nodeName:}" failed. No retries permitted until 2026-03-18 17:42:17.1015841 +0000 UTC m=+27.247418773 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls") pod "dns-default-lf9xl" (UID: "59407fdf-b1e9-4992-a3c8-54b4e26f496c") : secret "dns-default-metrics-tls" not found Mar 18 17:42:16.374967 master-0 kubenswrapper[7553]: I0318 17:42:16.374858 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-serving-cert\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:16.379377 master-0 kubenswrapper[7553]: I0318 17:42:16.379332 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-serving-cert\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:16.571512 master-0 kubenswrapper[7553]: I0318 17:42:16.571428 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 17:42:16.572943 master-0 kubenswrapper[7553]: I0318 17:42:16.572074 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 17:42:16.573164 master-0 kubenswrapper[7553]: I0318 17:42:16.573102 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:16.580546 master-0 kubenswrapper[7553]: I0318 17:42:16.576852 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 18 17:42:16.580546 master-0 kubenswrapper[7553]: I0318 17:42:16.578512 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-var-lock\") pod \"installer-1-master-0\" (UID: \"08451d5b-cf84-45a1-a16d-7ce10a83a6e7\") " pod="openshift-etcd/installer-1-master-0" Mar 18 17:42:16.580546 master-0 kubenswrapper[7553]: I0318 17:42:16.578570 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-kube-api-access\") pod \"installer-1-master-0\" (UID: \"08451d5b-cf84-45a1-a16d-7ce10a83a6e7\") " pod="openshift-etcd/installer-1-master-0" Mar 18 17:42:16.580546 master-0 kubenswrapper[7553]: I0318 17:42:16.578596 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"08451d5b-cf84-45a1-a16d-7ce10a83a6e7\") " pod="openshift-etcd/installer-1-master-0" Mar 18 17:42:16.680268 master-0 kubenswrapper[7553]: I0318 17:42:16.680081 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-kube-api-access\") pod \"installer-1-master-0\" (UID: \"08451d5b-cf84-45a1-a16d-7ce10a83a6e7\") " pod="openshift-etcd/installer-1-master-0" Mar 18 17:42:16.680268 master-0 kubenswrapper[7553]: I0318 17:42:16.680162 7553 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"08451d5b-cf84-45a1-a16d-7ce10a83a6e7\") " pod="openshift-etcd/installer-1-master-0" Mar 18 17:42:16.680815 master-0 kubenswrapper[7553]: I0318 17:42:16.680411 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"08451d5b-cf84-45a1-a16d-7ce10a83a6e7\") " pod="openshift-etcd/installer-1-master-0" Mar 18 17:42:16.680815 master-0 kubenswrapper[7553]: I0318 17:42:16.680429 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-var-lock\") pod \"installer-1-master-0\" (UID: \"08451d5b-cf84-45a1-a16d-7ce10a83a6e7\") " pod="openshift-etcd/installer-1-master-0" Mar 18 17:42:16.680815 master-0 kubenswrapper[7553]: I0318 17:42:16.680489 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-var-lock\") pod \"installer-1-master-0\" (UID: \"08451d5b-cf84-45a1-a16d-7ce10a83a6e7\") " pod="openshift-etcd/installer-1-master-0" Mar 18 17:42:17.186309 master-0 kubenswrapper[7553]: I0318 17:42:17.186213 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:17.186698 master-0 kubenswrapper[7553]: E0318 17:42:17.186583 7553 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 18 17:42:17.186781 master-0 kubenswrapper[7553]: 
E0318 17:42:17.186714 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls podName:59407fdf-b1e9-4992-a3c8-54b4e26f496c nodeName:}" failed. No retries permitted until 2026-03-18 17:42:21.186680531 +0000 UTC m=+31.332515234 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls") pod "dns-default-lf9xl" (UID: "59407fdf-b1e9-4992-a3c8-54b4e26f496c") : secret "dns-default-metrics-tls" not found Mar 18 17:42:18.253010 master-0 kubenswrapper[7553]: I0318 17:42:18.252931 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 17:42:18.287823 master-0 kubenswrapper[7553]: I0318 17:42:18.282722 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-kube-api-access\") pod \"installer-1-master-0\" (UID: \"08451d5b-cf84-45a1-a16d-7ce10a83a6e7\") " pod="openshift-etcd/installer-1-master-0" Mar 18 17:42:18.306235 master-0 kubenswrapper[7553]: I0318 17:42:18.302615 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:18.311251 master-0 kubenswrapper[7553]: I0318 17:42:18.311207 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert\") pod \"route-controller-manager-cb78c4f4b-7s77b\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " 
pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:18.398932 master-0 kubenswrapper[7553]: I0318 17:42:18.398860 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 17:42:18.593003 master-0 kubenswrapper[7553]: I0318 17:42:18.592933 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:21.277570 master-0 kubenswrapper[7553]: I0318 17:42:21.275722 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:21.284729 master-0 kubenswrapper[7553]: I0318 17:42:21.282967 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:21.320914 master-0 kubenswrapper[7553]: I0318 17:42:21.320860 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bwcgq" event={"ID":"efd0d6b1-652c-44b2-b918-5c7ced5d15c3","Type":"ContainerStarted","Data":"6607dcf54fd176dc56698130f9297b2ab4381953d03d40abc0b2240c71f3820b"} Mar 18 17:42:21.473011 master-0 kubenswrapper[7553]: I0318 17:42:21.472941 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:22.110218 master-0 kubenswrapper[7553]: I0318 17:42:22.107325 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 17:42:22.112236 master-0 kubenswrapper[7553]: I0318 17:42:22.111305 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b"] Mar 18 17:42:22.112299 master-0 kubenswrapper[7553]: I0318 17:42:22.112241 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-lf9xl"] Mar 18 17:42:22.116414 master-0 kubenswrapper[7553]: I0318 17:42:22.113335 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-897b458c6-vsss9"] Mar 18 17:42:22.122315 master-0 kubenswrapper[7553]: I0318 17:42:22.117602 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 17:42:22.261494 master-0 kubenswrapper[7553]: W0318 17:42:22.261436 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod414430ec_af84_4826_b5db_c920c7653c7a.slice/crio-c96dd684fd83e0f8e9135640be47949f78da971f446a6ce776803ea3d9b198e7 WatchSource:0}: Error finding container c96dd684fd83e0f8e9135640be47949f78da971f446a6ce776803ea3d9b198e7: Status 404 returned error can't find the container with id c96dd684fd83e0f8e9135640be47949f78da971f446a6ce776803ea3d9b198e7 Mar 18 17:42:22.332147 master-0 kubenswrapper[7553]: I0318 17:42:22.331986 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lf9xl" event={"ID":"59407fdf-b1e9-4992-a3c8-54b4e26f496c","Type":"ContainerStarted","Data":"cfbf03c8cc7b89c553e9ea829ef567259d08d9f435265881b903a1b99dfdd65c"} Mar 18 17:42:22.334117 master-0 kubenswrapper[7553]: I0318 17:42:22.334095 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" event={"ID":"414430ec-af84-4826-b5db-c920c7653c7a","Type":"ContainerStarted","Data":"c96dd684fd83e0f8e9135640be47949f78da971f446a6ce776803ea3d9b198e7"} Mar 18 17:42:22.336712 master-0 kubenswrapper[7553]: I0318 17:42:22.336410 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bwcgq" event={"ID":"efd0d6b1-652c-44b2-b918-5c7ced5d15c3","Type":"ContainerStarted","Data":"b0f37d4e78d8373b6b8eb48b43c4310793bd3f2661b6dab2f75c84acd08ae019"} Mar 18 17:42:22.339791 master-0 kubenswrapper[7553]: I0318 17:42:22.339234 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"08451d5b-cf84-45a1-a16d-7ce10a83a6e7","Type":"ContainerStarted","Data":"ee60fb39e538f57e3a2c9cf050408fd1ce812a3cd024c1de0ff7127a4236fd69"} Mar 18 17:42:22.340890 master-0 kubenswrapper[7553]: I0318 17:42:22.340867 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"22e8652f-ee18-4cff-bccb-ef413456685f","Type":"ContainerStarted","Data":"d0d3e69906c0ae9dcd09afc3f088fea05034a3ae07c3604def2e9ba4e74187c1"} Mar 18 17:42:22.342770 master-0 kubenswrapper[7553]: I0318 17:42:22.342750 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-897b458c6-vsss9" event={"ID":"30d77a7c-222e-41c7-8a98-219854aa3da2","Type":"ContainerStarted","Data":"2939a6d3195afe0f356d31ab56455f8d084b2077c497baf972062cb08363566d"} Mar 18 17:42:23.018462 master-0 kubenswrapper[7553]: I0318 17:42:23.018414 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:42:23.018593 master-0 
kubenswrapper[7553]: I0318 17:42:23.018476 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:42:23.018593 master-0 kubenswrapper[7553]: I0318 17:42:23.018510 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:42:23.018593 master-0 kubenswrapper[7553]: I0318 17:42:23.018556 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:42:23.018593 master-0 kubenswrapper[7553]: I0318 17:42:23.018587 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:42:23.018758 master-0 kubenswrapper[7553]: I0318 17:42:23.018618 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:42:23.018758 master-0 kubenswrapper[7553]: I0318 17:42:23.018666 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:42:23.018758 master-0 kubenswrapper[7553]: I0318 17:42:23.018696 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:42:23.025390 master-0 kubenswrapper[7553]: I0318 17:42:23.025323 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-gr8jc\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:42:23.025691 master-0 kubenswrapper[7553]: I0318 17:42:23.025645 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:42:23.026060 master-0 kubenswrapper[7553]: I0318 17:42:23.025968 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:42:23.026513 master-0 kubenswrapper[7553]: I0318 17:42:23.026462 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:42:23.026919 master-0 kubenswrapper[7553]: I0318 17:42:23.026874 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:42:23.027338 master-0 kubenswrapper[7553]: I0318 17:42:23.027252 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:42:23.028263 master-0 kubenswrapper[7553]: I0318 17:42:23.027440 7553 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:42:23.028354 master-0 kubenswrapper[7553]: I0318 17:42:23.028161 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:42:23.113259 master-0 kubenswrapper[7553]: I0318 17:42:23.113153 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:42:23.114318 master-0 kubenswrapper[7553]: I0318 17:42:23.114255 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 17:42:23.114672 master-0 kubenswrapper[7553]: I0318 17:42:23.114578 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:42:23.123851 master-0 kubenswrapper[7553]: I0318 17:42:23.123769 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 17:42:23.123977 master-0 kubenswrapper[7553]: I0318 17:42:23.123754 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:42:23.125153 master-0 kubenswrapper[7553]: I0318 17:42:23.125109 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:42:23.126938 master-0 kubenswrapper[7553]: I0318 17:42:23.126902 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:42:23.382526 master-0 kubenswrapper[7553]: I0318 17:42:23.382452 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"08451d5b-cf84-45a1-a16d-7ce10a83a6e7","Type":"ContainerStarted","Data":"5314ec05fb03281eaddcd24c27457c3fda717a46b41bfa95e18bf5f7470daeb4"} Mar 18 17:42:23.391991 master-0 kubenswrapper[7553]: I0318 17:42:23.391088 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"22e8652f-ee18-4cff-bccb-ef413456685f","Type":"ContainerStarted","Data":"e0ce789b272d7ec4bd7aac94ac37ecdd2765bd0434e740bbb25752a48e70911e"} Mar 18 17:42:23.408352 master-0 kubenswrapper[7553]: I0318 17:42:23.408220 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2" event={"ID":"7047a862-8cbe-46fb-9af3-06ba224cbe26","Type":"ContainerStarted","Data":"521e38f1827202ef01c663faadd2ffa7d8f597f8bcc9110b0d13bccc42f074bc"} Mar 18 17:42:23.526375 master-0 kubenswrapper[7553]: I0318 17:42:23.526187 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bwcgq" podStartSLOduration=10.526159556 podStartE2EDuration="10.526159556s" podCreationTimestamp="2026-03-18 17:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:23.524791556 +0000 UTC m=+33.670626229" watchObservedRunningTime="2026-03-18 17:42:23.526159556 +0000 UTC m=+33.671994239" Mar 18 17:42:23.572949 master-0 kubenswrapper[7553]: I0318 17:42:23.572869 7553 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=9.572842683 podStartE2EDuration="9.572842683s" podCreationTimestamp="2026-03-18 17:42:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:23.565770418 +0000 UTC m=+33.711605111" watchObservedRunningTime="2026-03-18 17:42:23.572842683 +0000 UTC m=+33.718677356" Mar 18 17:42:23.580900 master-0 kubenswrapper[7553]: I0318 17:42:23.580846 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-688fbbb854-6n26v"] Mar 18 17:42:23.585292 master-0 kubenswrapper[7553]: I0318 17:42:23.581894 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.590294 master-0 kubenswrapper[7553]: I0318 17:42:23.586143 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 17:42:23.590294 master-0 kubenswrapper[7553]: I0318 17:42:23.588889 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 17:42:23.590294 master-0 kubenswrapper[7553]: I0318 17:42:23.589079 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 17:42:23.590294 master-0 kubenswrapper[7553]: I0318 17:42:23.589216 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 17:42:23.590294 master-0 kubenswrapper[7553]: I0318 17:42:23.589345 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 17:42:23.590294 master-0 kubenswrapper[7553]: I0318 17:42:23.589472 7553 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 17:42:23.590294 master-0 kubenswrapper[7553]: I0318 17:42:23.589591 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 17:42:23.590294 master-0 kubenswrapper[7553]: I0318 17:42:23.589728 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 17:42:23.623296 master-0 kubenswrapper[7553]: I0318 17:42:23.620402 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=10.620381469 podStartE2EDuration="10.620381469s" podCreationTimestamp="2026-03-18 17:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:23.617996457 +0000 UTC m=+33.763831150" watchObservedRunningTime="2026-03-18 17:42:23.620381469 +0000 UTC m=+33.766216142" Mar 18 17:42:23.623296 master-0 kubenswrapper[7553]: I0318 17:42:23.620734 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-688fbbb854-6n26v"] Mar 18 17:42:23.652120 master-0 kubenswrapper[7553]: I0318 17:42:23.644639 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-trusted-ca-bundle\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.652120 master-0 kubenswrapper[7553]: I0318 17:42:23.644697 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-etcd-serving-ca\") pod \"apiserver-688fbbb854-6n26v\" (UID: 
\"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.652120 master-0 kubenswrapper[7553]: I0318 17:42:23.644749 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-encryption-config\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.652120 master-0 kubenswrapper[7553]: I0318 17:42:23.644768 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-audit-dir\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.652120 master-0 kubenswrapper[7553]: I0318 17:42:23.644799 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-serving-cert\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.652120 master-0 kubenswrapper[7553]: I0318 17:42:23.644821 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-audit-policies\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.652120 master-0 kubenswrapper[7553]: I0318 17:42:23.644839 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-etcd-client\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.652120 master-0 kubenswrapper[7553]: I0318 17:42:23.644881 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsj86\" (UniqueName: \"kubernetes.io/projected/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-kube-api-access-rsj86\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.726332 master-0 kubenswrapper[7553]: I0318 17:42:23.720230 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c"] Mar 18 17:42:23.748999 master-0 kubenswrapper[7553]: I0318 17:42:23.736532 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:23.748999 master-0 kubenswrapper[7553]: I0318 17:42:23.747227 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-audit-policies\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.748999 master-0 kubenswrapper[7553]: I0318 17:42:23.747299 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-etcd-client\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.748999 master-0 kubenswrapper[7553]: I0318 17:42:23.747354 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsj86\" (UniqueName: \"kubernetes.io/projected/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-kube-api-access-rsj86\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.748999 master-0 kubenswrapper[7553]: I0318 17:42:23.747391 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-trusted-ca-bundle\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.748999 master-0 kubenswrapper[7553]: I0318 17:42:23.747416 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-etcd-serving-ca\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.748999 master-0 kubenswrapper[7553]: I0318 17:42:23.747448 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-encryption-config\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.748999 master-0 kubenswrapper[7553]: I0318 17:42:23.747477 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-audit-dir\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.748999 master-0 kubenswrapper[7553]: I0318 17:42:23.747502 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-serving-cert\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.748999 master-0 kubenswrapper[7553]: I0318 17:42:23.749003 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-audit-dir\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.766298 master-0 kubenswrapper[7553]: I0318 17:42:23.749700 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-audit-policies\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.766298 master-0 kubenswrapper[7553]: I0318 17:42:23.751434 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-trusted-ca-bundle\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.766298 master-0 kubenswrapper[7553]: I0318 17:42:23.751599 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-etcd-serving-ca\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.766298 master-0 kubenswrapper[7553]: I0318 17:42:23.759689 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 18 17:42:23.766298 master-0 kubenswrapper[7553]: I0318 17:42:23.759948 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 18 17:42:23.766298 master-0 kubenswrapper[7553]: I0318 17:42:23.760093 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 18 17:42:23.766298 master-0 kubenswrapper[7553]: I0318 17:42:23.760949 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c"] Mar 18 17:42:23.781421 master-0 kubenswrapper[7553]: I0318 17:42:23.770499 7553 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c846c589b-4cpj2"] Mar 18 17:42:23.781421 master-0 kubenswrapper[7553]: I0318 17:42:23.770775 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" podUID="dedeb921-f1f2-4fa4-8d16-8740b1c0cd14" containerName="controller-manager" containerID="cri-o://5b6eed714222e25752fdf63e3f8f6cfb66e7b124c5f70e15ae2f2054a7693438" gracePeriod=30 Mar 18 17:42:23.781421 master-0 kubenswrapper[7553]: I0318 17:42:23.775043 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-etcd-client\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.781421 master-0 kubenswrapper[7553]: I0318 17:42:23.777083 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-serving-cert\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.781421 master-0 kubenswrapper[7553]: I0318 17:42:23.780478 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-encryption-config\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.813383 master-0 kubenswrapper[7553]: I0318 17:42:23.805009 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b"] Mar 18 17:42:23.818753 master-0 kubenswrapper[7553]: I0318 
17:42:23.818709 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsj86\" (UniqueName: \"kubernetes.io/projected/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-kube-api-access-rsj86\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:23.850357 master-0 kubenswrapper[7553]: I0318 17:42:23.849920 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/efbcb147-d077-4749-9289-1682daccb657-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:23.850357 master-0 kubenswrapper[7553]: I0318 17:42:23.850021 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/efbcb147-d077-4749-9289-1682daccb657-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:23.850357 master-0 kubenswrapper[7553]: I0318 17:42:23.850053 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqrdl\" (UniqueName: \"kubernetes.io/projected/efbcb147-d077-4749-9289-1682daccb657-kube-api-access-vqrdl\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:23.850357 master-0 kubenswrapper[7553]: I0318 17:42:23.850088 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/efbcb147-d077-4749-9289-1682daccb657-cache\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:23.850357 master-0 kubenswrapper[7553]: I0318 17:42:23.850116 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/efbcb147-d077-4749-9289-1682daccb657-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:23.857971 master-0 kubenswrapper[7553]: I0318 17:42:23.856794 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv"] Mar 18 17:42:23.858137 master-0 kubenswrapper[7553]: I0318 17:42:23.858081 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:23.866213 master-0 kubenswrapper[7553]: I0318 17:42:23.863099 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 17:42:23.866213 master-0 kubenswrapper[7553]: I0318 17:42:23.863482 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 17:42:23.866213 master-0 kubenswrapper[7553]: I0318 17:42:23.863511 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 17:42:23.874161 master-0 kubenswrapper[7553]: I0318 17:42:23.874120 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 18 17:42:23.884248 master-0 kubenswrapper[7553]: I0318 17:42:23.884195 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv"] Mar 18 17:42:23.947841 master-0 kubenswrapper[7553]: I0318 17:42:23.947795 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 17:42:23.955295 master-0 kubenswrapper[7553]: I0318 17:42:23.951876 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/56cde2f7-1742-45d6-aa22-8270cfb424a7-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:23.955295 master-0 kubenswrapper[7553]: I0318 17:42:23.951932 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/efbcb147-d077-4749-9289-1682daccb657-etc-docker\") pod 
\"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:23.955295 master-0 kubenswrapper[7553]: I0318 17:42:23.951955 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/56cde2f7-1742-45d6-aa22-8270cfb424a7-cache\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:23.955295 master-0 kubenswrapper[7553]: I0318 17:42:23.951981 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/efbcb147-d077-4749-9289-1682daccb657-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:23.955295 master-0 kubenswrapper[7553]: I0318 17:42:23.952194 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/efbcb147-d077-4749-9289-1682daccb657-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:23.955295 master-0 kubenswrapper[7553]: I0318 17:42:23.952254 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/efbcb147-d077-4749-9289-1682daccb657-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:23.955295 master-0 kubenswrapper[7553]: I0318 17:42:23.952360 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/56cde2f7-1742-45d6-aa22-8270cfb424a7-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:23.955295 master-0 kubenswrapper[7553]: I0318 17:42:23.952405 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/56cde2f7-1742-45d6-aa22-8270cfb424a7-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:23.955295 master-0 kubenswrapper[7553]: I0318 17:42:23.952463 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/56cde2f7-1742-45d6-aa22-8270cfb424a7-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:23.955295 master-0 kubenswrapper[7553]: I0318 17:42:23.952504 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqrdl\" (UniqueName: \"kubernetes.io/projected/efbcb147-d077-4749-9289-1682daccb657-kube-api-access-vqrdl\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:23.955295 master-0 
kubenswrapper[7553]: I0318 17:42:23.952535 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/efbcb147-d077-4749-9289-1682daccb657-cache\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:23.955295 master-0 kubenswrapper[7553]: I0318 17:42:23.952590 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/efbcb147-d077-4749-9289-1682daccb657-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:23.955295 master-0 kubenswrapper[7553]: I0318 17:42:23.952608 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbctm\" (UniqueName: \"kubernetes.io/projected/56cde2f7-1742-45d6-aa22-8270cfb424a7-kube-api-access-mbctm\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:23.955295 master-0 kubenswrapper[7553]: I0318 17:42:23.953204 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/efbcb147-d077-4749-9289-1682daccb657-cache\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:23.957299 master-0 kubenswrapper[7553]: I0318 17:42:23.957246 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/projected/efbcb147-d077-4749-9289-1682daccb657-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:23.968836 master-0 kubenswrapper[7553]: I0318 17:42:23.968799 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqrdl\" (UniqueName: \"kubernetes.io/projected/efbcb147-d077-4749-9289-1682daccb657-kube-api-access-vqrdl\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:24.039479 master-0 kubenswrapper[7553]: I0318 17:42:24.039312 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:24.054514 master-0 kubenswrapper[7553]: I0318 17:42:24.054443 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbctm\" (UniqueName: \"kubernetes.io/projected/56cde2f7-1742-45d6-aa22-8270cfb424a7-kube-api-access-mbctm\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:24.054690 master-0 kubenswrapper[7553]: I0318 17:42:24.054558 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/56cde2f7-1742-45d6-aa22-8270cfb424a7-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:24.054690 master-0 kubenswrapper[7553]: I0318 17:42:24.054607 7553 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/56cde2f7-1742-45d6-aa22-8270cfb424a7-cache\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:24.054690 master-0 kubenswrapper[7553]: I0318 17:42:24.054674 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:42:24.054777 master-0 kubenswrapper[7553]: I0318 17:42:24.054705 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/56cde2f7-1742-45d6-aa22-8270cfb424a7-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:24.056652 master-0 kubenswrapper[7553]: I0318 17:42:24.055084 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/56cde2f7-1742-45d6-aa22-8270cfb424a7-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:24.056652 master-0 kubenswrapper[7553]: I0318 17:42:24.055149 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/56cde2f7-1742-45d6-aa22-8270cfb424a7-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " 
pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:24.056652 master-0 kubenswrapper[7553]: I0318 17:42:24.055333 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/56cde2f7-1742-45d6-aa22-8270cfb424a7-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:24.056652 master-0 kubenswrapper[7553]: I0318 17:42:24.055640 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/56cde2f7-1742-45d6-aa22-8270cfb424a7-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:24.056652 master-0 kubenswrapper[7553]: I0318 17:42:24.056609 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/56cde2f7-1742-45d6-aa22-8270cfb424a7-cache\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:24.058897 master-0 kubenswrapper[7553]: I0318 17:42:24.058863 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:42:24.060389 master-0 kubenswrapper[7553]: I0318 17:42:24.060347 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: 
\"kubernetes.io/secret/56cde2f7-1742-45d6-aa22-8270cfb424a7-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:24.061379 master-0 kubenswrapper[7553]: I0318 17:42:24.061265 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/56cde2f7-1742-45d6-aa22-8270cfb424a7-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:24.117581 master-0 kubenswrapper[7553]: I0318 17:42:24.114036 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 17:42:24.125723 master-0 kubenswrapper[7553]: I0318 17:42:24.125579 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbctm\" (UniqueName: \"kubernetes.io/projected/56cde2f7-1742-45d6-aa22-8270cfb424a7-kube-api-access-mbctm\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:24.210307 master-0 kubenswrapper[7553]: I0318 17:42:24.207749 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:24.237304 master-0 kubenswrapper[7553]: I0318 17:42:24.234119 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:24.325598 master-0 kubenswrapper[7553]: I0318 17:42:24.325251 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 17:42:24.685921 master-0 kubenswrapper[7553]: I0318 17:42:24.685785 7553 patch_prober.go:28] interesting pod/controller-manager-7c846c589b-4cpj2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.35:8443/healthz\": dial tcp 10.128.0.35:8443: connect: connection refused" start-of-body= Mar 18 17:42:24.686845 master-0 kubenswrapper[7553]: I0318 17:42:24.685880 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" podUID="dedeb921-f1f2-4fa4-8d16-8740b1c0cd14" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.35:8443/healthz\": dial tcp 10.128.0.35:8443: connect: connection refused" Mar 18 17:42:25.415571 master-0 kubenswrapper[7553]: I0318 17:42:25.415195 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="22e8652f-ee18-4cff-bccb-ef413456685f" containerName="installer" containerID="cri-o://e0ce789b272d7ec4bd7aac94ac37ecdd2765bd0434e740bbb25752a48e70911e" gracePeriod=30 Mar 18 17:42:26.423342 master-0 kubenswrapper[7553]: I0318 17:42:26.423239 7553 generic.go:334] "Generic (PLEG): container finished" podID="dedeb921-f1f2-4fa4-8d16-8740b1c0cd14" containerID="5b6eed714222e25752fdf63e3f8f6cfb66e7b124c5f70e15ae2f2054a7693438" exitCode=0 Mar 18 17:42:26.423342 master-0 kubenswrapper[7553]: I0318 17:42:26.423342 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" event={"ID":"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14","Type":"ContainerDied","Data":"5b6eed714222e25752fdf63e3f8f6cfb66e7b124c5f70e15ae2f2054a7693438"} Mar 18 17:42:26.771564 master-0 kubenswrapper[7553]: I0318 17:42:26.771520 7553 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 17:42:26.776104 master-0 kubenswrapper[7553]: I0318 17:42:26.775761 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 17:42:26.779567 master-0 kubenswrapper[7553]: I0318 17:42:26.779539 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 17:42:26.894961 master-0 kubenswrapper[7553]: I0318 17:42:26.894901 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"] Mar 18 17:42:26.898460 master-0 kubenswrapper[7553]: I0318 17:42:26.898340 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7ee0d87f-dc6e-44d7-ab20-0118116ec893-var-lock\") pod \"installer-2-master-0\" (UID: \"7ee0d87f-dc6e-44d7-ab20-0118116ec893\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 17:42:26.898672 master-0 kubenswrapper[7553]: I0318 17:42:26.898600 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ee0d87f-dc6e-44d7-ab20-0118116ec893-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7ee0d87f-dc6e-44d7-ab20-0118116ec893\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 17:42:26.898837 master-0 kubenswrapper[7553]: I0318 17:42:26.898793 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ee0d87f-dc6e-44d7-ab20-0118116ec893-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7ee0d87f-dc6e-44d7-ab20-0118116ec893\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 17:42:27.002627 master-0 kubenswrapper[7553]: I0318 17:42:27.000810 7553 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7ee0d87f-dc6e-44d7-ab20-0118116ec893-var-lock\") pod \"installer-2-master-0\" (UID: \"7ee0d87f-dc6e-44d7-ab20-0118116ec893\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 17:42:27.002627 master-0 kubenswrapper[7553]: I0318 17:42:27.000913 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ee0d87f-dc6e-44d7-ab20-0118116ec893-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7ee0d87f-dc6e-44d7-ab20-0118116ec893\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 17:42:27.002627 master-0 kubenswrapper[7553]: I0318 17:42:27.000986 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ee0d87f-dc6e-44d7-ab20-0118116ec893-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7ee0d87f-dc6e-44d7-ab20-0118116ec893\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 17:42:27.002627 master-0 kubenswrapper[7553]: I0318 17:42:27.001053 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7ee0d87f-dc6e-44d7-ab20-0118116ec893-var-lock\") pod \"installer-2-master-0\" (UID: \"7ee0d87f-dc6e-44d7-ab20-0118116ec893\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 17:42:27.002627 master-0 kubenswrapper[7553]: I0318 17:42:27.001197 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ee0d87f-dc6e-44d7-ab20-0118116ec893-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7ee0d87f-dc6e-44d7-ab20-0118116ec893\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 17:42:27.047264 master-0 kubenswrapper[7553]: I0318 17:42:27.046399 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ee0d87f-dc6e-44d7-ab20-0118116ec893-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7ee0d87f-dc6e-44d7-ab20-0118116ec893\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 17:42:27.109092 master-0 kubenswrapper[7553]: I0318 17:42:27.109032 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 17:42:27.961364 master-0 kubenswrapper[7553]: I0318 17:42:27.961286 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 18 17:42:27.962349 master-0 kubenswrapper[7553]: I0318 17:42:27.962029 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 17:42:27.972259 master-0 kubenswrapper[7553]: I0318 17:42:27.972202 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 18 17:42:27.972629 master-0 kubenswrapper[7553]: I0318 17:42:27.972585 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 17:42:28.020219 master-0 kubenswrapper[7553]: I0318 17:42:28.018000 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41191498-89c5-44dc-b648-dbea889c72f5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"41191498-89c5-44dc-b648-dbea889c72f5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 17:42:28.020219 master-0 kubenswrapper[7553]: I0318 17:42:28.018040 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41191498-89c5-44dc-b648-dbea889c72f5-var-lock\") pod \"installer-1-master-0\" (UID: \"41191498-89c5-44dc-b648-dbea889c72f5\") " 
pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 17:42:28.020219 master-0 kubenswrapper[7553]: I0318 17:42:28.018062 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41191498-89c5-44dc-b648-dbea889c72f5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"41191498-89c5-44dc-b648-dbea889c72f5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 17:42:28.021846 master-0 kubenswrapper[7553]: I0318 17:42:28.021814 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:28.095102 master-0 kubenswrapper[7553]: I0318 17:42:28.095070 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f9db7db88-vbx76"] Mar 18 17:42:28.095325 master-0 kubenswrapper[7553]: E0318 17:42:28.095306 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedeb921-f1f2-4fa4-8d16-8740b1c0cd14" containerName="controller-manager" Mar 18 17:42:28.095325 master-0 kubenswrapper[7553]: I0318 17:42:28.095324 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedeb921-f1f2-4fa4-8d16-8740b1c0cd14" containerName="controller-manager" Mar 18 17:42:28.095430 master-0 kubenswrapper[7553]: I0318 17:42:28.095414 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="dedeb921-f1f2-4fa4-8d16-8740b1c0cd14" containerName="controller-manager" Mar 18 17:42:28.095757 master-0 kubenswrapper[7553]: I0318 17:42:28.095739 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f9db7db88-vbx76"] Mar 18 17:42:28.095847 master-0 kubenswrapper[7553]: I0318 17:42:28.095829 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.118878 master-0 kubenswrapper[7553]: I0318 17:42:28.118538 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-serving-cert\") pod \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " Mar 18 17:42:28.119037 master-0 kubenswrapper[7553]: I0318 17:42:28.118905 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-client-ca\") pod \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " Mar 18 17:42:28.119037 master-0 kubenswrapper[7553]: I0318 17:42:28.118995 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-proxy-ca-bundles\") pod \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " Mar 18 17:42:28.119037 master-0 kubenswrapper[7553]: I0318 17:42:28.119027 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq4hv\" (UniqueName: \"kubernetes.io/projected/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-kube-api-access-fq4hv\") pod \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " Mar 18 17:42:28.119155 master-0 kubenswrapper[7553]: I0318 17:42:28.119076 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-config\") pod \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\" (UID: \"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14\") " Mar 18 17:42:28.119239 master-0 kubenswrapper[7553]: I0318 17:42:28.119217 7553 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41191498-89c5-44dc-b648-dbea889c72f5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"41191498-89c5-44dc-b648-dbea889c72f5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 17:42:28.119293 master-0 kubenswrapper[7553]: I0318 17:42:28.119243 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41191498-89c5-44dc-b648-dbea889c72f5-var-lock\") pod \"installer-1-master-0\" (UID: \"41191498-89c5-44dc-b648-dbea889c72f5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 17:42:28.119293 master-0 kubenswrapper[7553]: I0318 17:42:28.119263 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41191498-89c5-44dc-b648-dbea889c72f5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"41191498-89c5-44dc-b648-dbea889c72f5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 17:42:28.119760 master-0 kubenswrapper[7553]: I0318 17:42:28.119715 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-client-ca" (OuterVolumeSpecName: "client-ca") pod "dedeb921-f1f2-4fa4-8d16-8740b1c0cd14" (UID: "dedeb921-f1f2-4fa4-8d16-8740b1c0cd14"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:28.119967 master-0 kubenswrapper[7553]: I0318 17:42:28.119932 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41191498-89c5-44dc-b648-dbea889c72f5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"41191498-89c5-44dc-b648-dbea889c72f5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 17:42:28.120013 master-0 kubenswrapper[7553]: I0318 17:42:28.119997 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41191498-89c5-44dc-b648-dbea889c72f5-var-lock\") pod \"installer-1-master-0\" (UID: \"41191498-89c5-44dc-b648-dbea889c72f5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 17:42:28.120225 master-0 kubenswrapper[7553]: I0318 17:42:28.120186 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-config" (OuterVolumeSpecName: "config") pod "dedeb921-f1f2-4fa4-8d16-8740b1c0cd14" (UID: "dedeb921-f1f2-4fa4-8d16-8740b1c0cd14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:28.120225 master-0 kubenswrapper[7553]: I0318 17:42:28.120194 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dedeb921-f1f2-4fa4-8d16-8740b1c0cd14" (UID: "dedeb921-f1f2-4fa4-8d16-8740b1c0cd14"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:28.132144 master-0 kubenswrapper[7553]: I0318 17:42:28.132105 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dedeb921-f1f2-4fa4-8d16-8740b1c0cd14" (UID: "dedeb921-f1f2-4fa4-8d16-8740b1c0cd14"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 17:42:28.134421 master-0 kubenswrapper[7553]: I0318 17:42:28.134365 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-kube-api-access-fq4hv" (OuterVolumeSpecName: "kube-api-access-fq4hv") pod "dedeb921-f1f2-4fa4-8d16-8740b1c0cd14" (UID: "dedeb921-f1f2-4fa4-8d16-8740b1c0cd14"). InnerVolumeSpecName "kube-api-access-fq4hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:42:28.142360 master-0 kubenswrapper[7553]: I0318 17:42:28.142331 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41191498-89c5-44dc-b648-dbea889c72f5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"41191498-89c5-44dc-b648-dbea889c72f5\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 17:42:28.221732 master-0 kubenswrapper[7553]: I0318 17:42:28.221261 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4b73dcd-592d-493e-926b-7264fb81aa8e-serving-cert\") pod \"controller-manager-7f9db7db88-vbx76\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.221732 master-0 kubenswrapper[7553]: I0318 17:42:28.221321 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-proxy-ca-bundles\") pod \"controller-manager-7f9db7db88-vbx76\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.221732 master-0 kubenswrapper[7553]: I0318 17:42:28.221344 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58cnc\" (UniqueName: \"kubernetes.io/projected/d4b73dcd-592d-493e-926b-7264fb81aa8e-kube-api-access-58cnc\") pod \"controller-manager-7f9db7db88-vbx76\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.221732 master-0 kubenswrapper[7553]: I0318 17:42:28.221415 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-config\") pod \"controller-manager-7f9db7db88-vbx76\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.221732 master-0 kubenswrapper[7553]: I0318 17:42:28.221451 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-client-ca\") pod \"controller-manager-7f9db7db88-vbx76\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.221732 master-0 kubenswrapper[7553]: I0318 17:42:28.221487 7553 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:28.221732 master-0 kubenswrapper[7553]: I0318 17:42:28.221499 7553 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq4hv\" (UniqueName: \"kubernetes.io/projected/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-kube-api-access-fq4hv\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:28.221732 master-0 kubenswrapper[7553]: I0318 17:42:28.221511 7553 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-config\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:28.221732 master-0 kubenswrapper[7553]: I0318 17:42:28.221523 7553 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:28.221732 master-0 kubenswrapper[7553]: I0318 17:42:28.221532 7553 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:28.237573 master-0 kubenswrapper[7553]: I0318 17:42:28.237056 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-l5gm7"] Mar 18 17:42:28.280469 master-0 kubenswrapper[7553]: I0318 17:42:28.279957 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-688fbbb854-6n26v"] Mar 18 17:42:28.323176 master-0 kubenswrapper[7553]: I0318 17:42:28.323119 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-client-ca\") pod \"controller-manager-7f9db7db88-vbx76\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.323176 master-0 kubenswrapper[7553]: I0318 17:42:28.323167 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4b73dcd-592d-493e-926b-7264fb81aa8e-serving-cert\") pod \"controller-manager-7f9db7db88-vbx76\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.323452 master-0 kubenswrapper[7553]: I0318 17:42:28.323375 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58cnc\" (UniqueName: \"kubernetes.io/projected/d4b73dcd-592d-493e-926b-7264fb81aa8e-kube-api-access-58cnc\") pod \"controller-manager-7f9db7db88-vbx76\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.323452 master-0 kubenswrapper[7553]: I0318 17:42:28.323428 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-proxy-ca-bundles\") pod \"controller-manager-7f9db7db88-vbx76\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.323768 master-0 kubenswrapper[7553]: I0318 17:42:28.323591 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-config\") pod \"controller-manager-7f9db7db88-vbx76\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.324991 master-0 kubenswrapper[7553]: I0318 17:42:28.324168 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-client-ca\") pod \"controller-manager-7f9db7db88-vbx76\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" 
Mar 18 17:42:28.325343 master-0 kubenswrapper[7553]: I0318 17:42:28.325017 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-proxy-ca-bundles\") pod \"controller-manager-7f9db7db88-vbx76\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.325343 master-0 kubenswrapper[7553]: I0318 17:42:28.325263 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-config\") pod \"controller-manager-7f9db7db88-vbx76\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.328684 master-0 kubenswrapper[7553]: I0318 17:42:28.328448 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4b73dcd-592d-493e-926b-7264fb81aa8e-serving-cert\") pod \"controller-manager-7f9db7db88-vbx76\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.351779 master-0 kubenswrapper[7553]: I0318 17:42:28.351725 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58cnc\" (UniqueName: \"kubernetes.io/projected/d4b73dcd-592d-493e-926b-7264fb81aa8e-kube-api-access-58cnc\") pod \"controller-manager-7f9db7db88-vbx76\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.386778 master-0 kubenswrapper[7553]: I0318 17:42:28.386707 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 17:42:28.395016 master-0 kubenswrapper[7553]: I0318 17:42:28.393484 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c"] Mar 18 17:42:28.433856 master-0 kubenswrapper[7553]: I0318 17:42:28.433803 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" event={"ID":"37b3753f-bf4f-4a9e-a4a8-d58296bada79","Type":"ContainerStarted","Data":"ce5639dc0f602d1c7e6ad6fc44e82114cfe133ad8a9de1890037405180569936"} Mar 18 17:42:28.434988 master-0 kubenswrapper[7553]: I0318 17:42:28.434967 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" event={"ID":"dedeb921-f1f2-4fa4-8d16-8740b1c0cd14","Type":"ContainerDied","Data":"cc2910a0cd567315922fb83de14c3f15ace2cb8fa5a09873d2b88ea103feb4a5"} Mar 18 17:42:28.435053 master-0 kubenswrapper[7553]: I0318 17:42:28.435002 7553 scope.go:117] "RemoveContainer" containerID="5b6eed714222e25752fdf63e3f8f6cfb66e7b124c5f70e15ae2f2054a7693438" Mar 18 17:42:28.435147 master-0 kubenswrapper[7553]: I0318 17:42:28.435129 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c846c589b-4cpj2" Mar 18 17:42:28.457679 master-0 kubenswrapper[7553]: I0318 17:42:28.457642 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc"] Mar 18 17:42:28.473556 master-0 kubenswrapper[7553]: I0318 17:42:28.469527 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4"] Mar 18 17:42:28.485092 master-0 kubenswrapper[7553]: I0318 17:42:28.485032 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c846c589b-4cpj2"] Mar 18 17:42:28.486879 master-0 kubenswrapper[7553]: I0318 17:42:28.486857 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c846c589b-4cpj2"] Mar 18 17:42:28.491357 master-0 kubenswrapper[7553]: I0318 17:42:28.491334 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:28.527808 master-0 kubenswrapper[7553]: I0318 17:42:28.527725 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz"] Mar 18 17:42:28.528031 master-0 kubenswrapper[7553]: I0318 17:42:28.527906 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mfn52"] Mar 18 17:42:28.534355 master-0 kubenswrapper[7553]: I0318 17:42:28.532577 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv"] Mar 18 17:42:28.534355 master-0 kubenswrapper[7553]: I0318 17:42:28.533719 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg"] Mar 18 17:42:28.535016 master-0 kubenswrapper[7553]: I0318 17:42:28.534967 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr"] Mar 18 17:42:28.889314 master-0 kubenswrapper[7553]: W0318 17:42:28.885131 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce5831a6_5a8d_4cda_9299_5d86437bcab2.slice/crio-dd1b805aae172e18f337dd45784c075e0ad3687afa3a8879338aa90a6a42ed54 WatchSource:0}: Error finding container dd1b805aae172e18f337dd45784c075e0ad3687afa3a8879338aa90a6a42ed54: Status 404 returned error can't find the container with id dd1b805aae172e18f337dd45784c075e0ad3687afa3a8879338aa90a6a42ed54 Mar 18 17:42:28.889314 master-0 kubenswrapper[7553]: W0318 17:42:28.887079 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8c34df1_ea0d_4dfa_bf4d_5b58dc5bee8e.slice/crio-a1b64f60bcb1d57a34f6bca29856ee1a6dadd3b9493681f5dd98bb90b3066e3b WatchSource:0}: Error finding 
container a1b64f60bcb1d57a34f6bca29856ee1a6dadd3b9493681f5dd98bb90b3066e3b: Status 404 returned error can't find the container with id a1b64f60bcb1d57a34f6bca29856ee1a6dadd3b9493681f5dd98bb90b3066e3b Mar 18 17:42:28.893335 master-0 kubenswrapper[7553]: W0318 17:42:28.893077 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd26d4515_391e_41a5_8c82_1b2b8a375662.slice/crio-8381cd7a6e5c885500f3bdd0849aefb2b5f39ab2f05f498f742ce3eacc790c78 WatchSource:0}: Error finding container 8381cd7a6e5c885500f3bdd0849aefb2b5f39ab2f05f498f742ce3eacc790c78: Status 404 returned error can't find the container with id 8381cd7a6e5c885500f3bdd0849aefb2b5f39ab2f05f498f742ce3eacc790c78 Mar 18 17:42:29.148648 master-0 kubenswrapper[7553]: I0318 17:42:29.141682 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 17:42:29.191475 master-0 kubenswrapper[7553]: I0318 17:42:29.191396 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f9db7db88-vbx76"] Mar 18 17:42:29.213856 master-0 kubenswrapper[7553]: I0318 17:42:29.212866 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 18 17:42:29.464889 master-0 kubenswrapper[7553]: I0318 17:42:29.464304 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" event={"ID":"7c6694a8-ccd0-491b-9f21-215450f6ce67","Type":"ContainerStarted","Data":"6af98a7327b83a0f9fcfd3425055ee2bbebd96176bf419d80ea4f980729da819"} Mar 18 17:42:29.473166 master-0 kubenswrapper[7553]: I0318 17:42:29.473100 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mfn52" 
event={"ID":"5a4f94f3-d63a-4869-b723-ae9637610b4b","Type":"ContainerStarted","Data":"07ab0c66a64f7bf6d68ef0555d877888ab4c67aaec1ac0fea7f62d1ed0bed612"} Mar 18 17:42:29.477418 master-0 kubenswrapper[7553]: I0318 17:42:29.476412 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2" event={"ID":"7047a862-8cbe-46fb-9af3-06ba224cbe26","Type":"ContainerStarted","Data":"d8f8178c4236408acdfbba63df9d5d1cd40be7f539e96fe2a75db241f4c2334e"} Mar 18 17:42:29.498706 master-0 kubenswrapper[7553]: I0318 17:42:29.498648 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" event={"ID":"a02399de-859b-45b1-9b00-18a08f285f39","Type":"ContainerStarted","Data":"dcdc5126bc7dc1f71b0c2b6aa40d9d36da39eb734a75c107c672d7a72b2e46fb"} Mar 18 17:42:29.500339 master-0 kubenswrapper[7553]: I0318 17:42:29.500308 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-r6tf4"] Mar 18 17:42:29.538809 master-0 kubenswrapper[7553]: I0318 17:42:29.534431 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" event={"ID":"43fab0f2-5cfd-4b5e-a632-728fd5b960fd","Type":"ContainerStarted","Data":"86bb0fefbe9a7075d6c0212cf27e6d83a749aa0d66749340ff4d2f7ce29488f0"} Mar 18 17:42:29.538809 master-0 kubenswrapper[7553]: I0318 17:42:29.534915 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" event={"ID":"e9e04572-1425-440e-9869-6deef05e13e3","Type":"ContainerStarted","Data":"1efe23c09252f4c82f118ceb82a14b9f9f470b6a2eb0f4b9f30449b0d185550a"} Mar 18 17:42:29.538809 master-0 kubenswrapper[7553]: I0318 17:42:29.535074 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.538809 master-0 kubenswrapper[7553]: I0318 17:42:29.538070 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lf9xl" event={"ID":"59407fdf-b1e9-4992-a3c8-54b4e26f496c","Type":"ContainerStarted","Data":"77194a0abd28cd95bdde3970f20c00a81382caa38a66f5d99e5cee403a9657a8"} Mar 18 17:42:29.545046 master-0 kubenswrapper[7553]: I0318 17:42:29.541229 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2" podStartSLOduration=11.084414658 podStartE2EDuration="21.541189291s" podCreationTimestamp="2026-03-18 17:42:08 +0000 UTC" firstStartedPulling="2026-03-18 17:42:12.254243123 +0000 UTC m=+22.400077796" lastFinishedPulling="2026-03-18 17:42:22.711017746 +0000 UTC m=+32.856852429" observedRunningTime="2026-03-18 17:42:29.526836056 +0000 UTC m=+39.672670739" watchObservedRunningTime="2026-03-18 17:42:29.541189291 +0000 UTC m=+39.687023964" Mar 18 17:42:29.545046 master-0 kubenswrapper[7553]: I0318 17:42:29.541465 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" event={"ID":"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e","Type":"ContainerStarted","Data":"a1b64f60bcb1d57a34f6bca29856ee1a6dadd3b9493681f5dd98bb90b3066e3b"} Mar 18 17:42:29.547328 master-0 kubenswrapper[7553]: I0318 17:42:29.546613 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" event={"ID":"e73f2834-c56c-4cef-ac3c-2317e9a4324c","Type":"ContainerStarted","Data":"8a589501a96ed1e6f8752cc00ece99aa42162ad128546ec6cfe89722a04ec5b1"} Mar 18 17:42:29.547328 master-0 kubenswrapper[7553]: I0318 17:42:29.547091 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: 
\"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysctl-conf\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.547328 master-0 kubenswrapper[7553]: I0318 17:42:29.547139 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f48gg\" (UniqueName: \"kubernetes.io/projected/822080a5-2926-4a51-866d-86bb0b437da2-kube-api-access-f48gg\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.547328 master-0 kubenswrapper[7553]: I0318 17:42:29.547175 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-sys\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.547328 master-0 kubenswrapper[7553]: I0318 17:42:29.547232 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-kubernetes\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.547328 master-0 kubenswrapper[7553]: I0318 17:42:29.547264 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/822080a5-2926-4a51-866d-86bb0b437da2-etc-tuned\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.547328 master-0 kubenswrapper[7553]: I0318 17:42:29.547324 7553 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-systemd\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.547651 master-0 kubenswrapper[7553]: I0318 17:42:29.547363 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysctl-d\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.550154 master-0 kubenswrapper[7553]: I0318 17:42:29.549404 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-modprobe-d\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.550154 master-0 kubenswrapper[7553]: I0318 17:42:29.549463 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-lib-modules\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.550154 master-0 kubenswrapper[7553]: I0318 17:42:29.549499 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-host\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.550154 master-0 kubenswrapper[7553]: I0318 
17:42:29.549558 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysconfig\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.550154 master-0 kubenswrapper[7553]: I0318 17:42:29.549701 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-var-lib-kubelet\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.550154 master-0 kubenswrapper[7553]: I0318 17:42:29.549780 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/822080a5-2926-4a51-866d-86bb0b437da2-tmp\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.550154 master-0 kubenswrapper[7553]: I0318 17:42:29.549823 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-run\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.559221 master-0 kubenswrapper[7553]: I0318 17:42:29.559150 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" event={"ID":"d4b73dcd-592d-493e-926b-7264fb81aa8e","Type":"ContainerStarted","Data":"9886f9fed81fd35bb0594bc240625b787b49143773d349c210121ef04b4b5e77"} Mar 18 17:42:29.561144 master-0 kubenswrapper[7553]: I0318 17:42:29.559697 7553 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:29.571735 master-0 kubenswrapper[7553]: I0318 17:42:29.571655 7553 patch_prober.go:28] interesting pod/controller-manager-7f9db7db88-vbx76 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.47:8443/healthz\": dial tcp 10.128.0.47:8443: connect: connection refused" start-of-body= Mar 18 17:42:29.571943 master-0 kubenswrapper[7553]: I0318 17:42:29.571773 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" podUID="d4b73dcd-592d-493e-926b-7264fb81aa8e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.47:8443/healthz\": dial tcp 10.128.0.47:8443: connect: connection refused" Mar 18 17:42:29.578786 master-0 kubenswrapper[7553]: I0318 17:42:29.578737 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7ee0d87f-dc6e-44d7-ab20-0118116ec893","Type":"ContainerStarted","Data":"4187ef5f64921b48a294bd87cfb36d2edc300feb73916a83f2cc847619fab117"} Mar 18 17:42:29.612216 master-0 kubenswrapper[7553]: I0318 17:42:29.612109 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" event={"ID":"ce5831a6-5a8d-4cda-9299-5d86437bcab2","Type":"ContainerStarted","Data":"dd1b805aae172e18f337dd45784c075e0ad3687afa3a8879338aa90a6a42ed54"} Mar 18 17:42:29.636157 master-0 kubenswrapper[7553]: I0318 17:42:29.636099 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" event={"ID":"414430ec-af84-4826-b5db-c920c7653c7a","Type":"ContainerStarted","Data":"3ed9151d1dd1b2809008f0c24ee33b52d898df13ad4be69921038f4484d1773e"} Mar 18 17:42:29.636364 master-0 
kubenswrapper[7553]: I0318 17:42:29.636266 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" podUID="414430ec-af84-4826-b5db-c920c7653c7a" containerName="route-controller-manager" containerID="cri-o://3ed9151d1dd1b2809008f0c24ee33b52d898df13ad4be69921038f4484d1773e" gracePeriod=30 Mar 18 17:42:29.637033 master-0 kubenswrapper[7553]: I0318 17:42:29.636970 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:29.638480 master-0 kubenswrapper[7553]: I0318 17:42:29.638397 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" event={"ID":"efbcb147-d077-4749-9289-1682daccb657","Type":"ContainerStarted","Data":"2753215bec4df07a683a29fd9db1d0ae5aeba0e6f73fa6fbc662ede34576fdd9"} Mar 18 17:42:29.642407 master-0 kubenswrapper[7553]: I0318 17:42:29.642381 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" event={"ID":"d26d4515-391e-41a5-8c82-1b2b8a375662","Type":"ContainerStarted","Data":"42cf11d0de60ec502e81fb8ea6c5d36be2f208e38382f00aa22475b8b0c29e97"} Mar 18 17:42:29.642407 master-0 kubenswrapper[7553]: I0318 17:42:29.642407 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" event={"ID":"d26d4515-391e-41a5-8c82-1b2b8a375662","Type":"ContainerStarted","Data":"8381cd7a6e5c885500f3bdd0849aefb2b5f39ab2f05f498f742ce3eacc790c78"} Mar 18 17:42:29.644457 master-0 kubenswrapper[7553]: I0318 17:42:29.643789 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" 
event={"ID":"41191498-89c5-44dc-b648-dbea889c72f5","Type":"ContainerStarted","Data":"ca7a0939c8771a3524a053fbcf05a6e4e340302ea878636e59812ce8a826b33c"} Mar 18 17:42:29.648595 master-0 kubenswrapper[7553]: I0318 17:42:29.647754 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" podStartSLOduration=6.647735386 podStartE2EDuration="6.647735386s" podCreationTimestamp="2026-03-18 17:42:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:29.646135641 +0000 UTC m=+39.791970314" watchObservedRunningTime="2026-03-18 17:42:29.647735386 +0000 UTC m=+39.793570059" Mar 18 17:42:29.651383 master-0 kubenswrapper[7553]: I0318 17:42:29.651340 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysctl-conf\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.651499 master-0 kubenswrapper[7553]: I0318 17:42:29.651399 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f48gg\" (UniqueName: \"kubernetes.io/projected/822080a5-2926-4a51-866d-86bb0b437da2-kube-api-access-f48gg\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.651499 master-0 kubenswrapper[7553]: I0318 17:42:29.651437 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-sys\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.651499 master-0 kubenswrapper[7553]: I0318 
17:42:29.651485 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-kubernetes\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.651591 master-0 kubenswrapper[7553]: I0318 17:42:29.651511 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/822080a5-2926-4a51-866d-86bb0b437da2-etc-tuned\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.651591 master-0 kubenswrapper[7553]: I0318 17:42:29.651536 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-systemd\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.651591 master-0 kubenswrapper[7553]: I0318 17:42:29.651569 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysctl-d\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.651677 master-0 kubenswrapper[7553]: I0318 17:42:29.651609 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-modprobe-d\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.651677 master-0 kubenswrapper[7553]: I0318 17:42:29.651628 7553 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-lib-modules\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.651677 master-0 kubenswrapper[7553]: I0318 17:42:29.651649 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-host\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.651763 master-0 kubenswrapper[7553]: I0318 17:42:29.651685 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysconfig\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.651763 master-0 kubenswrapper[7553]: I0318 17:42:29.651741 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-var-lib-kubelet\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.651814 master-0 kubenswrapper[7553]: I0318 17:42:29.651770 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/822080a5-2926-4a51-866d-86bb0b437da2-tmp\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.651814 master-0 kubenswrapper[7553]: I0318 17:42:29.651793 7553 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"run\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-run\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.652064 master-0 kubenswrapper[7553]: I0318 17:42:29.652038 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-run\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.652479 master-0 kubenswrapper[7553]: I0318 17:42:29.652216 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-modprobe-d\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.652988 master-0 kubenswrapper[7553]: I0318 17:42:29.652958 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-sys\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.653579 master-0 kubenswrapper[7553]: I0318 17:42:29.653528 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-lib-modules\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.663666 master-0 kubenswrapper[7553]: I0318 17:42:29.655902 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-kubernetes\") pod 
\"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.663666 master-0 kubenswrapper[7553]: I0318 17:42:29.655982 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysctl-d\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.663666 master-0 kubenswrapper[7553]: I0318 17:42:29.656039 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-systemd\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.663666 master-0 kubenswrapper[7553]: I0318 17:42:29.656465 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-host\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.663666 master-0 kubenswrapper[7553]: I0318 17:42:29.656492 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-var-lib-kubelet\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.663666 master-0 kubenswrapper[7553]: I0318 17:42:29.656537 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysconfig\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " 
pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.663666 master-0 kubenswrapper[7553]: I0318 17:42:29.656981 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysctl-conf\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.668492 master-0 kubenswrapper[7553]: I0318 17:42:29.666706 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" event={"ID":"56cde2f7-1742-45d6-aa22-8270cfb424a7","Type":"ContainerStarted","Data":"0a0369f8937b75b0cf3ec39fd7868190e5e65c0761eb215ce5daab985dbfd750"} Mar 18 17:42:29.668492 master-0 kubenswrapper[7553]: I0318 17:42:29.666766 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" event={"ID":"56cde2f7-1742-45d6-aa22-8270cfb424a7","Type":"ContainerStarted","Data":"41da80af31fef99194cfa8b9345b104ba93283b541371be7f518ffdcd5945af7"} Mar 18 17:42:29.680543 master-0 kubenswrapper[7553]: I0318 17:42:29.674838 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/822080a5-2926-4a51-866d-86bb0b437da2-tmp\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.680543 master-0 kubenswrapper[7553]: I0318 17:42:29.675961 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f48gg\" (UniqueName: \"kubernetes.io/projected/822080a5-2926-4a51-866d-86bb0b437da2-kube-api-access-f48gg\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.680543 master-0 kubenswrapper[7553]: I0318 17:42:29.676673 7553 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" podStartSLOduration=23.40774028 podStartE2EDuration="29.676643092s" podCreationTimestamp="2026-03-18 17:42:00 +0000 UTC" firstStartedPulling="2026-03-18 17:42:22.686386144 +0000 UTC m=+32.832220827" lastFinishedPulling="2026-03-18 17:42:28.955288966 +0000 UTC m=+39.101123639" observedRunningTime="2026-03-18 17:42:29.673959663 +0000 UTC m=+39.819794356" watchObservedRunningTime="2026-03-18 17:42:29.676643092 +0000 UTC m=+39.822477765" Mar 18 17:42:29.688310 master-0 kubenswrapper[7553]: I0318 17:42:29.685765 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/822080a5-2926-4a51-866d-86bb0b437da2-etc-tuned\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:29.688310 master-0 kubenswrapper[7553]: I0318 17:42:29.686317 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" event={"ID":"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311","Type":"ContainerStarted","Data":"dfc93735e306184cc4596c59d2bb37e97390ba2f327b3655dd96eec7dc58139e"} Mar 18 17:42:29.923819 master-0 kubenswrapper[7553]: I0318 17:42:29.922586 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 17:42:30.096785 master-0 kubenswrapper[7553]: I0318 17:42:30.096085 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dedeb921-f1f2-4fa4-8d16-8740b1c0cd14" path="/var/lib/kubelet/pods/dedeb921-f1f2-4fa4-8d16-8740b1c0cd14/volumes" Mar 18 17:42:30.207505 master-0 kubenswrapper[7553]: I0318 17:42:30.205234 7553 patch_prober.go:28] interesting pod/route-controller-manager-cb78c4f4b-7s77b container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.34:8443/healthz\": read tcp 10.128.0.2:60830->10.128.0.34:8443: read: connection reset by peer" start-of-body= Mar 18 17:42:30.207505 master-0 kubenswrapper[7553]: I0318 17:42:30.205315 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" podUID="414430ec-af84-4826-b5db-c920c7653c7a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.34:8443/healthz\": read tcp 10.128.0.2:60830->10.128.0.34:8443: read: connection reset by peer" Mar 18 17:42:30.282636 master-0 kubenswrapper[7553]: W0318 17:42:30.281922 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod822080a5_2926_4a51_866d_86bb0b437da2.slice/crio-d9005e2315af45e5c8cea1302378b105c7c24309c1fff18522208d53df3ed1f6 WatchSource:0}: Error finding container d9005e2315af45e5c8cea1302378b105c7c24309c1fff18522208d53df3ed1f6: Status 404 returned error can't find the container with id d9005e2315af45e5c8cea1302378b105c7c24309c1fff18522208d53df3ed1f6 Mar 18 17:42:30.547670 master-0 kubenswrapper[7553]: I0318 17:42:30.546999 7553 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-cb78c4f4b-7s77b_414430ec-af84-4826-b5db-c920c7653c7a/route-controller-manager/0.log" Mar 18 17:42:30.547934 master-0 kubenswrapper[7553]: I0318 17:42:30.547728 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:30.590298 master-0 kubenswrapper[7553]: I0318 17:42:30.589219 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77"] Mar 18 17:42:30.591223 master-0 kubenswrapper[7553]: E0318 17:42:30.590736 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="414430ec-af84-4826-b5db-c920c7653c7a" containerName="route-controller-manager" Mar 18 17:42:30.591223 master-0 kubenswrapper[7553]: I0318 17:42:30.590757 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="414430ec-af84-4826-b5db-c920c7653c7a" containerName="route-controller-manager" Mar 18 17:42:30.592485 master-0 kubenswrapper[7553]: I0318 17:42:30.592454 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="414430ec-af84-4826-b5db-c920c7653c7a" containerName="route-controller-manager" Mar 18 17:42:30.593591 master-0 kubenswrapper[7553]: I0318 17:42:30.593448 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:30.602810 master-0 kubenswrapper[7553]: I0318 17:42:30.602765 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77"] Mar 18 17:42:30.682816 master-0 kubenswrapper[7553]: I0318 17:42:30.681740 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert\") pod \"414430ec-af84-4826-b5db-c920c7653c7a\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " Mar 18 17:42:30.682816 master-0 kubenswrapper[7553]: I0318 17:42:30.682189 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414430ec-af84-4826-b5db-c920c7653c7a-config\") pod \"414430ec-af84-4826-b5db-c920c7653c7a\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " Mar 18 17:42:30.682816 master-0 kubenswrapper[7553]: I0318 17:42:30.682236 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k877\" (UniqueName: \"kubernetes.io/projected/414430ec-af84-4826-b5db-c920c7653c7a-kube-api-access-9k877\") pod \"414430ec-af84-4826-b5db-c920c7653c7a\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " Mar 18 17:42:30.682816 master-0 kubenswrapper[7553]: I0318 17:42:30.682318 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/414430ec-af84-4826-b5db-c920c7653c7a-client-ca\") pod \"414430ec-af84-4826-b5db-c920c7653c7a\" (UID: \"414430ec-af84-4826-b5db-c920c7653c7a\") " Mar 18 17:42:30.682816 master-0 kubenswrapper[7553]: I0318 17:42:30.682460 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxn6x\" (UniqueName: 
\"kubernetes.io/projected/e109acbe-328f-4ff0-b665-1a822adacfc8-kube-api-access-cxn6x\") pod \"route-controller-manager-7f87dc7fd4-v8b77\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:30.682816 master-0 kubenswrapper[7553]: I0318 17:42:30.682484 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e109acbe-328f-4ff0-b665-1a822adacfc8-config\") pod \"route-controller-manager-7f87dc7fd4-v8b77\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:30.682816 master-0 kubenswrapper[7553]: I0318 17:42:30.682500 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e109acbe-328f-4ff0-b665-1a822adacfc8-serving-cert\") pod \"route-controller-manager-7f87dc7fd4-v8b77\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:30.682816 master-0 kubenswrapper[7553]: I0318 17:42:30.682537 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e109acbe-328f-4ff0-b665-1a822adacfc8-client-ca\") pod \"route-controller-manager-7f87dc7fd4-v8b77\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:30.683877 master-0 kubenswrapper[7553]: I0318 17:42:30.683741 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414430ec-af84-4826-b5db-c920c7653c7a-client-ca" (OuterVolumeSpecName: "client-ca") pod "414430ec-af84-4826-b5db-c920c7653c7a" (UID: 
"414430ec-af84-4826-b5db-c920c7653c7a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:30.683877 master-0 kubenswrapper[7553]: I0318 17:42:30.683845 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/414430ec-af84-4826-b5db-c920c7653c7a-config" (OuterVolumeSpecName: "config") pod "414430ec-af84-4826-b5db-c920c7653c7a" (UID: "414430ec-af84-4826-b5db-c920c7653c7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:30.691978 master-0 kubenswrapper[7553]: I0318 17:42:30.691921 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/414430ec-af84-4826-b5db-c920c7653c7a-kube-api-access-9k877" (OuterVolumeSpecName: "kube-api-access-9k877") pod "414430ec-af84-4826-b5db-c920c7653c7a" (UID: "414430ec-af84-4826-b5db-c920c7653c7a"). InnerVolumeSpecName "kube-api-access-9k877". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:42:30.693555 master-0 kubenswrapper[7553]: I0318 17:42:30.693501 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "414430ec-af84-4826-b5db-c920c7653c7a" (UID: "414430ec-af84-4826-b5db-c920c7653c7a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 17:42:30.712337 master-0 kubenswrapper[7553]: I0318 17:42:30.712258 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"41191498-89c5-44dc-b648-dbea889c72f5","Type":"ContainerStarted","Data":"952d444a3fc2166b6fd7ae2111af2db0a2310710ae00c917ceccc2b70b6b3ce3"} Mar 18 17:42:30.718132 master-0 kubenswrapper[7553]: I0318 17:42:30.718089 7553 generic.go:334] "Generic (PLEG): container finished" podID="30d77a7c-222e-41c7-8a98-219854aa3da2" containerID="7dca962ecd78930d6ebff8babb7c8a998598fdaf8cc19f7bde50114fc03b1127" exitCode=0 Mar 18 17:42:30.718197 master-0 kubenswrapper[7553]: I0318 17:42:30.718166 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-897b458c6-vsss9" event={"ID":"30d77a7c-222e-41c7-8a98-219854aa3da2","Type":"ContainerDied","Data":"7dca962ecd78930d6ebff8babb7c8a998598fdaf8cc19f7bde50114fc03b1127"} Mar 18 17:42:30.724964 master-0 kubenswrapper[7553]: I0318 17:42:30.724936 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-cb78c4f4b-7s77b_414430ec-af84-4826-b5db-c920c7653c7a/route-controller-manager/0.log" Mar 18 17:42:30.725015 master-0 kubenswrapper[7553]: I0318 17:42:30.724982 7553 generic.go:334] "Generic (PLEG): container finished" podID="414430ec-af84-4826-b5db-c920c7653c7a" containerID="3ed9151d1dd1b2809008f0c24ee33b52d898df13ad4be69921038f4484d1773e" exitCode=255 Mar 18 17:42:30.725143 master-0 kubenswrapper[7553]: I0318 17:42:30.725082 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" Mar 18 17:42:30.725241 master-0 kubenswrapper[7553]: I0318 17:42:30.725201 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" event={"ID":"414430ec-af84-4826-b5db-c920c7653c7a","Type":"ContainerDied","Data":"3ed9151d1dd1b2809008f0c24ee33b52d898df13ad4be69921038f4484d1773e"} Mar 18 17:42:30.725343 master-0 kubenswrapper[7553]: I0318 17:42:30.725329 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b" event={"ID":"414430ec-af84-4826-b5db-c920c7653c7a","Type":"ContainerDied","Data":"c96dd684fd83e0f8e9135640be47949f78da971f446a6ce776803ea3d9b198e7"} Mar 18 17:42:30.725730 master-0 kubenswrapper[7553]: I0318 17:42:30.725712 7553 scope.go:117] "RemoveContainer" containerID="3ed9151d1dd1b2809008f0c24ee33b52d898df13ad4be69921038f4484d1773e" Mar 18 17:42:30.738947 master-0 kubenswrapper[7553]: I0318 17:42:30.737857 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=3.737842338 podStartE2EDuration="3.737842338s" podCreationTimestamp="2026-03-18 17:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:30.735624179 +0000 UTC m=+40.881458852" watchObservedRunningTime="2026-03-18 17:42:30.737842338 +0000 UTC m=+40.883677011" Mar 18 17:42:30.744778 master-0 kubenswrapper[7553]: I0318 17:42:30.744722 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" event={"ID":"56cde2f7-1742-45d6-aa22-8270cfb424a7","Type":"ContainerStarted","Data":"9a3c783faf4f4f653f053e2f216b7497912efa5f57b792ca0a2a383ce66b1a4d"} Mar 18 17:42:30.744904 master-0 
kubenswrapper[7553]: I0318 17:42:30.744880 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:30.757329 master-0 kubenswrapper[7553]: I0318 17:42:30.757064 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-lf9xl" event={"ID":"59407fdf-b1e9-4992-a3c8-54b4e26f496c","Type":"ContainerStarted","Data":"9bbddff1908cb706b44ab66e3d879c74ea94927248587881f98380ad22aa2064"} Mar 18 17:42:30.757503 master-0 kubenswrapper[7553]: I0318 17:42:30.757361 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:30.773469 master-0 kubenswrapper[7553]: I0318 17:42:30.773446 7553 scope.go:117] "RemoveContainer" containerID="3ed9151d1dd1b2809008f0c24ee33b52d898df13ad4be69921038f4484d1773e" Mar 18 17:42:30.774319 master-0 kubenswrapper[7553]: E0318 17:42:30.774120 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ed9151d1dd1b2809008f0c24ee33b52d898df13ad4be69921038f4484d1773e\": container with ID starting with 3ed9151d1dd1b2809008f0c24ee33b52d898df13ad4be69921038f4484d1773e not found: ID does not exist" containerID="3ed9151d1dd1b2809008f0c24ee33b52d898df13ad4be69921038f4484d1773e" Mar 18 17:42:30.774428 master-0 kubenswrapper[7553]: I0318 17:42:30.774338 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ed9151d1dd1b2809008f0c24ee33b52d898df13ad4be69921038f4484d1773e"} err="failed to get container status \"3ed9151d1dd1b2809008f0c24ee33b52d898df13ad4be69921038f4484d1773e\": rpc error: code = NotFound desc = could not find container \"3ed9151d1dd1b2809008f0c24ee33b52d898df13ad4be69921038f4484d1773e\": container with ID starting with 3ed9151d1dd1b2809008f0c24ee33b52d898df13ad4be69921038f4484d1773e not found: ID does not exist" Mar 18 17:42:30.775523 master-0 
kubenswrapper[7553]: I0318 17:42:30.775503 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" event={"ID":"822080a5-2926-4a51-866d-86bb0b437da2","Type":"ContainerStarted","Data":"4ccd466ba51acf658710b942fca6bc20c07b4a58733528efe1b0471c62147322"} Mar 18 17:42:30.775641 master-0 kubenswrapper[7553]: I0318 17:42:30.775627 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" event={"ID":"822080a5-2926-4a51-866d-86bb0b437da2","Type":"ContainerStarted","Data":"d9005e2315af45e5c8cea1302378b105c7c24309c1fff18522208d53df3ed1f6"} Mar 18 17:42:30.778741 master-0 kubenswrapper[7553]: I0318 17:42:30.778688 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" event={"ID":"efbcb147-d077-4749-9289-1682daccb657","Type":"ContainerStarted","Data":"c8bf797960855c96a7dd1015a59ba17f923918f9eedf49a7f51a1e737f4065a2"} Mar 18 17:42:30.778844 master-0 kubenswrapper[7553]: I0318 17:42:30.778743 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" event={"ID":"efbcb147-d077-4749-9289-1682daccb657","Type":"ContainerStarted","Data":"b1d92bc61050e9dcfcb1bd9705c2f2b94007d572857fef98c987e76770e1ad13"} Mar 18 17:42:30.778844 master-0 kubenswrapper[7553]: I0318 17:42:30.778798 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:30.781380 master-0 kubenswrapper[7553]: I0318 17:42:30.781091 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7ee0d87f-dc6e-44d7-ab20-0118116ec893","Type":"ContainerStarted","Data":"441322c172514e8dc3f3a8770ab3b9678bd3c6294dbca14c06867f35aca91e9b"} Mar 18 17:42:30.785239 master-0 
kubenswrapper[7553]: I0318 17:42:30.784296 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxn6x\" (UniqueName: \"kubernetes.io/projected/e109acbe-328f-4ff0-b665-1a822adacfc8-kube-api-access-cxn6x\") pod \"route-controller-manager-7f87dc7fd4-v8b77\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:30.785239 master-0 kubenswrapper[7553]: I0318 17:42:30.784331 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e109acbe-328f-4ff0-b665-1a822adacfc8-serving-cert\") pod \"route-controller-manager-7f87dc7fd4-v8b77\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:30.785239 master-0 kubenswrapper[7553]: I0318 17:42:30.784430 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e109acbe-328f-4ff0-b665-1a822adacfc8-config\") pod \"route-controller-manager-7f87dc7fd4-v8b77\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:30.785239 master-0 kubenswrapper[7553]: I0318 17:42:30.784469 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e109acbe-328f-4ff0-b665-1a822adacfc8-client-ca\") pod \"route-controller-manager-7f87dc7fd4-v8b77\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:30.785239 master-0 kubenswrapper[7553]: I0318 17:42:30.784521 7553 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/414430ec-af84-4826-b5db-c920c7653c7a-serving-cert\") on 
node \"master-0\" DevicePath \"\"" Mar 18 17:42:30.785239 master-0 kubenswrapper[7553]: I0318 17:42:30.784536 7553 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/414430ec-af84-4826-b5db-c920c7653c7a-config\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:30.785239 master-0 kubenswrapper[7553]: I0318 17:42:30.784546 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k877\" (UniqueName: \"kubernetes.io/projected/414430ec-af84-4826-b5db-c920c7653c7a-kube-api-access-9k877\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:30.785239 master-0 kubenswrapper[7553]: I0318 17:42:30.784559 7553 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/414430ec-af84-4826-b5db-c920c7653c7a-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:30.785601 master-0 kubenswrapper[7553]: I0318 17:42:30.785477 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e109acbe-328f-4ff0-b665-1a822adacfc8-client-ca\") pod \"route-controller-manager-7f87dc7fd4-v8b77\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:30.786609 master-0 kubenswrapper[7553]: I0318 17:42:30.786522 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e109acbe-328f-4ff0-b665-1a822adacfc8-config\") pod \"route-controller-manager-7f87dc7fd4-v8b77\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:30.788746 master-0 kubenswrapper[7553]: I0318 17:42:30.788557 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" 
event={"ID":"d4b73dcd-592d-493e-926b-7264fb81aa8e","Type":"ContainerStarted","Data":"9140b66f8352427e422b21871259d9c1897b722209bbe42b359cb9a5fcad237f"} Mar 18 17:42:30.804308 master-0 kubenswrapper[7553]: I0318 17:42:30.803705 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:30.812817 master-0 kubenswrapper[7553]: I0318 17:42:30.812779 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e109acbe-328f-4ff0-b665-1a822adacfc8-serving-cert\") pod \"route-controller-manager-7f87dc7fd4-v8b77\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:30.813038 master-0 kubenswrapper[7553]: I0318 17:42:30.813005 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxn6x\" (UniqueName: \"kubernetes.io/projected/e109acbe-328f-4ff0-b665-1a822adacfc8-kube-api-access-cxn6x\") pod \"route-controller-manager-7f87dc7fd4-v8b77\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:30.824261 master-0 kubenswrapper[7553]: I0318 17:42:30.824127 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-lf9xl" podStartSLOduration=12.103528203 podStartE2EDuration="17.824102287s" podCreationTimestamp="2026-03-18 17:42:13 +0000 UTC" firstStartedPulling="2026-03-18 17:42:22.252266349 +0000 UTC m=+32.398101052" lastFinishedPulling="2026-03-18 17:42:27.972840463 +0000 UTC m=+38.118675136" observedRunningTime="2026-03-18 17:42:30.790675971 +0000 UTC m=+40.936510644" watchObservedRunningTime="2026-03-18 17:42:30.824102287 +0000 UTC m=+40.969936960" Mar 18 17:42:30.843460 master-0 kubenswrapper[7553]: I0318 17:42:30.843343 7553 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" podStartSLOduration=7.843260659 podStartE2EDuration="7.843260659s" podCreationTimestamp="2026-03-18 17:42:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:30.822385169 +0000 UTC m=+40.968219842" watchObservedRunningTime="2026-03-18 17:42:30.843260659 +0000 UTC m=+40.989095352" Mar 18 17:42:30.857659 master-0 kubenswrapper[7553]: I0318 17:42:30.857561 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b"] Mar 18 17:42:30.871300 master-0 kubenswrapper[7553]: I0318 17:42:30.871228 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cb78c4f4b-7s77b"] Mar 18 17:42:30.877130 master-0 kubenswrapper[7553]: I0318 17:42:30.877055 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=4.877031682 podStartE2EDuration="4.877031682s" podCreationTimestamp="2026-03-18 17:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:30.865079558 +0000 UTC m=+41.010914221" watchObservedRunningTime="2026-03-18 17:42:30.877031682 +0000 UTC m=+41.022866355" Mar 18 17:42:30.883702 master-0 kubenswrapper[7553]: I0318 17:42:30.883652 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" podStartSLOduration=7.883640337 podStartE2EDuration="7.883640337s" podCreationTimestamp="2026-03-18 17:42:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-18 17:42:30.883627407 +0000 UTC m=+41.029462070" watchObservedRunningTime="2026-03-18 17:42:30.883640337 +0000 UTC m=+41.029475000" Mar 18 17:42:30.934166 master-0 kubenswrapper[7553]: E0318 17:42:30.934006 7553 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod414430ec_af84_4826_b5db_c920c7653c7a.slice\": RecentStats: unable to find data in memory cache]" Mar 18 17:42:30.937833 master-0 kubenswrapper[7553]: I0318 17:42:30.937806 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:30.944966 master-0 kubenswrapper[7553]: I0318 17:42:30.944899 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" podStartSLOduration=1.944886525 podStartE2EDuration="1.944886525s" podCreationTimestamp="2026-03-18 17:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:30.940884668 +0000 UTC m=+41.086719361" watchObservedRunningTime="2026-03-18 17:42:30.944886525 +0000 UTC m=+41.090721198" Mar 18 17:42:31.300662 master-0 kubenswrapper[7553]: I0318 17:42:31.296641 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77"] Mar 18 17:42:31.820588 master-0 kubenswrapper[7553]: I0318 17:42:31.819868 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-897b458c6-vsss9" event={"ID":"30d77a7c-222e-41c7-8a98-219854aa3da2","Type":"ContainerStarted","Data":"8665ded2756039844781f9d6484ed753a6b375a139db5a275998a586d338b068"} Mar 18 17:42:31.820588 master-0 kubenswrapper[7553]: I0318 17:42:31.820344 7553 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-apiserver/apiserver-897b458c6-vsss9" event={"ID":"30d77a7c-222e-41c7-8a98-219854aa3da2","Type":"ContainerStarted","Data":"fa974eeb2ec6be1295ce9fcf8d22342e89753f396fed9a5ed430a7824e928cab"} Mar 18 17:42:31.831101 master-0 kubenswrapper[7553]: I0318 17:42:31.830168 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" event={"ID":"e109acbe-328f-4ff0-b665-1a822adacfc8","Type":"ContainerStarted","Data":"a0c2c1a7601a92929d602a239379df7d68994f2dcb8fa7e1dbbd4b3cdc7f7136"} Mar 18 17:42:31.831101 master-0 kubenswrapper[7553]: I0318 17:42:31.830222 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" event={"ID":"e109acbe-328f-4ff0-b665-1a822adacfc8","Type":"ContainerStarted","Data":"a355a71517e7d66c5dd4ce62f576d95f5192f9fda3f5afa9bf334abee5162c41"} Mar 18 17:42:31.831101 master-0 kubenswrapper[7553]: I0318 17:42:31.831068 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:31.844342 master-0 kubenswrapper[7553]: I0318 17:42:31.843543 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-897b458c6-vsss9" podStartSLOduration=15.137283835 podStartE2EDuration="21.843518043s" podCreationTimestamp="2026-03-18 17:42:10 +0000 UTC" firstStartedPulling="2026-03-18 17:42:22.245748626 +0000 UTC m=+32.391583329" lastFinishedPulling="2026-03-18 17:42:28.951982874 +0000 UTC m=+39.097817537" observedRunningTime="2026-03-18 17:42:31.840103508 +0000 UTC m=+41.985938181" watchObservedRunningTime="2026-03-18 17:42:31.843518043 +0000 UTC m=+41.989352706" Mar 18 17:42:31.858726 master-0 kubenswrapper[7553]: I0318 17:42:31.858468 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" podStartSLOduration=8.858447561 podStartE2EDuration="8.858447561s" podCreationTimestamp="2026-03-18 17:42:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:31.855598419 +0000 UTC m=+42.001433092" watchObservedRunningTime="2026-03-18 17:42:31.858447561 +0000 UTC m=+42.004282234" Mar 18 17:42:31.980711 master-0 kubenswrapper[7553]: I0318 17:42:31.980517 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:32.073856 master-0 kubenswrapper[7553]: I0318 17:42:32.069200 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="414430ec-af84-4826-b5db-c920c7653c7a" path="/var/lib/kubelet/pods/414430ec-af84-4826-b5db-c920c7653c7a/volumes" Mar 18 17:42:34.213878 master-0 kubenswrapper[7553]: I0318 17:42:34.213810 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:42:34.377161 master-0 kubenswrapper[7553]: I0318 17:42:34.377023 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 17:42:34.379397 master-0 kubenswrapper[7553]: I0318 17:42:34.379342 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="7ee0d87f-dc6e-44d7-ab20-0118116ec893" containerName="installer" containerID="cri-o://441322c172514e8dc3f3a8770ab3b9678bd3c6294dbca14c06867f35aca91e9b" gracePeriod=30 Mar 18 17:42:34.409591 master-0 kubenswrapper[7553]: I0318 17:42:34.402898 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 17:42:34.429094 master-0 
kubenswrapper[7553]: I0318 17:42:34.429011 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 17:42:34.429775 master-0 kubenswrapper[7553]: I0318 17:42:34.429592 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 17:42:34.441847 master-0 kubenswrapper[7553]: I0318 17:42:34.440450 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 17:42:34.532568 master-0 kubenswrapper[7553]: I0318 17:42:34.532509 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4f688df1-3bfc-412e-b311-f9f761a0b00a-var-lock\") pod \"installer-1-master-0\" (UID: \"4f688df1-3bfc-412e-b311-f9f761a0b00a\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 17:42:34.532568 master-0 kubenswrapper[7553]: I0318 17:42:34.532575 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f688df1-3bfc-412e-b311-f9f761a0b00a-kube-api-access\") pod \"installer-1-master-0\" (UID: \"4f688df1-3bfc-412e-b311-f9f761a0b00a\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 17:42:34.532940 master-0 kubenswrapper[7553]: I0318 17:42:34.532742 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f688df1-3bfc-412e-b311-f9f761a0b00a-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"4f688df1-3bfc-412e-b311-f9f761a0b00a\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 17:42:34.634669 master-0 kubenswrapper[7553]: I0318 17:42:34.634600 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/4f688df1-3bfc-412e-b311-f9f761a0b00a-var-lock\") pod \"installer-1-master-0\" (UID: \"4f688df1-3bfc-412e-b311-f9f761a0b00a\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 17:42:34.634669 master-0 kubenswrapper[7553]: I0318 17:42:34.634679 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f688df1-3bfc-412e-b311-f9f761a0b00a-kube-api-access\") pod \"installer-1-master-0\" (UID: \"4f688df1-3bfc-412e-b311-f9f761a0b00a\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 17:42:34.635029 master-0 kubenswrapper[7553]: I0318 17:42:34.634980 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4f688df1-3bfc-412e-b311-f9f761a0b00a-var-lock\") pod \"installer-1-master-0\" (UID: \"4f688df1-3bfc-412e-b311-f9f761a0b00a\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 17:42:34.635069 master-0 kubenswrapper[7553]: I0318 17:42:34.635039 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f688df1-3bfc-412e-b311-f9f761a0b00a-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"4f688df1-3bfc-412e-b311-f9f761a0b00a\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 17:42:34.635289 master-0 kubenswrapper[7553]: I0318 17:42:34.635226 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f688df1-3bfc-412e-b311-f9f761a0b00a-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"4f688df1-3bfc-412e-b311-f9f761a0b00a\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 17:42:34.659146 master-0 kubenswrapper[7553]: I0318 17:42:34.659094 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f688df1-3bfc-412e-b311-f9f761a0b00a-kube-api-access\") pod \"installer-1-master-0\" (UID: \"4f688df1-3bfc-412e-b311-f9f761a0b00a\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 17:42:34.793947 master-0 kubenswrapper[7553]: I0318 17:42:34.793788 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 17:42:35.714084 master-0 kubenswrapper[7553]: I0318 17:42:35.714018 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f9db7db88-vbx76"] Mar 18 17:42:35.714702 master-0 kubenswrapper[7553]: I0318 17:42:35.714423 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" podUID="d4b73dcd-592d-493e-926b-7264fb81aa8e" containerName="controller-manager" containerID="cri-o://9140b66f8352427e422b21871259d9c1897b722209bbe42b359cb9a5fcad237f" gracePeriod=30 Mar 18 17:42:35.780619 master-0 kubenswrapper[7553]: I0318 17:42:35.779645 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77"] Mar 18 17:42:35.780619 master-0 kubenswrapper[7553]: I0318 17:42:35.779909 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" podUID="e109acbe-328f-4ff0-b665-1a822adacfc8" containerName="route-controller-manager" containerID="cri-o://a0c2c1a7601a92929d602a239379df7d68994f2dcb8fa7e1dbbd4b3cdc7f7136" gracePeriod=30 Mar 18 17:42:36.569181 master-0 kubenswrapper[7553]: I0318 17:42:36.569074 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 18 17:42:36.569819 master-0 kubenswrapper[7553]: I0318 17:42:36.569782 7553 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 17:42:36.575169 master-0 kubenswrapper[7553]: I0318 17:42:36.575105 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:36.575667 master-0 kubenswrapper[7553]: I0318 17:42:36.575621 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:36.582453 master-0 kubenswrapper[7553]: I0318 17:42:36.582398 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:36.594333 master-0 kubenswrapper[7553]: I0318 17:42:36.593895 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 18 17:42:36.611886 master-0 kubenswrapper[7553]: I0318 17:42:36.611829 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a709ef9-91c0-4193-acb4-0594d02f554c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"1a709ef9-91c0-4193-acb4-0594d02f554c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 17:42:36.612101 master-0 kubenswrapper[7553]: I0318 17:42:36.611948 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1a709ef9-91c0-4193-acb4-0594d02f554c-var-lock\") pod \"installer-3-master-0\" (UID: \"1a709ef9-91c0-4193-acb4-0594d02f554c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 17:42:36.612101 master-0 kubenswrapper[7553]: I0318 17:42:36.612082 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a709ef9-91c0-4193-acb4-0594d02f554c-kube-api-access\") pod 
\"installer-3-master-0\" (UID: \"1a709ef9-91c0-4193-acb4-0594d02f554c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 17:42:36.713889 master-0 kubenswrapper[7553]: I0318 17:42:36.713742 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a709ef9-91c0-4193-acb4-0594d02f554c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"1a709ef9-91c0-4193-acb4-0594d02f554c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 17:42:36.714222 master-0 kubenswrapper[7553]: I0318 17:42:36.713951 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1a709ef9-91c0-4193-acb4-0594d02f554c-var-lock\") pod \"installer-3-master-0\" (UID: \"1a709ef9-91c0-4193-acb4-0594d02f554c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 17:42:36.714222 master-0 kubenswrapper[7553]: I0318 17:42:36.714018 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a709ef9-91c0-4193-acb4-0594d02f554c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"1a709ef9-91c0-4193-acb4-0594d02f554c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 17:42:36.714222 master-0 kubenswrapper[7553]: I0318 17:42:36.713941 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a709ef9-91c0-4193-acb4-0594d02f554c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"1a709ef9-91c0-4193-acb4-0594d02f554c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 17:42:36.714222 master-0 kubenswrapper[7553]: I0318 17:42:36.714075 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1a709ef9-91c0-4193-acb4-0594d02f554c-var-lock\") pod \"installer-3-master-0\" (UID: 
\"1a709ef9-91c0-4193-acb4-0594d02f554c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 17:42:36.735901 master-0 kubenswrapper[7553]: I0318 17:42:36.735751 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a709ef9-91c0-4193-acb4-0594d02f554c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"1a709ef9-91c0-4193-acb4-0594d02f554c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 17:42:36.875117 master-0 kubenswrapper[7553]: I0318 17:42:36.875063 7553 generic.go:334] "Generic (PLEG): container finished" podID="d4b73dcd-592d-493e-926b-7264fb81aa8e" containerID="9140b66f8352427e422b21871259d9c1897b722209bbe42b359cb9a5fcad237f" exitCode=0 Mar 18 17:42:36.875385 master-0 kubenswrapper[7553]: I0318 17:42:36.875139 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" event={"ID":"d4b73dcd-592d-493e-926b-7264fb81aa8e","Type":"ContainerDied","Data":"9140b66f8352427e422b21871259d9c1897b722209bbe42b359cb9a5fcad237f"} Mar 18 17:42:36.877921 master-0 kubenswrapper[7553]: I0318 17:42:36.877897 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_7ee0d87f-dc6e-44d7-ab20-0118116ec893/installer/0.log" Mar 18 17:42:36.878007 master-0 kubenswrapper[7553]: I0318 17:42:36.877944 7553 generic.go:334] "Generic (PLEG): container finished" podID="7ee0d87f-dc6e-44d7-ab20-0118116ec893" containerID="441322c172514e8dc3f3a8770ab3b9678bd3c6294dbca14c06867f35aca91e9b" exitCode=1 Mar 18 17:42:36.878007 master-0 kubenswrapper[7553]: I0318 17:42:36.878005 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7ee0d87f-dc6e-44d7-ab20-0118116ec893","Type":"ContainerDied","Data":"441322c172514e8dc3f3a8770ab3b9678bd3c6294dbca14c06867f35aca91e9b"} Mar 18 17:42:36.880400 master-0 kubenswrapper[7553]: 
I0318 17:42:36.880355 7553 generic.go:334] "Generic (PLEG): container finished" podID="e109acbe-328f-4ff0-b665-1a822adacfc8" containerID="a0c2c1a7601a92929d602a239379df7d68994f2dcb8fa7e1dbbd4b3cdc7f7136" exitCode=0 Mar 18 17:42:36.880538 master-0 kubenswrapper[7553]: I0318 17:42:36.880459 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" event={"ID":"e109acbe-328f-4ff0-b665-1a822adacfc8","Type":"ContainerDied","Data":"a0c2c1a7601a92929d602a239379df7d68994f2dcb8fa7e1dbbd4b3cdc7f7136"} Mar 18 17:42:36.885932 master-0 kubenswrapper[7553]: I0318 17:42:36.885544 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 17:42:36.899183 master-0 kubenswrapper[7553]: I0318 17:42:36.899142 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 17:42:37.810691 master-0 kubenswrapper[7553]: I0318 17:42:37.810628 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_7ee0d87f-dc6e-44d7-ab20-0118116ec893/installer/0.log" Mar 18 17:42:37.811141 master-0 kubenswrapper[7553]: I0318 17:42:37.810731 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 17:42:37.851925 master-0 kubenswrapper[7553]: I0318 17:42:37.851511 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:37.910831 master-0 kubenswrapper[7553]: I0318 17:42:37.909620 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_7ee0d87f-dc6e-44d7-ab20-0118116ec893/installer/0.log" Mar 18 17:42:37.910831 master-0 kubenswrapper[7553]: I0318 17:42:37.909927 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7ee0d87f-dc6e-44d7-ab20-0118116ec893","Type":"ContainerDied","Data":"4187ef5f64921b48a294bd87cfb36d2edc300feb73916a83f2cc847619fab117"} Mar 18 17:42:37.910831 master-0 kubenswrapper[7553]: I0318 17:42:37.909979 7553 scope.go:117] "RemoveContainer" containerID="441322c172514e8dc3f3a8770ab3b9678bd3c6294dbca14c06867f35aca91e9b" Mar 18 17:42:37.913885 master-0 kubenswrapper[7553]: I0318 17:42:37.913841 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 17:42:37.933817 master-0 kubenswrapper[7553]: I0318 17:42:37.933782 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ee0d87f-dc6e-44d7-ab20-0118116ec893-kube-api-access\") pod \"7ee0d87f-dc6e-44d7-ab20-0118116ec893\" (UID: \"7ee0d87f-dc6e-44d7-ab20-0118116ec893\") " Mar 18 17:42:37.934057 master-0 kubenswrapper[7553]: I0318 17:42:37.933852 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7ee0d87f-dc6e-44d7-ab20-0118116ec893-var-lock\") pod \"7ee0d87f-dc6e-44d7-ab20-0118116ec893\" (UID: \"7ee0d87f-dc6e-44d7-ab20-0118116ec893\") " Mar 18 17:42:37.934057 master-0 kubenswrapper[7553]: I0318 17:42:37.933878 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7ee0d87f-dc6e-44d7-ab20-0118116ec893-kubelet-dir\") pod \"7ee0d87f-dc6e-44d7-ab20-0118116ec893\" (UID: \"7ee0d87f-dc6e-44d7-ab20-0118116ec893\") " Mar 18 17:42:37.934193 master-0 kubenswrapper[7553]: I0318 17:42:37.934177 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ee0d87f-dc6e-44d7-ab20-0118116ec893-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7ee0d87f-dc6e-44d7-ab20-0118116ec893" (UID: "7ee0d87f-dc6e-44d7-ab20-0118116ec893"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:42:37.934745 master-0 kubenswrapper[7553]: I0318 17:42:37.934689 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ee0d87f-dc6e-44d7-ab20-0118116ec893-var-lock" (OuterVolumeSpecName: "var-lock") pod "7ee0d87f-dc6e-44d7-ab20-0118116ec893" (UID: "7ee0d87f-dc6e-44d7-ab20-0118116ec893"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:42:37.934796 master-0 kubenswrapper[7553]: I0318 17:42:37.934735 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" event={"ID":"d4b73dcd-592d-493e-926b-7264fb81aa8e","Type":"ContainerDied","Data":"9886f9fed81fd35bb0594bc240625b787b49143773d349c210121ef04b4b5e77"} Mar 18 17:42:37.934863 master-0 kubenswrapper[7553]: I0318 17:42:37.934821 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f9db7db88-vbx76" Mar 18 17:42:37.939792 master-0 kubenswrapper[7553]: I0318 17:42:37.939749 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee0d87f-dc6e-44d7-ab20-0118116ec893-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7ee0d87f-dc6e-44d7-ab20-0118116ec893" (UID: "7ee0d87f-dc6e-44d7-ab20-0118116ec893"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:42:37.964634 master-0 kubenswrapper[7553]: I0318 17:42:37.964592 7553 scope.go:117] "RemoveContainer" containerID="9140b66f8352427e422b21871259d9c1897b722209bbe42b359cb9a5fcad237f" Mar 18 17:42:38.035496 master-0 kubenswrapper[7553]: I0318 17:42:38.035441 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58cnc\" (UniqueName: \"kubernetes.io/projected/d4b73dcd-592d-493e-926b-7264fb81aa8e-kube-api-access-58cnc\") pod \"d4b73dcd-592d-493e-926b-7264fb81aa8e\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " Mar 18 17:42:38.035665 master-0 kubenswrapper[7553]: I0318 17:42:38.035535 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-client-ca\") pod \"d4b73dcd-592d-493e-926b-7264fb81aa8e\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " Mar 18 17:42:38.035665 master-0 kubenswrapper[7553]: I0318 17:42:38.035561 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4b73dcd-592d-493e-926b-7264fb81aa8e-serving-cert\") pod \"d4b73dcd-592d-493e-926b-7264fb81aa8e\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " Mar 18 17:42:38.035665 master-0 kubenswrapper[7553]: I0318 17:42:38.035601 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-config\") pod \"d4b73dcd-592d-493e-926b-7264fb81aa8e\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " Mar 18 17:42:38.035665 master-0 kubenswrapper[7553]: I0318 17:42:38.035639 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-proxy-ca-bundles\") pod \"d4b73dcd-592d-493e-926b-7264fb81aa8e\" (UID: \"d4b73dcd-592d-493e-926b-7264fb81aa8e\") " Mar 18 17:42:38.035908 master-0 kubenswrapper[7553]: I0318 17:42:38.035875 7553 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7ee0d87f-dc6e-44d7-ab20-0118116ec893-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:38.035908 master-0 kubenswrapper[7553]: I0318 17:42:38.035897 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ee0d87f-dc6e-44d7-ab20-0118116ec893-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:38.035908 master-0 kubenswrapper[7553]: I0318 17:42:38.035907 7553 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7ee0d87f-dc6e-44d7-ab20-0118116ec893-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:38.036415 master-0 kubenswrapper[7553]: I0318 17:42:38.036360 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-client-ca" (OuterVolumeSpecName: "client-ca") pod "d4b73dcd-592d-493e-926b-7264fb81aa8e" (UID: "d4b73dcd-592d-493e-926b-7264fb81aa8e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:38.036561 master-0 kubenswrapper[7553]: I0318 17:42:38.036526 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d4b73dcd-592d-493e-926b-7264fb81aa8e" (UID: "d4b73dcd-592d-493e-926b-7264fb81aa8e"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:38.037894 master-0 kubenswrapper[7553]: I0318 17:42:38.037842 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-config" (OuterVolumeSpecName: "config") pod "d4b73dcd-592d-493e-926b-7264fb81aa8e" (UID: "d4b73dcd-592d-493e-926b-7264fb81aa8e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:38.041975 master-0 kubenswrapper[7553]: I0318 17:42:38.041940 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b73dcd-592d-493e-926b-7264fb81aa8e-kube-api-access-58cnc" (OuterVolumeSpecName: "kube-api-access-58cnc") pod "d4b73dcd-592d-493e-926b-7264fb81aa8e" (UID: "d4b73dcd-592d-493e-926b-7264fb81aa8e"). InnerVolumeSpecName "kube-api-access-58cnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:42:38.042192 master-0 kubenswrapper[7553]: I0318 17:42:38.042151 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b73dcd-592d-493e-926b-7264fb81aa8e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d4b73dcd-592d-493e-926b-7264fb81aa8e" (UID: "d4b73dcd-592d-493e-926b-7264fb81aa8e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 17:42:38.113064 master-0 kubenswrapper[7553]: I0318 17:42:38.113030 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:38.143804 master-0 kubenswrapper[7553]: I0318 17:42:38.143758 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxn6x\" (UniqueName: \"kubernetes.io/projected/e109acbe-328f-4ff0-b665-1a822adacfc8-kube-api-access-cxn6x\") pod \"e109acbe-328f-4ff0-b665-1a822adacfc8\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " Mar 18 17:42:38.143894 master-0 kubenswrapper[7553]: I0318 17:42:38.143842 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e109acbe-328f-4ff0-b665-1a822adacfc8-client-ca\") pod \"e109acbe-328f-4ff0-b665-1a822adacfc8\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " Mar 18 17:42:38.143945 master-0 kubenswrapper[7553]: I0318 17:42:38.143904 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e109acbe-328f-4ff0-b665-1a822adacfc8-serving-cert\") pod \"e109acbe-328f-4ff0-b665-1a822adacfc8\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " Mar 18 17:42:38.144516 master-0 kubenswrapper[7553]: I0318 17:42:38.144501 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58cnc\" (UniqueName: \"kubernetes.io/projected/d4b73dcd-592d-493e-926b-7264fb81aa8e-kube-api-access-58cnc\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:38.144516 master-0 kubenswrapper[7553]: I0318 17:42:38.144516 7553 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:38.144643 master-0 kubenswrapper[7553]: I0318 17:42:38.144525 7553 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d4b73dcd-592d-493e-926b-7264fb81aa8e-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:38.144643 master-0 kubenswrapper[7553]: I0318 17:42:38.144535 7553 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-config\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:38.144643 master-0 kubenswrapper[7553]: I0318 17:42:38.144543 7553 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4b73dcd-592d-493e-926b-7264fb81aa8e-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:38.147226 master-0 kubenswrapper[7553]: I0318 17:42:38.147191 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e109acbe-328f-4ff0-b665-1a822adacfc8-client-ca" (OuterVolumeSpecName: "client-ca") pod "e109acbe-328f-4ff0-b665-1a822adacfc8" (UID: "e109acbe-328f-4ff0-b665-1a822adacfc8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:38.164047 master-0 kubenswrapper[7553]: I0318 17:42:38.162957 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e109acbe-328f-4ff0-b665-1a822adacfc8-kube-api-access-cxn6x" (OuterVolumeSpecName: "kube-api-access-cxn6x") pod "e109acbe-328f-4ff0-b665-1a822adacfc8" (UID: "e109acbe-328f-4ff0-b665-1a822adacfc8"). InnerVolumeSpecName "kube-api-access-cxn6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:42:38.173691 master-0 kubenswrapper[7553]: I0318 17:42:38.173622 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e109acbe-328f-4ff0-b665-1a822adacfc8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e109acbe-328f-4ff0-b665-1a822adacfc8" (UID: "e109acbe-328f-4ff0-b665-1a822adacfc8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 17:42:38.234052 master-0 kubenswrapper[7553]: I0318 17:42:38.234015 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 17:42:38.241917 master-0 kubenswrapper[7553]: I0318 17:42:38.241877 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 17:42:38.246204 master-0 kubenswrapper[7553]: I0318 17:42:38.245346 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e109acbe-328f-4ff0-b665-1a822adacfc8-config\") pod \"e109acbe-328f-4ff0-b665-1a822adacfc8\" (UID: \"e109acbe-328f-4ff0-b665-1a822adacfc8\") " Mar 18 17:42:38.246204 master-0 kubenswrapper[7553]: I0318 17:42:38.245708 7553 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e109acbe-328f-4ff0-b665-1a822adacfc8-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:38.246204 master-0 kubenswrapper[7553]: I0318 17:42:38.245723 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxn6x\" (UniqueName: \"kubernetes.io/projected/e109acbe-328f-4ff0-b665-1a822adacfc8-kube-api-access-cxn6x\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:38.246204 master-0 kubenswrapper[7553]: I0318 17:42:38.245734 7553 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e109acbe-328f-4ff0-b665-1a822adacfc8-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:38.246412 master-0 kubenswrapper[7553]: I0318 17:42:38.246240 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e109acbe-328f-4ff0-b665-1a822adacfc8-config" (OuterVolumeSpecName: "config") pod "e109acbe-328f-4ff0-b665-1a822adacfc8" (UID: "e109acbe-328f-4ff0-b665-1a822adacfc8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:38.264576 master-0 kubenswrapper[7553]: I0318 17:42:38.264532 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f9db7db88-vbx76"] Mar 18 17:42:38.265915 master-0 kubenswrapper[7553]: I0318 17:42:38.265877 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f9db7db88-vbx76"] Mar 18 17:42:38.300601 master-0 kubenswrapper[7553]: I0318 17:42:38.299549 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 18 17:42:38.346816 master-0 kubenswrapper[7553]: I0318 17:42:38.346777 7553 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e109acbe-328f-4ff0-b665-1a822adacfc8-config\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:38.362939 master-0 kubenswrapper[7553]: I0318 17:42:38.362798 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 17:42:38.960824 master-0 kubenswrapper[7553]: I0318 17:42:38.960744 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" event={"ID":"43fab0f2-5cfd-4b5e-a632-728fd5b960fd","Type":"ContainerStarted","Data":"e9d865c621673d95e24957da6c5efc56f4b4cde9d2216c676659bdbab854d23a"} Mar 18 17:42:38.963369 master-0 kubenswrapper[7553]: I0318 17:42:38.963283 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" event={"ID":"ce5831a6-5a8d-4cda-9299-5d86437bcab2","Type":"ContainerStarted","Data":"c7f5d502541807602a24d2f39710701583fd6aae06267e2b4ee473df7bbfd13e"} Mar 18 17:42:38.963450 master-0 kubenswrapper[7553]: I0318 17:42:38.963392 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:42:38.970550 master-0 kubenswrapper[7553]: I0318 17:42:38.969333 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" event={"ID":"e109acbe-328f-4ff0-b665-1a822adacfc8","Type":"ContainerDied","Data":"a355a71517e7d66c5dd4ce62f576d95f5192f9fda3f5afa9bf334abee5162c41"} Mar 18 17:42:38.970550 master-0 kubenswrapper[7553]: I0318 17:42:38.969388 7553 scope.go:117] "RemoveContainer" containerID="a0c2c1a7601a92929d602a239379df7d68994f2dcb8fa7e1dbbd4b3cdc7f7136" Mar 18 17:42:38.970550 master-0 kubenswrapper[7553]: I0318 17:42:38.969343 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77" Mar 18 17:42:38.970550 master-0 kubenswrapper[7553]: I0318 17:42:38.970174 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:42:38.973818 master-0 kubenswrapper[7553]: I0318 17:42:38.973758 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mfn52" event={"ID":"5a4f94f3-d63a-4869-b723-ae9637610b4b","Type":"ContainerStarted","Data":"78a923a667127b1286dea35f21f78606a0571268603a959819d3e2b7d9228a74"} Mar 18 17:42:38.978361 master-0 kubenswrapper[7553]: I0318 17:42:38.978268 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" event={"ID":"37b3753f-bf4f-4a9e-a4a8-d58296bada79","Type":"ContainerStarted","Data":"e229bdd7fc0be3132bf6c41375784bfa193044dc824c60fd13a2faab5acbb534"} Mar 18 17:42:38.978361 master-0 kubenswrapper[7553]: I0318 17:42:38.978356 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" 
event={"ID":"37b3753f-bf4f-4a9e-a4a8-d58296bada79","Type":"ContainerStarted","Data":"45e3f6969d17c505bf035fc0bb6cc383a03ebc121f30574b4362e30470bf65e0"} Mar 18 17:42:38.981237 master-0 kubenswrapper[7553]: I0318 17:42:38.981195 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" event={"ID":"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311","Type":"ContainerStarted","Data":"71885a6f2800c54f7ad69e938d01138ac94f5d17771a82aa2346c42f3c864d99"} Mar 18 17:42:38.983339 master-0 kubenswrapper[7553]: I0318 17:42:38.983259 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" event={"ID":"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e","Type":"ContainerStarted","Data":"44bf631b967a6a5c4f33c650ce7e77866fd0f758bbaa4aaabffd566bdac21bf2"} Mar 18 17:42:39.132202 master-0 kubenswrapper[7553]: I0318 17:42:39.132145 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77"] Mar 18 17:42:39.145399 master-0 kubenswrapper[7553]: I0318 17:42:39.144711 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f87dc7fd4-v8b77"] Mar 18 17:42:39.477461 master-0 kubenswrapper[7553]: I0318 17:42:39.477417 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lf9xl" Mar 18 17:42:40.063685 master-0 kubenswrapper[7553]: I0318 17:42:40.063618 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ee0d87f-dc6e-44d7-ab20-0118116ec893" path="/var/lib/kubelet/pods/7ee0d87f-dc6e-44d7-ab20-0118116ec893/volumes" Mar 18 17:42:40.064308 master-0 kubenswrapper[7553]: I0318 17:42:40.064143 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4b73dcd-592d-493e-926b-7264fb81aa8e" path="/var/lib/kubelet/pods/d4b73dcd-592d-493e-926b-7264fb81aa8e/volumes" Mar 
18 17:42:40.064763 master-0 kubenswrapper[7553]: I0318 17:42:40.064732 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e109acbe-328f-4ff0-b665-1a822adacfc8" path="/var/lib/kubelet/pods/e109acbe-328f-4ff0-b665-1a822adacfc8/volumes" Mar 18 17:42:40.724126 master-0 kubenswrapper[7553]: I0318 17:42:40.724025 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f5755b457-f4cbl"] Mar 18 17:42:40.724989 master-0 kubenswrapper[7553]: E0318 17:42:40.724333 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee0d87f-dc6e-44d7-ab20-0118116ec893" containerName="installer" Mar 18 17:42:40.724989 master-0 kubenswrapper[7553]: I0318 17:42:40.724354 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee0d87f-dc6e-44d7-ab20-0118116ec893" containerName="installer" Mar 18 17:42:40.724989 master-0 kubenswrapper[7553]: E0318 17:42:40.724370 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b73dcd-592d-493e-926b-7264fb81aa8e" containerName="controller-manager" Mar 18 17:42:40.724989 master-0 kubenswrapper[7553]: I0318 17:42:40.724378 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b73dcd-592d-493e-926b-7264fb81aa8e" containerName="controller-manager" Mar 18 17:42:40.724989 master-0 kubenswrapper[7553]: E0318 17:42:40.724393 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e109acbe-328f-4ff0-b665-1a822adacfc8" containerName="route-controller-manager" Mar 18 17:42:40.724989 master-0 kubenswrapper[7553]: I0318 17:42:40.724403 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="e109acbe-328f-4ff0-b665-1a822adacfc8" containerName="route-controller-manager" Mar 18 17:42:40.724989 master-0 kubenswrapper[7553]: I0318 17:42:40.724491 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee0d87f-dc6e-44d7-ab20-0118116ec893" containerName="installer" Mar 18 17:42:40.724989 master-0 kubenswrapper[7553]: I0318 
17:42:40.724501 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b73dcd-592d-493e-926b-7264fb81aa8e" containerName="controller-manager" Mar 18 17:42:40.724989 master-0 kubenswrapper[7553]: I0318 17:42:40.724512 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="e109acbe-328f-4ff0-b665-1a822adacfc8" containerName="route-controller-manager" Mar 18 17:42:40.724989 master-0 kubenswrapper[7553]: I0318 17:42:40.724987 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:40.731353 master-0 kubenswrapper[7553]: I0318 17:42:40.731255 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 17:42:40.732350 master-0 kubenswrapper[7553]: I0318 17:42:40.732196 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 17:42:40.732350 master-0 kubenswrapper[7553]: I0318 17:42:40.732355 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 17:42:40.734039 master-0 kubenswrapper[7553]: I0318 17:42:40.732641 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 17:42:40.734039 master-0 kubenswrapper[7553]: I0318 17:42:40.732981 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 17:42:40.743841 master-0 kubenswrapper[7553]: I0318 17:42:40.743796 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 17:42:40.745597 master-0 kubenswrapper[7553]: I0318 17:42:40.743527 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw"] Mar 18 17:42:40.746376 
master-0 kubenswrapper[7553]: I0318 17:42:40.746314 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw"] Mar 18 17:42:40.746514 master-0 kubenswrapper[7553]: I0318 17:42:40.746437 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:40.748735 master-0 kubenswrapper[7553]: I0318 17:42:40.748704 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 17:42:40.749403 master-0 kubenswrapper[7553]: I0318 17:42:40.749360 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 17:42:40.750747 master-0 kubenswrapper[7553]: I0318 17:42:40.750696 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 17:42:40.751024 master-0 kubenswrapper[7553]: I0318 17:42:40.750996 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 17:42:40.751309 master-0 kubenswrapper[7553]: I0318 17:42:40.751270 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 17:42:40.751877 master-0 kubenswrapper[7553]: I0318 17:42:40.751802 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5755b457-f4cbl"] Mar 18 17:42:40.889312 master-0 kubenswrapper[7553]: I0318 17:42:40.889204 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-client-ca\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " 
pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:40.889312 master-0 kubenswrapper[7553]: I0318 17:42:40.889304 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-config\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:40.889658 master-0 kubenswrapper[7553]: I0318 17:42:40.889337 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1db0a246-ca43-4e7c-b09e-e80218ae99b1-serving-cert\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:40.889658 master-0 kubenswrapper[7553]: I0318 17:42:40.889468 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx596\" (UniqueName: \"kubernetes.io/projected/253ec853-f637-4aa4-8e8e-eb655dfccccb-kube-api-access-cx596\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:40.889658 master-0 kubenswrapper[7553]: I0318 17:42:40.889501 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-proxy-ca-bundles\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:40.889658 master-0 kubenswrapper[7553]: I0318 17:42:40.889522 7553 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/253ec853-f637-4aa4-8e8e-eb655dfccccb-serving-cert\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:40.889658 master-0 kubenswrapper[7553]: I0318 17:42:40.889541 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-client-ca\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:40.889807 master-0 kubenswrapper[7553]: I0318 17:42:40.889636 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9g8f\" (UniqueName: \"kubernetes.io/projected/1db0a246-ca43-4e7c-b09e-e80218ae99b1-kube-api-access-n9g8f\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:40.889807 master-0 kubenswrapper[7553]: I0318 17:42:40.889754 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-config\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:40.990681 master-0 kubenswrapper[7553]: I0318 17:42:40.990554 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-client-ca\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:40.990681 master-0 kubenswrapper[7553]: I0318 17:42:40.990656 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9g8f\" (UniqueName: \"kubernetes.io/projected/1db0a246-ca43-4e7c-b09e-e80218ae99b1-kube-api-access-n9g8f\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:40.990681 master-0 kubenswrapper[7553]: I0318 17:42:40.990688 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-config\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:40.991072 master-0 kubenswrapper[7553]: I0318 17:42:40.990716 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-client-ca\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:40.991072 master-0 kubenswrapper[7553]: I0318 17:42:40.990756 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-config\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 
17:42:40.991072 master-0 kubenswrapper[7553]: I0318 17:42:40.990782 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1db0a246-ca43-4e7c-b09e-e80218ae99b1-serving-cert\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:40.991072 master-0 kubenswrapper[7553]: I0318 17:42:40.990834 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx596\" (UniqueName: \"kubernetes.io/projected/253ec853-f637-4aa4-8e8e-eb655dfccccb-kube-api-access-cx596\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:40.991072 master-0 kubenswrapper[7553]: I0318 17:42:40.990873 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-proxy-ca-bundles\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:40.991072 master-0 kubenswrapper[7553]: I0318 17:42:40.990896 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/253ec853-f637-4aa4-8e8e-eb655dfccccb-serving-cert\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:40.993242 master-0 kubenswrapper[7553]: I0318 17:42:40.993188 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-config\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:41.006367 master-0 kubenswrapper[7553]: I0318 17:42:40.993416 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-client-ca\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:41.006367 master-0 kubenswrapper[7553]: I0318 17:42:40.994678 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-proxy-ca-bundles\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:41.006367 master-0 kubenswrapper[7553]: I0318 17:42:40.994681 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-config\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:41.006367 master-0 kubenswrapper[7553]: I0318 17:42:40.994953 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-client-ca\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:41.006367 master-0 kubenswrapper[7553]: I0318 
17:42:40.996100 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1db0a246-ca43-4e7c-b09e-e80218ae99b1-serving-cert\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:41.006367 master-0 kubenswrapper[7553]: I0318 17:42:40.996637 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/253ec853-f637-4aa4-8e8e-eb655dfccccb-serving-cert\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:41.015919 master-0 kubenswrapper[7553]: I0318 17:42:41.015846 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx596\" (UniqueName: \"kubernetes.io/projected/253ec853-f637-4aa4-8e8e-eb655dfccccb-kube-api-access-cx596\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:41.019340 master-0 kubenswrapper[7553]: I0318 17:42:41.017454 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9g8f\" (UniqueName: \"kubernetes.io/projected/1db0a246-ca43-4e7c-b09e-e80218ae99b1-kube-api-access-n9g8f\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:41.047584 master-0 kubenswrapper[7553]: I0318 17:42:41.047504 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:41.075675 master-0 kubenswrapper[7553]: I0318 17:42:41.075614 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:41.956852 master-0 kubenswrapper[7553]: W0318 17:42:41.956801 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4f688df1_3bfc_412e_b311_f9f761a0b00a.slice/crio-8b2f45d6c107abfb552477dd96d792756dec17de0e0140f60d8c6b31c6fa4d1e WatchSource:0}: Error finding container 8b2f45d6c107abfb552477dd96d792756dec17de0e0140f60d8c6b31c6fa4d1e: Status 404 returned error can't find the container with id 8b2f45d6c107abfb552477dd96d792756dec17de0e0140f60d8c6b31c6fa4d1e Mar 18 17:42:41.962197 master-0 kubenswrapper[7553]: W0318 17:42:41.961982 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1a709ef9_91c0_4193_acb4_0594d02f554c.slice/crio-9e890c0b05b9ab9a66059688757a1f43723c4593388d1175f31db9b7e7ec8883 WatchSource:0}: Error finding container 9e890c0b05b9ab9a66059688757a1f43723c4593388d1175f31db9b7e7ec8883: Status 404 returned error can't find the container with id 9e890c0b05b9ab9a66059688757a1f43723c4593388d1175f31db9b7e7ec8883 Mar 18 17:42:42.011830 master-0 kubenswrapper[7553]: I0318 17:42:42.011768 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"1a709ef9-91c0-4193-acb4-0594d02f554c","Type":"ContainerStarted","Data":"9e890c0b05b9ab9a66059688757a1f43723c4593388d1175f31db9b7e7ec8883"} Mar 18 17:42:42.015440 master-0 kubenswrapper[7553]: I0318 17:42:42.015351 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" 
event={"ID":"4f688df1-3bfc-412e-b311-f9f761a0b00a","Type":"ContainerStarted","Data":"8b2f45d6c107abfb552477dd96d792756dec17de0e0140f60d8c6b31c6fa4d1e"} Mar 18 17:42:42.016988 master-0 kubenswrapper[7553]: I0318 17:42:42.016935 7553 generic.go:334] "Generic (PLEG): container finished" podID="43fab0f2-5cfd-4b5e-a632-728fd5b960fd" containerID="e9d865c621673d95e24957da6c5efc56f4b4cde9d2216c676659bdbab854d23a" exitCode=0 Mar 18 17:42:42.016988 master-0 kubenswrapper[7553]: I0318 17:42:42.016983 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" event={"ID":"43fab0f2-5cfd-4b5e-a632-728fd5b960fd","Type":"ContainerDied","Data":"e9d865c621673d95e24957da6c5efc56f4b4cde9d2216c676659bdbab854d23a"} Mar 18 17:42:42.512749 master-0 kubenswrapper[7553]: I0318 17:42:42.508761 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw"] Mar 18 17:42:42.513358 master-0 kubenswrapper[7553]: I0318 17:42:42.512953 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5755b457-f4cbl"] Mar 18 17:42:42.542163 master-0 kubenswrapper[7553]: W0318 17:42:42.542114 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1db0a246_ca43_4e7c_b09e_e80218ae99b1.slice/crio-e34a7d43723491c0ffb4df04571420d726ec22d80fe5f50be4255c5ba300c922 WatchSource:0}: Error finding container e34a7d43723491c0ffb4df04571420d726ec22d80fe5f50be4255c5ba300c922: Status 404 returned error can't find the container with id e34a7d43723491c0ffb4df04571420d726ec22d80fe5f50be4255c5ba300c922 Mar 18 17:42:43.027635 master-0 kubenswrapper[7553]: I0318 17:42:43.027444 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" 
event={"ID":"253ec853-f637-4aa4-8e8e-eb655dfccccb","Type":"ContainerStarted","Data":"44bcebab84e3e626740692adfb152c2797db6837bc5427bf84f3ada1de226018"} Mar 18 17:42:43.027635 master-0 kubenswrapper[7553]: I0318 17:42:43.027631 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" event={"ID":"253ec853-f637-4aa4-8e8e-eb655dfccccb","Type":"ContainerStarted","Data":"b84bd85aac3ddf41b65c4a3ee28624adfec16e2d4dd19c154137ff1a28ded42b"} Mar 18 17:42:43.028082 master-0 kubenswrapper[7553]: I0318 17:42:43.028048 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:43.030107 master-0 kubenswrapper[7553]: I0318 17:42:43.030063 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" event={"ID":"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e","Type":"ContainerStarted","Data":"8d4a4392fb62b19690bdd00e7dd0f4626d2ed6c3f32141c69d0cf8e940849d1f"} Mar 18 17:42:43.032139 master-0 kubenswrapper[7553]: I0318 17:42:43.032083 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" event={"ID":"1db0a246-ca43-4e7c-b09e-e80218ae99b1","Type":"ContainerStarted","Data":"b3ebfba10cf9d40bcef8b7b1707842cdd5329c0fa6c5118e3bdbf4e1fe51f08d"} Mar 18 17:42:43.032196 master-0 kubenswrapper[7553]: I0318 17:42:43.032143 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" event={"ID":"1db0a246-ca43-4e7c-b09e-e80218ae99b1","Type":"ContainerStarted","Data":"e34a7d43723491c0ffb4df04571420d726ec22d80fe5f50be4255c5ba300c922"} Mar 18 17:42:43.032468 master-0 kubenswrapper[7553]: I0318 17:42:43.032431 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:43.038311 master-0 kubenswrapper[7553]: I0318 17:42:43.035209 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"1a709ef9-91c0-4193-acb4-0594d02f554c","Type":"ContainerStarted","Data":"484988d6e1e2aeba58f6749a644020e240b6e9ebd0d813d191a1e837c5837362"} Mar 18 17:42:43.039841 master-0 kubenswrapper[7553]: I0318 17:42:43.039806 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:42:43.042464 master-0 kubenswrapper[7553]: I0318 17:42:43.042405 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" event={"ID":"d26d4515-391e-41a5-8c82-1b2b8a375662","Type":"ContainerStarted","Data":"c08cd14fe1ce6dcf04e7916d9d5a8cb80981c4007a423a03755dfeee8e27eeb4"} Mar 18 17:42:43.044056 master-0 kubenswrapper[7553]: I0318 17:42:43.043535 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:42:43.045417 master-0 kubenswrapper[7553]: I0318 17:42:43.045382 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" event={"ID":"43fab0f2-5cfd-4b5e-a632-728fd5b960fd","Type":"ContainerStarted","Data":"60b1b7ab2894f34a4d72e75e269d2d61041cb975246b74de9326bfa5ee794333"} Mar 18 17:42:43.063341 master-0 kubenswrapper[7553]: I0318 17:42:43.063241 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" podStartSLOduration=8.063220397 podStartE2EDuration="8.063220397s" podCreationTimestamp="2026-03-18 17:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:43.062080622 +0000 UTC m=+53.207915505" watchObservedRunningTime="2026-03-18 17:42:43.063220397 +0000 UTC m=+53.209055070" Mar 18 17:42:43.064867 master-0 kubenswrapper[7553]: I0318 17:42:43.064330 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"4f688df1-3bfc-412e-b311-f9f761a0b00a","Type":"ContainerStarted","Data":"fdeef07d8840260931a9408a0850cec7ff93ac6938603492d86d93449b1926fe"} Mar 18 17:42:43.068393 master-0 kubenswrapper[7553]: I0318 17:42:43.068346 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mfn52" event={"ID":"5a4f94f3-d63a-4869-b723-ae9637610b4b","Type":"ContainerStarted","Data":"f91859ca8cce7673f5eed0eb0988dd849459b4ee85bb3922e3a9f68f884d2b11"} Mar 18 17:42:43.069913 master-0 kubenswrapper[7553]: I0318 17:42:43.069874 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" event={"ID":"e9e04572-1425-440e-9869-6deef05e13e3","Type":"ContainerStarted","Data":"7e43314c8e037d8d04e40aabb2ea47f7293d5e3cc559929c7511e8d06accc3fb"} Mar 18 17:42:43.072645 master-0 kubenswrapper[7553]: I0318 17:42:43.070733 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:42:43.072645 master-0 kubenswrapper[7553]: I0318 17:42:43.071056 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" event={"ID":"e73f2834-c56c-4cef-ac3c-2317e9a4324c","Type":"ContainerStarted","Data":"7237163921aef14179170f8b6963ab7c60157d4f27e6da39581ca5dee7699026"} Mar 18 17:42:43.072645 master-0 kubenswrapper[7553]: I0318 17:42:43.071663 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:42:43.072645 master-0 kubenswrapper[7553]: I0318 17:42:43.072213 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:42:43.077927 master-0 kubenswrapper[7553]: I0318 17:42:43.077888 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 17:42:43.078134 master-0 kubenswrapper[7553]: I0318 17:42:43.078099 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 17:42:43.248277 master-0 kubenswrapper[7553]: I0318 17:42:43.248201 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" podStartSLOduration=11.231401988 podStartE2EDuration="20.248171748s" podCreationTimestamp="2026-03-18 17:42:23 +0000 UTC" firstStartedPulling="2026-03-18 17:42:28.841231576 +0000 UTC m=+38.987066249" lastFinishedPulling="2026-03-18 17:42:37.858001336 +0000 UTC m=+48.003836009" observedRunningTime="2026-03-18 17:42:43.208597987 +0000 UTC m=+53.354432660" watchObservedRunningTime="2026-03-18 17:42:43.248171748 +0000 UTC m=+53.394006431" Mar 18 17:42:43.249575 master-0 kubenswrapper[7553]: I0318 17:42:43.249549 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=7.249350693 podStartE2EDuration="7.249350693s" podCreationTimestamp="2026-03-18 17:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:43.248583087 +0000 UTC m=+53.394417760" watchObservedRunningTime="2026-03-18 17:42:43.249350693 +0000 UTC m=+53.395185366" Mar 18 17:42:43.285718 
master-0 kubenswrapper[7553]: I0318 17:42:43.284708 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" podStartSLOduration=8.284688642 podStartE2EDuration="8.284688642s" podCreationTimestamp="2026-03-18 17:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:43.281512312 +0000 UTC m=+53.427346985" watchObservedRunningTime="2026-03-18 17:42:43.284688642 +0000 UTC m=+53.430523315" Mar 18 17:42:43.339037 master-0 kubenswrapper[7553]: I0318 17:42:43.337509 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=9.337484304 podStartE2EDuration="9.337484304s" podCreationTimestamp="2026-03-18 17:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:43.337016163 +0000 UTC m=+53.482850836" watchObservedRunningTime="2026-03-18 17:42:43.337484304 +0000 UTC m=+53.483318977" Mar 18 17:42:43.438370 master-0 kubenswrapper[7553]: I0318 17:42:43.438260 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fg8h6"] Mar 18 17:42:43.441113 master-0 kubenswrapper[7553]: I0318 17:42:43.439670 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fg8h6" Mar 18 17:42:43.478353 master-0 kubenswrapper[7553]: I0318 17:42:43.477050 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fg8h6"] Mar 18 17:42:43.595316 master-0 kubenswrapper[7553]: I0318 17:42:43.594237 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a9075c3-bb4f-4559-8454-5e097f334957-utilities\") pod \"community-operators-fg8h6\" (UID: \"7a9075c3-bb4f-4559-8454-5e097f334957\") " pod="openshift-marketplace/community-operators-fg8h6" Mar 18 17:42:43.595316 master-0 kubenswrapper[7553]: I0318 17:42:43.594310 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a9075c3-bb4f-4559-8454-5e097f334957-catalog-content\") pod \"community-operators-fg8h6\" (UID: \"7a9075c3-bb4f-4559-8454-5e097f334957\") " pod="openshift-marketplace/community-operators-fg8h6" Mar 18 17:42:43.595316 master-0 kubenswrapper[7553]: I0318 17:42:43.594354 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvpm7\" (UniqueName: \"kubernetes.io/projected/7a9075c3-bb4f-4559-8454-5e097f334957-kube-api-access-kvpm7\") pod \"community-operators-fg8h6\" (UID: \"7a9075c3-bb4f-4559-8454-5e097f334957\") " pod="openshift-marketplace/community-operators-fg8h6" Mar 18 17:42:43.641301 master-0 kubenswrapper[7553]: I0318 17:42:43.640873 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hgw2n"] Mar 18 17:42:43.644007 master-0 kubenswrapper[7553]: I0318 17:42:43.641927 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hgw2n" Mar 18 17:42:43.682306 master-0 kubenswrapper[7553]: I0318 17:42:43.681551 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgw2n"] Mar 18 17:42:43.699303 master-0 kubenswrapper[7553]: I0318 17:42:43.696107 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a9075c3-bb4f-4559-8454-5e097f334957-utilities\") pod \"community-operators-fg8h6\" (UID: \"7a9075c3-bb4f-4559-8454-5e097f334957\") " pod="openshift-marketplace/community-operators-fg8h6" Mar 18 17:42:43.699303 master-0 kubenswrapper[7553]: I0318 17:42:43.696158 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a9075c3-bb4f-4559-8454-5e097f334957-catalog-content\") pod \"community-operators-fg8h6\" (UID: \"7a9075c3-bb4f-4559-8454-5e097f334957\") " pod="openshift-marketplace/community-operators-fg8h6" Mar 18 17:42:43.699303 master-0 kubenswrapper[7553]: I0318 17:42:43.696186 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njznj\" (UniqueName: \"kubernetes.io/projected/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-kube-api-access-njznj\") pod \"certified-operators-hgw2n\" (UID: \"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac\") " pod="openshift-marketplace/certified-operators-hgw2n" Mar 18 17:42:43.699303 master-0 kubenswrapper[7553]: I0318 17:42:43.696205 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvpm7\" (UniqueName: \"kubernetes.io/projected/7a9075c3-bb4f-4559-8454-5e097f334957-kube-api-access-kvpm7\") pod \"community-operators-fg8h6\" (UID: \"7a9075c3-bb4f-4559-8454-5e097f334957\") " pod="openshift-marketplace/community-operators-fg8h6" Mar 18 17:42:43.699303 master-0 kubenswrapper[7553]: I0318 
17:42:43.696255 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-catalog-content\") pod \"certified-operators-hgw2n\" (UID: \"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac\") " pod="openshift-marketplace/certified-operators-hgw2n" Mar 18 17:42:43.699303 master-0 kubenswrapper[7553]: I0318 17:42:43.696278 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-utilities\") pod \"certified-operators-hgw2n\" (UID: \"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac\") " pod="openshift-marketplace/certified-operators-hgw2n" Mar 18 17:42:43.699303 master-0 kubenswrapper[7553]: I0318 17:42:43.696811 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a9075c3-bb4f-4559-8454-5e097f334957-utilities\") pod \"community-operators-fg8h6\" (UID: \"7a9075c3-bb4f-4559-8454-5e097f334957\") " pod="openshift-marketplace/community-operators-fg8h6" Mar 18 17:42:43.699303 master-0 kubenswrapper[7553]: I0318 17:42:43.697098 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a9075c3-bb4f-4559-8454-5e097f334957-catalog-content\") pod \"community-operators-fg8h6\" (UID: \"7a9075c3-bb4f-4559-8454-5e097f334957\") " pod="openshift-marketplace/community-operators-fg8h6" Mar 18 17:42:43.761414 master-0 kubenswrapper[7553]: I0318 17:42:43.759226 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvpm7\" (UniqueName: \"kubernetes.io/projected/7a9075c3-bb4f-4559-8454-5e097f334957-kube-api-access-kvpm7\") pod \"community-operators-fg8h6\" (UID: \"7a9075c3-bb4f-4559-8454-5e097f334957\") " pod="openshift-marketplace/community-operators-fg8h6" Mar 18 
17:42:43.778395 master-0 kubenswrapper[7553]: I0318 17:42:43.776779 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fg8h6" Mar 18 17:42:43.797981 master-0 kubenswrapper[7553]: I0318 17:42:43.797898 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-catalog-content\") pod \"certified-operators-hgw2n\" (UID: \"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac\") " pod="openshift-marketplace/certified-operators-hgw2n" Mar 18 17:42:43.797981 master-0 kubenswrapper[7553]: I0318 17:42:43.797971 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-utilities\") pod \"certified-operators-hgw2n\" (UID: \"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac\") " pod="openshift-marketplace/certified-operators-hgw2n" Mar 18 17:42:43.798218 master-0 kubenswrapper[7553]: I0318 17:42:43.798035 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njznj\" (UniqueName: \"kubernetes.io/projected/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-kube-api-access-njznj\") pod \"certified-operators-hgw2n\" (UID: \"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac\") " pod="openshift-marketplace/certified-operators-hgw2n" Mar 18 17:42:43.799345 master-0 kubenswrapper[7553]: I0318 17:42:43.799296 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-catalog-content\") pod \"certified-operators-hgw2n\" (UID: \"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac\") " pod="openshift-marketplace/certified-operators-hgw2n" Mar 18 17:42:43.799697 master-0 kubenswrapper[7553]: I0318 17:42:43.799668 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-utilities\") pod \"certified-operators-hgw2n\" (UID: \"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac\") " pod="openshift-marketplace/certified-operators-hgw2n" Mar 18 17:42:43.821198 master-0 kubenswrapper[7553]: I0318 17:42:43.821141 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"] Mar 18 17:42:43.826925 master-0 kubenswrapper[7553]: I0318 17:42:43.825730 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" podUID="a02399de-859b-45b1-9b00-18a08f285f39" containerName="cluster-version-operator" containerID="cri-o://dcdc5126bc7dc1f71b0c2b6aa40d9d36da39eb734a75c107c672d7a72b2e46fb" gracePeriod=130 Mar 18 17:42:43.877710 master-0 kubenswrapper[7553]: I0318 17:42:43.877587 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njznj\" (UniqueName: \"kubernetes.io/projected/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-kube-api-access-njznj\") pod \"certified-operators-hgw2n\" (UID: \"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac\") " pod="openshift-marketplace/certified-operators-hgw2n" Mar 18 17:42:44.004972 master-0 kubenswrapper[7553]: I0318 17:42:44.004917 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hgw2n" Mar 18 17:42:44.047935 master-0 kubenswrapper[7553]: I0318 17:42:44.047881 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:44.049861 master-0 kubenswrapper[7553]: I0318 17:42:44.049828 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:44.076326 master-0 kubenswrapper[7553]: I0318 17:42:44.076180 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:44.088999 master-0 kubenswrapper[7553]: I0318 17:42:44.088929 7553 generic.go:334] "Generic (PLEG): container finished" podID="a02399de-859b-45b1-9b00-18a08f285f39" containerID="dcdc5126bc7dc1f71b0c2b6aa40d9d36da39eb734a75c107c672d7a72b2e46fb" exitCode=0 Mar 18 17:42:44.089783 master-0 kubenswrapper[7553]: I0318 17:42:44.089737 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" event={"ID":"a02399de-859b-45b1-9b00-18a08f285f39","Type":"ContainerDied","Data":"dcdc5126bc7dc1f71b0c2b6aa40d9d36da39eb734a75c107c672d7a72b2e46fb"} Mar 18 17:42:44.098857 master-0 kubenswrapper[7553]: I0318 17:42:44.098801 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 17:42:44.243456 master-0 kubenswrapper[7553]: I0318 17:42:44.243421 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:42:44.265869 master-0 kubenswrapper[7553]: I0318 17:42:44.265805 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fg8h6"] Mar 18 17:42:44.475454 master-0 kubenswrapper[7553]: I0318 
17:42:44.475372 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" Mar 18 17:42:44.614193 master-0 kubenswrapper[7553]: I0318 17:42:44.614082 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") pod \"a02399de-859b-45b1-9b00-18a08f285f39\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " Mar 18 17:42:44.614193 master-0 kubenswrapper[7553]: I0318 17:42:44.614169 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-ssl-certs\") pod \"a02399de-859b-45b1-9b00-18a08f285f39\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " Mar 18 17:42:44.614975 master-0 kubenswrapper[7553]: I0318 17:42:44.614243 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-cvo-updatepayloads\") pod \"a02399de-859b-45b1-9b00-18a08f285f39\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " Mar 18 17:42:44.614975 master-0 kubenswrapper[7553]: I0318 17:42:44.614375 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "a02399de-859b-45b1-9b00-18a08f285f39" (UID: "a02399de-859b-45b1-9b00-18a08f285f39"). InnerVolumeSpecName "etc-cvo-updatepayloads". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:42:44.614975 master-0 kubenswrapper[7553]: I0318 17:42:44.614463 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "a02399de-859b-45b1-9b00-18a08f285f39" (UID: "a02399de-859b-45b1-9b00-18a08f285f39"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:42:44.614975 master-0 kubenswrapper[7553]: I0318 17:42:44.614464 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a02399de-859b-45b1-9b00-18a08f285f39-service-ca\") pod \"a02399de-859b-45b1-9b00-18a08f285f39\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " Mar 18 17:42:44.614975 master-0 kubenswrapper[7553]: I0318 17:42:44.614619 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02399de-859b-45b1-9b00-18a08f285f39-kube-api-access\") pod \"a02399de-859b-45b1-9b00-18a08f285f39\" (UID: \"a02399de-859b-45b1-9b00-18a08f285f39\") " Mar 18 17:42:44.615624 master-0 kubenswrapper[7553]: I0318 17:42:44.615540 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a02399de-859b-45b1-9b00-18a08f285f39-service-ca" (OuterVolumeSpecName: "service-ca") pod "a02399de-859b-45b1-9b00-18a08f285f39" (UID: "a02399de-859b-45b1-9b00-18a08f285f39"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:42:44.615727 master-0 kubenswrapper[7553]: I0318 17:42:44.615703 7553 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a02399de-859b-45b1-9b00-18a08f285f39-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:44.615727 master-0 kubenswrapper[7553]: I0318 17:42:44.615724 7553 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:44.615864 master-0 kubenswrapper[7553]: I0318 17:42:44.615735 7553 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a02399de-859b-45b1-9b00-18a08f285f39-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:44.619658 master-0 kubenswrapper[7553]: I0318 17:42:44.619599 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a02399de-859b-45b1-9b00-18a08f285f39" (UID: "a02399de-859b-45b1-9b00-18a08f285f39"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 17:42:44.619658 master-0 kubenswrapper[7553]: I0318 17:42:44.619598 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a02399de-859b-45b1-9b00-18a08f285f39-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a02399de-859b-45b1-9b00-18a08f285f39" (UID: "a02399de-859b-45b1-9b00-18a08f285f39"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:42:44.717063 master-0 kubenswrapper[7553]: I0318 17:42:44.716922 7553 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a02399de-859b-45b1-9b00-18a08f285f39-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:44.717063 master-0 kubenswrapper[7553]: I0318 17:42:44.716982 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a02399de-859b-45b1-9b00-18a08f285f39-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 17:42:45.099238 master-0 kubenswrapper[7553]: I0318 17:42:45.099120 7553 generic.go:334] "Generic (PLEG): container finished" podID="7a9075c3-bb4f-4559-8454-5e097f334957" containerID="61c8cdb2c792c0b417482aea9cd0f1183a7a5d96313ea5188479d603314faf40" exitCode=0 Mar 18 17:42:45.099454 master-0 kubenswrapper[7553]: I0318 17:42:45.099216 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fg8h6" event={"ID":"7a9075c3-bb4f-4559-8454-5e097f334957","Type":"ContainerDied","Data":"61c8cdb2c792c0b417482aea9cd0f1183a7a5d96313ea5188479d603314faf40"} Mar 18 17:42:45.099454 master-0 kubenswrapper[7553]: I0318 17:42:45.099351 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fg8h6" event={"ID":"7a9075c3-bb4f-4559-8454-5e097f334957","Type":"ContainerStarted","Data":"aa58349ddd9078d7290e359fb92c428dc5b57e83b5248dcb6c4eb5055e4481ef"} Mar 18 17:42:45.101992 master-0 kubenswrapper[7553]: I0318 17:42:45.101958 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"
Mar 18 17:42:45.102058 master-0 kubenswrapper[7553]: I0318 17:42:45.102023 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj" event={"ID":"a02399de-859b-45b1-9b00-18a08f285f39","Type":"ContainerDied","Data":"b910fcd86d2c6a577227001de82fb055189643becfc32f71187a0e36a182af53"}
Mar 18 17:42:45.102125 master-0 kubenswrapper[7553]: I0318 17:42:45.102087 7553 scope.go:117] "RemoveContainer" containerID="dcdc5126bc7dc1f71b0c2b6aa40d9d36da39eb734a75c107c672d7a72b2e46fb"
Mar 18 17:42:45.223191 master-0 kubenswrapper[7553]: I0318 17:42:45.222467 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgw2n"]
Mar 18 17:42:45.242597 master-0 kubenswrapper[7553]: W0318 17:42:45.242512 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7203a5f_0f67_48ca_a12b_be3b0ce7cbac.slice/crio-5e886fe6be4c394b26355f867117ac224f8e36a7b3550590d5568700c659bdf2 WatchSource:0}: Error finding container 5e886fe6be4c394b26355f867117ac224f8e36a7b3550590d5568700c659bdf2: Status 404 returned error can't find the container with id 5e886fe6be4c394b26355f867117ac224f8e36a7b3550590d5568700c659bdf2
Mar 18 17:42:46.089376 master-0 kubenswrapper[7553]: I0318 17:42:46.084645 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j4kft"]
Mar 18 17:42:46.089376 master-0 kubenswrapper[7553]: E0318 17:42:46.084890 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a02399de-859b-45b1-9b00-18a08f285f39" containerName="cluster-version-operator"
Mar 18 17:42:46.089376 master-0 kubenswrapper[7553]: I0318 17:42:46.084903 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="a02399de-859b-45b1-9b00-18a08f285f39" containerName="cluster-version-operator"
Mar 18 17:42:46.089376 master-0 kubenswrapper[7553]: I0318 17:42:46.085008 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="a02399de-859b-45b1-9b00-18a08f285f39" containerName="cluster-version-operator"
Mar 18 17:42:46.089376 master-0 kubenswrapper[7553]: I0318 17:42:46.085739 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4kft"
Mar 18 17:42:46.114690 master-0 kubenswrapper[7553]: I0318 17:42:46.114626 7553 generic.go:334] "Generic (PLEG): container finished" podID="f7203a5f-0f67-48ca-a12b-be3b0ce7cbac" containerID="551067c3fcf32dd80a122d54114412b5d6bfe4459ccf677a49f09efa0aea73a5" exitCode=0
Mar 18 17:42:46.114862 master-0 kubenswrapper[7553]: I0318 17:42:46.114740 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgw2n" event={"ID":"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac","Type":"ContainerDied","Data":"551067c3fcf32dd80a122d54114412b5d6bfe4459ccf677a49f09efa0aea73a5"}
Mar 18 17:42:46.114929 master-0 kubenswrapper[7553]: I0318 17:42:46.114842 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgw2n" event={"ID":"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac","Type":"ContainerStarted","Data":"5e886fe6be4c394b26355f867117ac224f8e36a7b3550590d5568700c659bdf2"}
Mar 18 17:42:46.266692 master-0 kubenswrapper[7553]: I0318 17:42:46.266539 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnrk4\" (UniqueName: \"kubernetes.io/projected/35595774-da4b-499c-bd6e-1ae5af144833-kube-api-access-jnrk4\") pod \"redhat-marketplace-j4kft\" (UID: \"35595774-da4b-499c-bd6e-1ae5af144833\") " pod="openshift-marketplace/redhat-marketplace-j4kft"
Mar 18 17:42:46.267590 master-0 kubenswrapper[7553]: I0318 17:42:46.267101 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35595774-da4b-499c-bd6e-1ae5af144833-catalog-content\") pod \"redhat-marketplace-j4kft\" (UID: \"35595774-da4b-499c-bd6e-1ae5af144833\") " pod="openshift-marketplace/redhat-marketplace-j4kft"
Mar 18 17:42:46.267689 master-0 kubenswrapper[7553]: I0318 17:42:46.267628 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35595774-da4b-499c-bd6e-1ae5af144833-utilities\") pod \"redhat-marketplace-j4kft\" (UID: \"35595774-da4b-499c-bd6e-1ae5af144833\") " pod="openshift-marketplace/redhat-marketplace-j4kft"
Mar 18 17:42:46.369760 master-0 kubenswrapper[7553]: I0318 17:42:46.369498 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnrk4\" (UniqueName: \"kubernetes.io/projected/35595774-da4b-499c-bd6e-1ae5af144833-kube-api-access-jnrk4\") pod \"redhat-marketplace-j4kft\" (UID: \"35595774-da4b-499c-bd6e-1ae5af144833\") " pod="openshift-marketplace/redhat-marketplace-j4kft"
Mar 18 17:42:46.369760 master-0 kubenswrapper[7553]: I0318 17:42:46.369675 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35595774-da4b-499c-bd6e-1ae5af144833-catalog-content\") pod \"redhat-marketplace-j4kft\" (UID: \"35595774-da4b-499c-bd6e-1ae5af144833\") " pod="openshift-marketplace/redhat-marketplace-j4kft"
Mar 18 17:42:46.369760 master-0 kubenswrapper[7553]: I0318 17:42:46.369732 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35595774-da4b-499c-bd6e-1ae5af144833-utilities\") pod \"redhat-marketplace-j4kft\" (UID: \"35595774-da4b-499c-bd6e-1ae5af144833\") " pod="openshift-marketplace/redhat-marketplace-j4kft"
Mar 18 17:42:46.371220 master-0 kubenswrapper[7553]: I0318 17:42:46.371127 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35595774-da4b-499c-bd6e-1ae5af144833-catalog-content\") pod \"redhat-marketplace-j4kft\" (UID: \"35595774-da4b-499c-bd6e-1ae5af144833\") " pod="openshift-marketplace/redhat-marketplace-j4kft"
Mar 18 17:42:46.372426 master-0 kubenswrapper[7553]: I0318 17:42:46.371672 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35595774-da4b-499c-bd6e-1ae5af144833-utilities\") pod \"redhat-marketplace-j4kft\" (UID: \"35595774-da4b-499c-bd6e-1ae5af144833\") " pod="openshift-marketplace/redhat-marketplace-j4kft"
Mar 18 17:42:46.497917 master-0 kubenswrapper[7553]: I0318 17:42:46.494161 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4kft"]
Mar 18 17:42:46.511843 master-0 kubenswrapper[7553]: I0318 17:42:46.511785 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnrk4\" (UniqueName: \"kubernetes.io/projected/35595774-da4b-499c-bd6e-1ae5af144833-kube-api-access-jnrk4\") pod \"redhat-marketplace-j4kft\" (UID: \"35595774-da4b-499c-bd6e-1ae5af144833\") " pod="openshift-marketplace/redhat-marketplace-j4kft"
Mar 18 17:42:46.717873 master-0 kubenswrapper[7553]: I0318 17:42:46.717716 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4kft"
Mar 18 17:42:47.708785 master-0 kubenswrapper[7553]: I0318 17:42:47.708717 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"]
Mar 18 17:42:47.715363 master-0 kubenswrapper[7553]: I0318 17:42:47.715316 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4kft"]
Mar 18 17:42:47.728499 master-0 kubenswrapper[7553]: W0318 17:42:47.728445 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35595774_da4b_499c_bd6e_1ae5af144833.slice/crio-864aeba19e2a5216570c21cd0d0d14315a2e5721c472a22dae48be501e01bd99 WatchSource:0}: Error finding container 864aeba19e2a5216570c21cd0d0d14315a2e5721c472a22dae48be501e01bd99: Status 404 returned error can't find the container with id 864aeba19e2a5216570c21cd0d0d14315a2e5721c472a22dae48be501e01bd99
Mar 18 17:42:48.130401 master-0 kubenswrapper[7553]: I0318 17:42:48.130334 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4kft" event={"ID":"35595774-da4b-499c-bd6e-1ae5af144833","Type":"ContainerStarted","Data":"feeb08461ad0d7781535b30235701d5143dfea88febf0f65b78d8d5869fe57f4"}
Mar 18 17:42:48.130401 master-0 kubenswrapper[7553]: I0318 17:42:48.130399 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4kft" event={"ID":"35595774-da4b-499c-bd6e-1ae5af144833","Type":"ContainerStarted","Data":"864aeba19e2a5216570c21cd0d0d14315a2e5721c472a22dae48be501e01bd99"}
Mar 18 17:42:48.278231 master-0 kubenswrapper[7553]: I0318 17:42:48.278152 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jlj6j"]
Mar 18 17:42:48.279794 master-0 kubenswrapper[7553]: I0318 17:42:48.279764 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jlj6j"
Mar 18 17:42:48.417715 master-0 kubenswrapper[7553]: I0318 17:42:48.417591 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-catalog-content\") pod \"redhat-operators-jlj6j\" (UID: \"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6\") " pod="openshift-marketplace/redhat-operators-jlj6j"
Mar 18 17:42:48.418005 master-0 kubenswrapper[7553]: I0318 17:42:48.417774 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-utilities\") pod \"redhat-operators-jlj6j\" (UID: \"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6\") " pod="openshift-marketplace/redhat-operators-jlj6j"
Mar 18 17:42:48.418005 master-0 kubenswrapper[7553]: I0318 17:42:48.417883 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llblv\" (UniqueName: \"kubernetes.io/projected/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-kube-api-access-llblv\") pod \"redhat-operators-jlj6j\" (UID: \"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6\") " pod="openshift-marketplace/redhat-operators-jlj6j"
Mar 18 17:42:48.484385 master-0 kubenswrapper[7553]: I0318 17:42:48.484220 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-lqvvj"]
Mar 18 17:42:48.493347 master-0 kubenswrapper[7553]: I0318 17:42:48.486046 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jlj6j"]
Mar 18 17:42:48.519534 master-0 kubenswrapper[7553]: I0318 17:42:48.519475 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-catalog-content\") pod \"redhat-operators-jlj6j\" (UID: \"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6\") " pod="openshift-marketplace/redhat-operators-jlj6j"
Mar 18 17:42:48.519634 master-0 kubenswrapper[7553]: I0318 17:42:48.519544 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-utilities\") pod \"redhat-operators-jlj6j\" (UID: \"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6\") " pod="openshift-marketplace/redhat-operators-jlj6j"
Mar 18 17:42:48.519634 master-0 kubenswrapper[7553]: I0318 17:42:48.519605 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llblv\" (UniqueName: \"kubernetes.io/projected/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-kube-api-access-llblv\") pod \"redhat-operators-jlj6j\" (UID: \"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6\") " pod="openshift-marketplace/redhat-operators-jlj6j"
Mar 18 17:42:48.520056 master-0 kubenswrapper[7553]: I0318 17:42:48.519967 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-catalog-content\") pod \"redhat-operators-jlj6j\" (UID: \"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6\") " pod="openshift-marketplace/redhat-operators-jlj6j"
Mar 18 17:42:48.520357 master-0 kubenswrapper[7553]: I0318 17:42:48.520320 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-utilities\") pod \"redhat-operators-jlj6j\" (UID: \"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6\") " pod="openshift-marketplace/redhat-operators-jlj6j"
Mar 18 17:42:49.139687 master-0 kubenswrapper[7553]: I0318 17:42:49.139620 7553 generic.go:334] "Generic (PLEG): container finished" podID="35595774-da4b-499c-bd6e-1ae5af144833" containerID="feeb08461ad0d7781535b30235701d5143dfea88febf0f65b78d8d5869fe57f4" exitCode=0
Mar 18 17:42:49.139687 master-0 kubenswrapper[7553]: I0318 17:42:49.139679 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4kft" event={"ID":"35595774-da4b-499c-bd6e-1ae5af144833","Type":"ContainerDied","Data":"feeb08461ad0d7781535b30235701d5143dfea88febf0f65b78d8d5869fe57f4"}
Mar 18 17:42:50.059731 master-0 kubenswrapper[7553]: I0318 17:42:50.059682 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a02399de-859b-45b1-9b00-18a08f285f39" path="/var/lib/kubelet/pods/a02399de-859b-45b1-9b00-18a08f285f39/volumes"
Mar 18 17:42:50.264886 master-0 kubenswrapper[7553]: I0318 17:42:50.264802 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llblv\" (UniqueName: \"kubernetes.io/projected/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-kube-api-access-llblv\") pod \"redhat-operators-jlj6j\" (UID: \"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6\") " pod="openshift-marketplace/redhat-operators-jlj6j"
Mar 18 17:42:50.424054 master-0 kubenswrapper[7553]: I0318 17:42:50.423978 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jlj6j"
Mar 18 17:42:51.296702 master-0 kubenswrapper[7553]: I0318 17:42:51.296642 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jlj6j"]
Mar 18 17:42:51.314177 master-0 kubenswrapper[7553]: I0318 17:42:51.314031 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 18 17:42:51.314520 master-0 kubenswrapper[7553]: I0318 17:42:51.314490 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="4f688df1-3bfc-412e-b311-f9f761a0b00a" containerName="installer" containerID="cri-o://fdeef07d8840260931a9408a0850cec7ff93ac6938603492d86d93449b1926fe" gracePeriod=30
Mar 18 17:42:51.326733 master-0 kubenswrapper[7553]: I0318 17:42:51.326157 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"]
Mar 18 17:42:51.329480 master-0 kubenswrapper[7553]: I0318 17:42:51.328181 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.338421 master-0 kubenswrapper[7553]: I0318 17:42:51.337468 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-tns2v"
Mar 18 17:42:51.342424 master-0 kubenswrapper[7553]: I0318 17:42:51.340226 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 17:42:51.342424 master-0 kubenswrapper[7553]: I0318 17:42:51.340427 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 17:42:51.346961 master-0 kubenswrapper[7553]: I0318 17:42:51.346505 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 18 17:42:51.389174 master-0 kubenswrapper[7553]: I0318 17:42:51.389126 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fg8h6"]
Mar 18 17:42:51.466208 master-0 kubenswrapper[7553]: I0318 17:42:51.465360 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hgw2n"]
Mar 18 17:42:51.474447 master-0 kubenswrapper[7553]: I0318 17:42:51.473559 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8485d"]
Mar 18 17:42:51.477379 master-0 kubenswrapper[7553]: I0318 17:42:51.475851 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8485d"
Mar 18 17:42:51.477379 master-0 kubenswrapper[7553]: I0318 17:42:51.476482 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.477379 master-0 kubenswrapper[7553]: I0318 17:42:51.476534 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-service-ca\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.477379 master-0 kubenswrapper[7553]: I0318 17:42:51.476638 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-kube-api-access\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.477379 master-0 kubenswrapper[7553]: I0318 17:42:51.476668 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.477379 master-0 kubenswrapper[7553]: I0318 17:42:51.476701 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-serving-cert\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.488163 master-0 kubenswrapper[7553]: I0318 17:42:51.487171 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-6fg48"
Mar 18 17:42:51.516798 master-0 kubenswrapper[7553]: I0318 17:42:51.516368 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8485d"]
Mar 18 17:42:51.526788 master-0 kubenswrapper[7553]: I0318 17:42:51.526586 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vbglp"]
Mar 18 17:42:51.529321 master-0 kubenswrapper[7553]: I0318 17:42:51.528717 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vbglp"
Mar 18 17:42:51.544352 master-0 kubenswrapper[7553]: I0318 17:42:51.542022 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-zxhl4"
Mar 18 17:42:51.552377 master-0 kubenswrapper[7553]: I0318 17:42:51.549333 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"]
Mar 18 17:42:51.552377 master-0 kubenswrapper[7553]: I0318 17:42:51.550890 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 17:42:51.567324 master-0 kubenswrapper[7553]: I0318 17:42:51.564866 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 18 17:42:51.567324 master-0 kubenswrapper[7553]: I0318 17:42:51.565159 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-22mk8"
Mar 18 17:42:51.598151 master-0 kubenswrapper[7553]: I0318 17:42:51.580565 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.598151 master-0 kubenswrapper[7553]: I0318 17:42:51.592123 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-serving-cert\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.598151 master-0 kubenswrapper[7553]: I0318 17:42:51.592254 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.598151 master-0 kubenswrapper[7553]: I0318 17:42:51.592443 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489dd872-39c3-4ce2-8dc1-9d0552b88616-catalog-content\") pod \"community-operators-8485d\" (UID: \"489dd872-39c3-4ce2-8dc1-9d0552b88616\") " pod="openshift-marketplace/community-operators-8485d"
Mar 18 17:42:51.598151 master-0 kubenswrapper[7553]: I0318 17:42:51.592494 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-service-ca\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.598151 master-0 kubenswrapper[7553]: I0318 17:42:51.592855 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489dd872-39c3-4ce2-8dc1-9d0552b88616-utilities\") pod \"community-operators-8485d\" (UID: \"489dd872-39c3-4ce2-8dc1-9d0552b88616\") " pod="openshift-marketplace/community-operators-8485d"
Mar 18 17:42:51.598151 master-0 kubenswrapper[7553]: I0318 17:42:51.593173 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-kube-api-access\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.598151 master-0 kubenswrapper[7553]: I0318 17:42:51.593215 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjtg7\" (UniqueName: \"kubernetes.io/projected/489dd872-39c3-4ce2-8dc1-9d0552b88616-kube-api-access-wjtg7\") pod \"community-operators-8485d\" (UID: \"489dd872-39c3-4ce2-8dc1-9d0552b88616\") " pod="openshift-marketplace/community-operators-8485d"
Mar 18 17:42:51.598151 master-0 kubenswrapper[7553]: I0318 17:42:51.580883 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.598151 master-0 kubenswrapper[7553]: I0318 17:42:51.583243 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vbglp"]
Mar 18 17:42:51.598151 master-0 kubenswrapper[7553]: I0318 17:42:51.593601 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.598151 master-0 kubenswrapper[7553]: I0318 17:42:51.595976 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-service-ca\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.598151 master-0 kubenswrapper[7553]: I0318 17:42:51.596445 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"]
Mar 18 17:42:51.620715 master-0 kubenswrapper[7553]: I0318 17:42:51.620666 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-serving-cert\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.661193 master-0 kubenswrapper[7553]: I0318 17:42:51.661107 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-kube-api-access\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.699336 master-0 kubenswrapper[7553]: I0318 17:42:51.695086 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc110414-3a6b-474c-bce3-33450cab8fcd-utilities\") pod \"certified-operators-vbglp\" (UID: \"dc110414-3a6b-474c-bce3-33450cab8fcd\") " pod="openshift-marketplace/certified-operators-vbglp"
Mar 18 17:42:51.699336 master-0 kubenswrapper[7553]: I0318 17:42:51.695195 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489dd872-39c3-4ce2-8dc1-9d0552b88616-utilities\") pod \"community-operators-8485d\" (UID: \"489dd872-39c3-4ce2-8dc1-9d0552b88616\") " pod="openshift-marketplace/community-operators-8485d"
Mar 18 17:42:51.699336 master-0 kubenswrapper[7553]: I0318 17:42:51.695235 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjtg7\" (UniqueName: \"kubernetes.io/projected/489dd872-39c3-4ce2-8dc1-9d0552b88616-kube-api-access-wjtg7\") pod \"community-operators-8485d\" (UID: \"489dd872-39c3-4ce2-8dc1-9d0552b88616\") " pod="openshift-marketplace/community-operators-8485d"
Mar 18 17:42:51.699336 master-0 kubenswrapper[7553]: I0318 17:42:51.695258 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8db04037-c7cc-4246-92c3-6e7985384b14-tmpfs\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 17:42:51.699336 master-0 kubenswrapper[7553]: I0318 17:42:51.695302 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnl7c\" (UniqueName: \"kubernetes.io/projected/dc110414-3a6b-474c-bce3-33450cab8fcd-kube-api-access-mnl7c\") pod \"certified-operators-vbglp\" (UID: \"dc110414-3a6b-474c-bce3-33450cab8fcd\") " pod="openshift-marketplace/certified-operators-vbglp"
Mar 18 17:42:51.699336 master-0 kubenswrapper[7553]: I0318 17:42:51.695324 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fglbh\" (UniqueName: \"kubernetes.io/projected/8db04037-c7cc-4246-92c3-6e7985384b14-kube-api-access-fglbh\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 17:42:51.699336 master-0 kubenswrapper[7553]: I0318 17:42:51.695358 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc110414-3a6b-474c-bce3-33450cab8fcd-catalog-content\") pod \"certified-operators-vbglp\" (UID: \"dc110414-3a6b-474c-bce3-33450cab8fcd\") " pod="openshift-marketplace/certified-operators-vbglp"
Mar 18 17:42:51.699336 master-0 kubenswrapper[7553]: I0318 17:42:51.695386 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489dd872-39c3-4ce2-8dc1-9d0552b88616-catalog-content\") pod \"community-operators-8485d\" (UID: \"489dd872-39c3-4ce2-8dc1-9d0552b88616\") " pod="openshift-marketplace/community-operators-8485d"
Mar 18 17:42:51.699336 master-0 kubenswrapper[7553]: I0318 17:42:51.695406 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-webhook-cert\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 17:42:51.699336 master-0 kubenswrapper[7553]: I0318 17:42:51.695430 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-apiservice-cert\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 17:42:51.699336 master-0 kubenswrapper[7553]: I0318 17:42:51.696027 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489dd872-39c3-4ce2-8dc1-9d0552b88616-utilities\") pod \"community-operators-8485d\" (UID: \"489dd872-39c3-4ce2-8dc1-9d0552b88616\") " pod="openshift-marketplace/community-operators-8485d"
Mar 18 17:42:51.699336 master-0 kubenswrapper[7553]: I0318 17:42:51.696911 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489dd872-39c3-4ce2-8dc1-9d0552b88616-catalog-content\") pod \"community-operators-8485d\" (UID: \"489dd872-39c3-4ce2-8dc1-9d0552b88616\") " pod="openshift-marketplace/community-operators-8485d"
Mar 18 17:42:51.734313 master-0 kubenswrapper[7553]: I0318 17:42:51.729152 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 17:42:51.753161 master-0 kubenswrapper[7553]: I0318 17:42:51.752656 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjtg7\" (UniqueName: \"kubernetes.io/projected/489dd872-39c3-4ce2-8dc1-9d0552b88616-kube-api-access-wjtg7\") pod \"community-operators-8485d\" (UID: \"489dd872-39c3-4ce2-8dc1-9d0552b88616\") " pod="openshift-marketplace/community-operators-8485d"
Mar 18 17:42:51.797470 master-0 kubenswrapper[7553]: I0318 17:42:51.797399 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-apiservice-cert\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 17:42:51.797723 master-0 kubenswrapper[7553]: I0318 17:42:51.797488 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc110414-3a6b-474c-bce3-33450cab8fcd-utilities\") pod \"certified-operators-vbglp\" (UID: \"dc110414-3a6b-474c-bce3-33450cab8fcd\") " pod="openshift-marketplace/certified-operators-vbglp"
Mar 18 17:42:51.797723 master-0 kubenswrapper[7553]: I0318 17:42:51.797566 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8db04037-c7cc-4246-92c3-6e7985384b14-tmpfs\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 17:42:51.797723 master-0 kubenswrapper[7553]: I0318 17:42:51.797610 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnl7c\" (UniqueName: \"kubernetes.io/projected/dc110414-3a6b-474c-bce3-33450cab8fcd-kube-api-access-mnl7c\") pod \"certified-operators-vbglp\" (UID: \"dc110414-3a6b-474c-bce3-33450cab8fcd\") " pod="openshift-marketplace/certified-operators-vbglp"
Mar 18 17:42:51.797723 master-0 kubenswrapper[7553]: I0318 17:42:51.797637 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fglbh\" (UniqueName: \"kubernetes.io/projected/8db04037-c7cc-4246-92c3-6e7985384b14-kube-api-access-fglbh\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 17:42:51.797723 master-0 kubenswrapper[7553]: I0318 17:42:51.797668 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc110414-3a6b-474c-bce3-33450cab8fcd-catalog-content\") pod \"certified-operators-vbglp\" (UID: \"dc110414-3a6b-474c-bce3-33450cab8fcd\") " pod="openshift-marketplace/certified-operators-vbglp"
Mar 18 17:42:51.797723 master-0 kubenswrapper[7553]: I0318 17:42:51.797710 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-webhook-cert\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 17:42:51.799420 master-0 kubenswrapper[7553]: I0318 17:42:51.799224 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc110414-3a6b-474c-bce3-33450cab8fcd-utilities\") pod \"certified-operators-vbglp\" (UID: \"dc110414-3a6b-474c-bce3-33450cab8fcd\") " pod="openshift-marketplace/certified-operators-vbglp"
Mar 18 17:42:51.799765 master-0 kubenswrapper[7553]: I0318 17:42:51.799687 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc110414-3a6b-474c-bce3-33450cab8fcd-catalog-content\") pod \"certified-operators-vbglp\" (UID: \"dc110414-3a6b-474c-bce3-33450cab8fcd\") " pod="openshift-marketplace/certified-operators-vbglp"
Mar 18 17:42:51.799820 master-0 kubenswrapper[7553]: I0318 17:42:51.799775 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8db04037-c7cc-4246-92c3-6e7985384b14-tmpfs\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 17:42:51.801347 master-0 kubenswrapper[7553]: I0318 17:42:51.801314 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-webhook-cert\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 17:42:51.805477 master-0 kubenswrapper[7553]: I0318 17:42:51.805376 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-apiservice-cert\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 17:42:51.855340 master-0 kubenswrapper[7553]: I0318 17:42:51.853395 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8485d"
Mar 18 17:42:51.859323 master-0 kubenswrapper[7553]: I0318 17:42:51.855991 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fglbh\" (UniqueName: \"kubernetes.io/projected/8db04037-c7cc-4246-92c3-6e7985384b14-kube-api-access-fglbh\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 17:42:51.882126 master-0 kubenswrapper[7553]: I0318 17:42:51.880493 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnl7c\" (UniqueName: \"kubernetes.io/projected/dc110414-3a6b-474c-bce3-33450cab8fcd-kube-api-access-mnl7c\") pod \"certified-operators-vbglp\" (UID: \"dc110414-3a6b-474c-bce3-33450cab8fcd\") " pod="openshift-marketplace/certified-operators-vbglp"
Mar 18 17:42:51.943801 master-0 kubenswrapper[7553]: I0318 17:42:51.942970 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vbglp"
Mar 18 17:42:51.979077 master-0 kubenswrapper[7553]: I0318 17:42:51.979003 7553 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt" Mar 18 17:42:52.201956 master-0 kubenswrapper[7553]: I0318 17:42:52.199034 7553 generic.go:334] "Generic (PLEG): container finished" podID="e7a6e8f4-26e0-454c-bfbb-f97e72636bf6" containerID="5053ea0774db745bc5171a3a5165d83776bd53e8f14bf88f72c237e917e3529a" exitCode=0 Mar 18 17:42:52.201956 master-0 kubenswrapper[7553]: I0318 17:42:52.199196 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlj6j" event={"ID":"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6","Type":"ContainerDied","Data":"5053ea0774db745bc5171a3a5165d83776bd53e8f14bf88f72c237e917e3529a"} Mar 18 17:42:52.201956 master-0 kubenswrapper[7553]: I0318 17:42:52.199234 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlj6j" event={"ID":"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6","Type":"ContainerStarted","Data":"61837a983030238f211aa8ac08747382382504f05535c6f547af651eb6b3ff48"} Mar 18 17:42:52.224390 master-0 kubenswrapper[7553]: I0318 17:42:52.221400 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm" event={"ID":"fdab27a1-1d7a-4dc5-b828-eba3f57592dd","Type":"ContainerStarted","Data":"5eda9ef28d74f5cd7a10971a5854c8a51a0c32becadb69afd3686ca34d1563e1"} Mar 18 17:42:52.224390 master-0 kubenswrapper[7553]: I0318 17:42:52.221470 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm" event={"ID":"fdab27a1-1d7a-4dc5-b828-eba3f57592dd","Type":"ContainerStarted","Data":"14298257e1956a282ef61298797ea8ea8e4d9b9c2a924ea5f21c88394abce76c"} Mar 18 17:42:52.291102 master-0 kubenswrapper[7553]: I0318 17:42:52.290755 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm" podStartSLOduration=1.290734486 
podStartE2EDuration="1.290734486s" podCreationTimestamp="2026-03-18 17:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:52.290183234 +0000 UTC m=+62.436017907" watchObservedRunningTime="2026-03-18 17:42:52.290734486 +0000 UTC m=+62.436569149" Mar 18 17:42:52.471089 master-0 kubenswrapper[7553]: I0318 17:42:52.468077 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"] Mar 18 17:42:52.516415 master-0 kubenswrapper[7553]: I0318 17:42:52.515360 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8485d"] Mar 18 17:42:52.542652 master-0 kubenswrapper[7553]: I0318 17:42:52.542601 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vbglp"] Mar 18 17:42:52.554330 master-0 kubenswrapper[7553]: W0318 17:42:52.554268 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc110414_3a6b_474c_bce3_33450cab8fcd.slice/crio-697aebe4c54170282f2900b6eb7950a2671c76c6eb51ac74def7ef20f0b63370 WatchSource:0}: Error finding container 697aebe4c54170282f2900b6eb7950a2671c76c6eb51ac74def7ef20f0b63370: Status 404 returned error can't find the container with id 697aebe4c54170282f2900b6eb7950a2671c76c6eb51ac74def7ef20f0b63370 Mar 18 17:42:53.238155 master-0 kubenswrapper[7553]: I0318 17:42:53.238086 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4kft"] Mar 18 17:42:53.250303 master-0 kubenswrapper[7553]: I0318 17:42:53.247197 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt" 
event={"ID":"8db04037-c7cc-4246-92c3-6e7985384b14","Type":"ContainerStarted","Data":"47365880fe826ecab9d7fe8d34683e85bfa742bad43f8684dd6fda3ca748f67a"} Mar 18 17:42:53.250303 master-0 kubenswrapper[7553]: I0318 17:42:53.247259 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt" event={"ID":"8db04037-c7cc-4246-92c3-6e7985384b14","Type":"ContainerStarted","Data":"7e0345d8f514108b800a0c4627bc3a13dd0326586f06b4e1904eb81090cc64aa"} Mar 18 17:42:53.250303 master-0 kubenswrapper[7553]: I0318 17:42:53.247783 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt" Mar 18 17:42:53.250303 master-0 kubenswrapper[7553]: I0318 17:42:53.250260 7553 generic.go:334] "Generic (PLEG): container finished" podID="dc110414-3a6b-474c-bce3-33450cab8fcd" containerID="8293ae1276c1f139d18ab84c79b4ef640dd21f0be4c4014a118798b7acdc2d44" exitCode=0 Mar 18 17:42:53.250603 master-0 kubenswrapper[7553]: I0318 17:42:53.250366 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vbglp" event={"ID":"dc110414-3a6b-474c-bce3-33450cab8fcd","Type":"ContainerDied","Data":"8293ae1276c1f139d18ab84c79b4ef640dd21f0be4c4014a118798b7acdc2d44"} Mar 18 17:42:53.250603 master-0 kubenswrapper[7553]: I0318 17:42:53.250401 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vbglp" event={"ID":"dc110414-3a6b-474c-bce3-33450cab8fcd","Type":"ContainerStarted","Data":"697aebe4c54170282f2900b6eb7950a2671c76c6eb51ac74def7ef20f0b63370"} Mar 18 17:42:53.256617 master-0 kubenswrapper[7553]: I0318 17:42:53.253945 7553 generic.go:334] "Generic (PLEG): container finished" podID="489dd872-39c3-4ce2-8dc1-9d0552b88616" containerID="a2e29b749bfbe09ff5972a0dffb8367afb6d9100abae8e59d66f807f2bb0aaac" exitCode=0 Mar 18 17:42:53.256617 master-0 kubenswrapper[7553]: I0318 
17:42:53.253985 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8485d" event={"ID":"489dd872-39c3-4ce2-8dc1-9d0552b88616","Type":"ContainerDied","Data":"a2e29b749bfbe09ff5972a0dffb8367afb6d9100abae8e59d66f807f2bb0aaac"} Mar 18 17:42:53.256617 master-0 kubenswrapper[7553]: I0318 17:42:53.254007 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8485d" event={"ID":"489dd872-39c3-4ce2-8dc1-9d0552b88616","Type":"ContainerStarted","Data":"726dac522b338193798e05019afcc3525452535e3149d4a25e33142fc811a586"} Mar 18 17:42:53.256617 master-0 kubenswrapper[7553]: I0318 17:42:53.254651 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt" Mar 18 17:42:53.278057 master-0 kubenswrapper[7553]: I0318 17:42:53.277926 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt" podStartSLOduration=2.277851781 podStartE2EDuration="2.277851781s" podCreationTimestamp="2026-03-18 17:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:42:53.267401741 +0000 UTC m=+63.413236414" watchObservedRunningTime="2026-03-18 17:42:53.277851781 +0000 UTC m=+63.423686454" Mar 18 17:42:53.595987 master-0 kubenswrapper[7553]: I0318 17:42:53.595823 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 18 17:42:53.597010 master-0 kubenswrapper[7553]: I0318 17:42:53.596988 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:42:53.600062 master-0 kubenswrapper[7553]: I0318 17:42:53.599956 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-cskqs" Mar 18 17:42:53.620262 master-0 kubenswrapper[7553]: I0318 17:42:53.620140 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 18 17:42:53.638845 master-0 kubenswrapper[7553]: I0318 17:42:53.638659 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6xmx4"] Mar 18 17:42:53.640842 master-0 kubenswrapper[7553]: I0318 17:42:53.640802 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:42:53.643936 master-0 kubenswrapper[7553]: I0318 17:42:53.643771 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kdvf8" Mar 18 17:42:53.659951 master-0 kubenswrapper[7553]: I0318 17:42:53.659808 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xmx4"] Mar 18 17:42:53.666894 master-0 kubenswrapper[7553]: I0318 17:42:53.666839 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/37bbec19-22b8-411c-901b-d89c92b0bd4d-var-lock\") pod \"installer-2-master-0\" (UID: \"37bbec19-22b8-411c-901b-d89c92b0bd4d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:42:53.667012 master-0 kubenswrapper[7553]: I0318 17:42:53.666935 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37bbec19-22b8-411c-901b-d89c92b0bd4d-kube-api-access\") pod \"installer-2-master-0\" (UID: 
\"37bbec19-22b8-411c-901b-d89c92b0bd4d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:42:53.667012 master-0 kubenswrapper[7553]: I0318 17:42:53.666957 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37bbec19-22b8-411c-901b-d89c92b0bd4d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"37bbec19-22b8-411c-901b-d89c92b0bd4d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:42:53.769896 master-0 kubenswrapper[7553]: I0318 17:42:53.769775 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/37bbec19-22b8-411c-901b-d89c92b0bd4d-var-lock\") pod \"installer-2-master-0\" (UID: \"37bbec19-22b8-411c-901b-d89c92b0bd4d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:42:53.770895 master-0 kubenswrapper[7553]: I0318 17:42:53.770592 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/37bbec19-22b8-411c-901b-d89c92b0bd4d-var-lock\") pod \"installer-2-master-0\" (UID: \"37bbec19-22b8-411c-901b-d89c92b0bd4d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:42:53.770895 master-0 kubenswrapper[7553]: I0318 17:42:53.770678 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427e5ce9-f4b3-4f12-bb77-2b13775aa334-catalog-content\") pod \"redhat-marketplace-6xmx4\" (UID: \"427e5ce9-f4b3-4f12-bb77-2b13775aa334\") " pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:42:53.770895 master-0 kubenswrapper[7553]: I0318 17:42:53.770806 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/427e5ce9-f4b3-4f12-bb77-2b13775aa334-utilities\") pod \"redhat-marketplace-6xmx4\" (UID: \"427e5ce9-f4b3-4f12-bb77-2b13775aa334\") " pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:42:53.770895 master-0 kubenswrapper[7553]: I0318 17:42:53.770857 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5jd4\" (UniqueName: \"kubernetes.io/projected/427e5ce9-f4b3-4f12-bb77-2b13775aa334-kube-api-access-z5jd4\") pod \"redhat-marketplace-6xmx4\" (UID: \"427e5ce9-f4b3-4f12-bb77-2b13775aa334\") " pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:42:53.771029 master-0 kubenswrapper[7553]: I0318 17:42:53.770940 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37bbec19-22b8-411c-901b-d89c92b0bd4d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"37bbec19-22b8-411c-901b-d89c92b0bd4d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:42:53.771029 master-0 kubenswrapper[7553]: I0318 17:42:53.771000 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37bbec19-22b8-411c-901b-d89c92b0bd4d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"37bbec19-22b8-411c-901b-d89c92b0bd4d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:42:53.771465 master-0 kubenswrapper[7553]: I0318 17:42:53.771181 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37bbec19-22b8-411c-901b-d89c92b0bd4d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"37bbec19-22b8-411c-901b-d89c92b0bd4d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:42:53.814853 master-0 kubenswrapper[7553]: I0318 17:42:53.814777 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37bbec19-22b8-411c-901b-d89c92b0bd4d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"37bbec19-22b8-411c-901b-d89c92b0bd4d\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:42:53.868237 master-0 kubenswrapper[7553]: I0318 17:42:53.867950 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jlj6j"] Mar 18 17:42:53.872409 master-0 kubenswrapper[7553]: I0318 17:42:53.872348 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5jd4\" (UniqueName: \"kubernetes.io/projected/427e5ce9-f4b3-4f12-bb77-2b13775aa334-kube-api-access-z5jd4\") pod \"redhat-marketplace-6xmx4\" (UID: \"427e5ce9-f4b3-4f12-bb77-2b13775aa334\") " pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:42:53.872409 master-0 kubenswrapper[7553]: I0318 17:42:53.872434 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427e5ce9-f4b3-4f12-bb77-2b13775aa334-catalog-content\") pod \"redhat-marketplace-6xmx4\" (UID: \"427e5ce9-f4b3-4f12-bb77-2b13775aa334\") " pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:42:53.872409 master-0 kubenswrapper[7553]: I0318 17:42:53.872480 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427e5ce9-f4b3-4f12-bb77-2b13775aa334-utilities\") pod \"redhat-marketplace-6xmx4\" (UID: \"427e5ce9-f4b3-4f12-bb77-2b13775aa334\") " pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:42:53.873034 master-0 kubenswrapper[7553]: I0318 17:42:53.872946 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427e5ce9-f4b3-4f12-bb77-2b13775aa334-utilities\") pod \"redhat-marketplace-6xmx4\" (UID: \"427e5ce9-f4b3-4f12-bb77-2b13775aa334\") " 
pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:42:53.874681 master-0 kubenswrapper[7553]: I0318 17:42:53.874661 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427e5ce9-f4b3-4f12-bb77-2b13775aa334-catalog-content\") pod \"redhat-marketplace-6xmx4\" (UID: \"427e5ce9-f4b3-4f12-bb77-2b13775aa334\") " pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:42:53.893226 master-0 kubenswrapper[7553]: I0318 17:42:53.893137 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5jd4\" (UniqueName: \"kubernetes.io/projected/427e5ce9-f4b3-4f12-bb77-2b13775aa334-kube-api-access-z5jd4\") pod \"redhat-marketplace-6xmx4\" (UID: \"427e5ce9-f4b3-4f12-bb77-2b13775aa334\") " pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:42:53.929173 master-0 kubenswrapper[7553]: I0318 17:42:53.929107 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:42:53.966365 master-0 kubenswrapper[7553]: I0318 17:42:53.965901 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:42:54.034962 master-0 kubenswrapper[7553]: I0318 17:42:54.034901 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bgdql"] Mar 18 17:42:54.036803 master-0 kubenswrapper[7553]: I0318 17:42:54.036197 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:42:54.043795 master-0 kubenswrapper[7553]: I0318 17:42:54.041993 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-btlbk" Mar 18 17:42:54.047548 master-0 kubenswrapper[7553]: I0318 17:42:54.047429 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bgdql"] Mar 18 17:42:54.182001 master-0 kubenswrapper[7553]: I0318 17:42:54.181741 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4460d3d3-c55f-4f1c-a623-e3feccf937bb-utilities\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:42:54.182001 master-0 kubenswrapper[7553]: I0318 17:42:54.181830 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4460d3d3-c55f-4f1c-a623-e3feccf937bb-catalog-content\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:42:54.182001 master-0 kubenswrapper[7553]: I0318 17:42:54.181910 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tskm\" (UniqueName: \"kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:42:54.238826 master-0 kubenswrapper[7553]: E0318 17:42:54.238788 7553 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/etcd-pod.yaml\": /etc/kubernetes/manifests/etcd-pod.yaml: couldn't parse as pod(Object 
'Kind' is missing in 'null'), please check config file" Mar 18 17:42:54.239434 master-0 kubenswrapper[7553]: I0318 17:42:54.238865 7553 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 18 17:42:54.239434 master-0 kubenswrapper[7553]: I0318 17:42:54.239167 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" containerID="cri-o://99dc9cff4665f248f4ae68c96db3198a4bcd4d7b5dbfb367bdf3864e44ad29fc" gracePeriod=30 Mar 18 17:42:54.240385 master-0 kubenswrapper[7553]: I0318 17:42:54.239422 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" containerID="cri-o://1d30b6f37f4ad53c3294bea48dd4a0769d42ea2d80a5395f6ef8c16034150f6c" gracePeriod=30 Mar 18 17:42:54.244326 master-0 kubenswrapper[7553]: I0318 17:42:54.244113 7553 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 18 17:42:54.244767 master-0 kubenswrapper[7553]: E0318 17:42:54.244545 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 18 17:42:54.244767 master-0 kubenswrapper[7553]: I0318 17:42:54.244563 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 18 17:42:54.244767 master-0 kubenswrapper[7553]: E0318 17:42:54.244583 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 18 17:42:54.244767 master-0 kubenswrapper[7553]: I0318 17:42:54.244589 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 18 17:42:54.244767 master-0 kubenswrapper[7553]: I0318 17:42:54.244678 7553 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 18 17:42:54.244767 master-0 kubenswrapper[7553]: I0318 17:42:54.244687 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 18 17:42:54.246744 master-0 kubenswrapper[7553]: I0318 17:42:54.246573 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 17:42:54.283008 master-0 kubenswrapper[7553]: I0318 17:42:54.282947 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tskm\" (UniqueName: \"kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:42:54.284819 master-0 kubenswrapper[7553]: I0318 17:42:54.283256 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4460d3d3-c55f-4f1c-a623-e3feccf937bb-utilities\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:42:54.284819 master-0 kubenswrapper[7553]: I0318 17:42:54.283403 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4460d3d3-c55f-4f1c-a623-e3feccf937bb-catalog-content\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:42:54.284819 master-0 kubenswrapper[7553]: I0318 17:42:54.284743 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4460d3d3-c55f-4f1c-a623-e3feccf937bb-utilities\") pod \"redhat-operators-bgdql\" 
(UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:42:54.285305 master-0 kubenswrapper[7553]: I0318 17:42:54.285105 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4460d3d3-c55f-4f1c-a623-e3feccf937bb-catalog-content\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:42:54.386704 master-0 kubenswrapper[7553]: I0318 17:42:54.386599 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:42:54.386704 master-0 kubenswrapper[7553]: I0318 17:42:54.386681 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:42:54.386972 master-0 kubenswrapper[7553]: I0318 17:42:54.386723 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:42:54.386972 master-0 kubenswrapper[7553]: I0318 17:42:54.386748 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " 
pod="openshift-etcd/etcd-master-0" Mar 18 17:42:54.386972 master-0 kubenswrapper[7553]: I0318 17:42:54.386770 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:42:54.386972 master-0 kubenswrapper[7553]: I0318 17:42:54.386932 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:42:54.489978 master-0 kubenswrapper[7553]: I0318 17:42:54.489851 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:42:54.489978 master-0 kubenswrapper[7553]: I0318 17:42:54.489911 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:42:54.489978 master-0 kubenswrapper[7553]: I0318 17:42:54.489936 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:42:54.490233 master-0 kubenswrapper[7553]: I0318 17:42:54.490005 7553 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 17:42:54.490233 master-0 kubenswrapper[7553]: I0318 17:42:54.490054 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 17:42:54.490233 master-0 kubenswrapper[7553]: I0318 17:42:54.490045 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 17:42:54.490233 master-0 kubenswrapper[7553]: I0318 17:42:54.490151 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 17:42:54.490233 master-0 kubenswrapper[7553]: I0318 17:42:54.490181 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 17:42:54.490615 master-0 kubenswrapper[7553]: I0318 17:42:54.490310 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 17:42:54.490615 master-0 kubenswrapper[7553]: I0318 17:42:54.490327 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 17:42:54.490615 master-0 kubenswrapper[7553]: I0318 17:42:54.490402 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 17:42:54.490615 master-0 kubenswrapper[7553]: I0318 17:42:54.490534 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 17:43:00.339592 master-0 kubenswrapper[7553]: I0318 17:43:00.339546 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_22e8652f-ee18-4cff-bccb-ef413456685f/installer/0.log"
Mar 18 17:43:00.340149 master-0 kubenswrapper[7553]: I0318 17:43:00.339621 7553 generic.go:334] "Generic (PLEG): container finished" podID="22e8652f-ee18-4cff-bccb-ef413456685f" containerID="e0ce789b272d7ec4bd7aac94ac37ecdd2765bd0434e740bbb25752a48e70911e" exitCode=1
Mar 18 17:43:00.340149 master-0 kubenswrapper[7553]: I0318 17:43:00.339670 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"22e8652f-ee18-4cff-bccb-ef413456685f","Type":"ContainerDied","Data":"e0ce789b272d7ec4bd7aac94ac37ecdd2765bd0434e740bbb25752a48e70911e"}
Mar 18 17:43:07.283012 master-0 kubenswrapper[7553]: E0318 17:43:07.282905 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 18 17:43:07.284079 master-0 kubenswrapper[7553]: I0318 17:43:07.283904 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 18 17:43:09.562852 master-0 kubenswrapper[7553]: I0318 17:43:09.562753 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 18 17:43:12.212069 master-0 kubenswrapper[7553]: E0318 17:43:12.211998 7553 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 17:43:12.421360 master-0 kubenswrapper[7553]: I0318 17:43:12.413858 7553 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e" exitCode=1
Mar 18 17:43:12.421360 master-0 kubenswrapper[7553]: I0318 17:43:12.413967 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e"}
Mar 18 17:43:12.421360 master-0 kubenswrapper[7553]: I0318 17:43:12.414809 7553 scope.go:117] "RemoveContainer" containerID="f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e"
Mar 18 17:43:12.421360 master-0 kubenswrapper[7553]: I0318 17:43:12.415607 7553 generic.go:334] "Generic (PLEG): container finished" podID="08451d5b-cf84-45a1-a16d-7ce10a83a6e7" containerID="5314ec05fb03281eaddcd24c27457c3fda717a46b41bfa95e18bf5f7470daeb4" exitCode=0
Mar 18 17:43:12.421360 master-0 kubenswrapper[7553]: I0318 17:43:12.415639 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"08451d5b-cf84-45a1-a16d-7ce10a83a6e7","Type":"ContainerDied","Data":"5314ec05fb03281eaddcd24c27457c3fda717a46b41bfa95e18bf5f7470daeb4"}
Mar 18 17:43:12.421360 master-0 kubenswrapper[7553]: I0318 17:43:12.417792 7553 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="774c63dac090e52a2318d2a44e73b16fc328b4dc2d265dcfd10522ed7532c288" exitCode=1
Mar 18 17:43:12.421360 master-0 kubenswrapper[7553]: I0318 17:43:12.417875 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerDied","Data":"774c63dac090e52a2318d2a44e73b16fc328b4dc2d265dcfd10522ed7532c288"}
Mar 18 17:43:12.421360 master-0 kubenswrapper[7553]: I0318 17:43:12.418468 7553 scope.go:117] "RemoveContainer" containerID="774c63dac090e52a2318d2a44e73b16fc328b4dc2d265dcfd10522ed7532c288"
Mar 18 17:43:12.421360 master-0 kubenswrapper[7553]: I0318 17:43:12.419714 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_22e8652f-ee18-4cff-bccb-ef413456685f/installer/0.log"
Mar 18 17:43:12.421360 master-0 kubenswrapper[7553]: I0318 17:43:12.419750 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"22e8652f-ee18-4cff-bccb-ef413456685f","Type":"ContainerDied","Data":"d0d3e69906c0ae9dcd09afc3f088fea05034a3ae07c3604def2e9ba4e74187c1"}
Mar 18 17:43:12.421360 master-0 kubenswrapper[7553]: I0318 17:43:12.419768 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0d3e69906c0ae9dcd09afc3f088fea05034a3ae07c3604def2e9ba4e74187c1"
Mar 18 17:43:12.432508 master-0 kubenswrapper[7553]: W0318 17:43:12.432435 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-d18781496f57527b602db82fd24e8e48c659d1884b30f552256bccafb22c1b55 WatchSource:0}: Error finding container d18781496f57527b602db82fd24e8e48c659d1884b30f552256bccafb22c1b55: Status 404 returned error can't find the container with id d18781496f57527b602db82fd24e8e48c659d1884b30f552256bccafb22c1b55
Mar 18 17:43:12.471688 master-0 kubenswrapper[7553]: I0318 17:43:12.471635 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_22e8652f-ee18-4cff-bccb-ef413456685f/installer/0.log"
Mar 18 17:43:12.471869 master-0 kubenswrapper[7553]: I0318 17:43:12.471722 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 17:43:12.609800 master-0 kubenswrapper[7553]: I0318 17:43:12.609750 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22e8652f-ee18-4cff-bccb-ef413456685f-kubelet-dir\") pod \"22e8652f-ee18-4cff-bccb-ef413456685f\" (UID: \"22e8652f-ee18-4cff-bccb-ef413456685f\") "
Mar 18 17:43:12.609800 master-0 kubenswrapper[7553]: I0318 17:43:12.609801 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22e8652f-ee18-4cff-bccb-ef413456685f-kube-api-access\") pod \"22e8652f-ee18-4cff-bccb-ef413456685f\" (UID: \"22e8652f-ee18-4cff-bccb-ef413456685f\") "
Mar 18 17:43:12.610071 master-0 kubenswrapper[7553]: I0318 17:43:12.609838 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22e8652f-ee18-4cff-bccb-ef413456685f-var-lock\") pod \"22e8652f-ee18-4cff-bccb-ef413456685f\" (UID: \"22e8652f-ee18-4cff-bccb-ef413456685f\") "
Mar 18 17:43:12.610071 master-0 kubenswrapper[7553]: I0318 17:43:12.609899 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22e8652f-ee18-4cff-bccb-ef413456685f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "22e8652f-ee18-4cff-bccb-ef413456685f" (UID: "22e8652f-ee18-4cff-bccb-ef413456685f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 17:43:12.610071 master-0 kubenswrapper[7553]: I0318 17:43:12.609953 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22e8652f-ee18-4cff-bccb-ef413456685f-var-lock" (OuterVolumeSpecName: "var-lock") pod "22e8652f-ee18-4cff-bccb-ef413456685f" (UID: "22e8652f-ee18-4cff-bccb-ef413456685f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 17:43:12.610282 master-0 kubenswrapper[7553]: I0318 17:43:12.610242 7553 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22e8652f-ee18-4cff-bccb-ef413456685f-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:12.610282 master-0 kubenswrapper[7553]: I0318 17:43:12.610265 7553 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/22e8652f-ee18-4cff-bccb-ef413456685f-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:12.624148 master-0 kubenswrapper[7553]: I0318 17:43:12.624093 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22e8652f-ee18-4cff-bccb-ef413456685f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "22e8652f-ee18-4cff-bccb-ef413456685f" (UID: "22e8652f-ee18-4cff-bccb-ef413456685f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 17:43:12.711478 master-0 kubenswrapper[7553]: I0318 17:43:12.711358 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22e8652f-ee18-4cff-bccb-ef413456685f-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:13.133854 master-0 kubenswrapper[7553]: I0318 17:43:13.133816 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4"
Mar 18 17:43:13.428649 master-0 kubenswrapper[7553]: I0318 17:43:13.428596 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"39e81d7022f76aa50f44926362dbcc435bd580e0e562220512ebed69c23461e5"}
Mar 18 17:43:13.430864 master-0 kubenswrapper[7553]: I0318 17:43:13.430823 7553 generic.go:334] "Generic (PLEG): container finished" podID="dc110414-3a6b-474c-bce3-33450cab8fcd" containerID="2718b408b0fd0508d3bbb65645adb3096e6a30b7fddd2e6d5a0da288259af5b6" exitCode=0
Mar 18 17:43:13.431099 master-0 kubenswrapper[7553]: I0318 17:43:13.430873 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vbglp" event={"ID":"dc110414-3a6b-474c-bce3-33450cab8fcd","Type":"ContainerDied","Data":"2718b408b0fd0508d3bbb65645adb3096e6a30b7fddd2e6d5a0da288259af5b6"}
Mar 18 17:43:13.434474 master-0 kubenswrapper[7553]: I0318 17:43:13.434390 7553 generic.go:334] "Generic (PLEG): container finished" podID="7a9075c3-bb4f-4559-8454-5e097f334957" containerID="7fb5fd11d6048f6029e82f09770801d13ffe2b0bf670b25a592c84f63528f56f" exitCode=0
Mar 18 17:43:13.434587 master-0 kubenswrapper[7553]: I0318 17:43:13.434493 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fg8h6" event={"ID":"7a9075c3-bb4f-4559-8454-5e097f334957","Type":"ContainerDied","Data":"7fb5fd11d6048f6029e82f09770801d13ffe2b0bf670b25a592c84f63528f56f"}
Mar 18 17:43:13.437400 master-0 kubenswrapper[7553]: I0318 17:43:13.437366 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91"}
Mar 18 17:43:13.439566 master-0 kubenswrapper[7553]: I0318 17:43:13.439540 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_4f688df1-3bfc-412e-b311-f9f761a0b00a/installer/0.log"
Mar 18 17:43:13.439890 master-0 kubenswrapper[7553]: I0318 17:43:13.439825 7553 generic.go:334] "Generic (PLEG): container finished" podID="4f688df1-3bfc-412e-b311-f9f761a0b00a" containerID="fdeef07d8840260931a9408a0850cec7ff93ac6938603492d86d93449b1926fe" exitCode=1
Mar 18 17:43:13.440196 master-0 kubenswrapper[7553]: I0318 17:43:13.439929 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"4f688df1-3bfc-412e-b311-f9f761a0b00a","Type":"ContainerDied","Data":"fdeef07d8840260931a9408a0850cec7ff93ac6938603492d86d93449b1926fe"}
Mar 18 17:43:13.444072 master-0 kubenswrapper[7553]: I0318 17:43:13.444027 7553 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815" exitCode=0
Mar 18 17:43:13.444355 master-0 kubenswrapper[7553]: I0318 17:43:13.444095 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815"}
Mar 18 17:43:13.444355 master-0 kubenswrapper[7553]: I0318 17:43:13.444176 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"d18781496f57527b602db82fd24e8e48c659d1884b30f552256bccafb22c1b55"}
Mar 18 17:43:13.446976 master-0 kubenswrapper[7553]: I0318 17:43:13.446901 7553 generic.go:334] "Generic (PLEG): container finished" podID="f7203a5f-0f67-48ca-a12b-be3b0ce7cbac" containerID="51459af323032020b310b969c7f232ca0d879ba6054f5b26cbdbbbbcafb3c3e8" exitCode=0
Mar 18 17:43:13.447216 master-0 kubenswrapper[7553]: I0318 17:43:13.446992 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgw2n" event={"ID":"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac","Type":"ContainerDied","Data":"51459af323032020b310b969c7f232ca0d879ba6054f5b26cbdbbbbcafb3c3e8"}
Mar 18 17:43:13.451427 master-0 kubenswrapper[7553]: I0318 17:43:13.451344 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlj6j" event={"ID":"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6","Type":"ContainerStarted","Data":"85c6d7b26f88c24833f482fd594353d9598a0fe3678db3d2c1ca35bacfd7b364"}
Mar 18 17:43:13.451620 master-0 kubenswrapper[7553]: I0318 17:43:13.451565 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jlj6j" podUID="e7a6e8f4-26e0-454c-bfbb-f97e72636bf6" containerName="extract-content" containerID="cri-o://85c6d7b26f88c24833f482fd594353d9598a0fe3678db3d2c1ca35bacfd7b364" gracePeriod=2
Mar 18 17:43:13.460353 master-0 kubenswrapper[7553]: I0318 17:43:13.455715 7553 generic.go:334] "Generic (PLEG): container finished" podID="35595774-da4b-499c-bd6e-1ae5af144833" containerID="54ccd2c41e0b08d95c78caff7860957f58b42a96e1f09cd7115ac27b129f5797" exitCode=0
Mar 18 17:43:13.460353 master-0 kubenswrapper[7553]: I0318 17:43:13.455996 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4kft" event={"ID":"35595774-da4b-499c-bd6e-1ae5af144833","Type":"ContainerDied","Data":"54ccd2c41e0b08d95c78caff7860957f58b42a96e1f09cd7115ac27b129f5797"}
Mar 18 17:43:13.470134 master-0 kubenswrapper[7553]: I0318 17:43:13.470073 7553 generic.go:334] "Generic (PLEG): container finished" podID="489dd872-39c3-4ce2-8dc1-9d0552b88616" containerID="70935598889d7ee02bf1833aebf4130f2e4fa22f2be159d783a76ae3260c0ec7" exitCode=0
Mar 18 17:43:13.470463 master-0 kubenswrapper[7553]: I0318 17:43:13.470413 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8485d" event={"ID":"489dd872-39c3-4ce2-8dc1-9d0552b88616","Type":"ContainerDied","Data":"70935598889d7ee02bf1833aebf4130f2e4fa22f2be159d783a76ae3260c0ec7"}
Mar 18 17:43:13.471229 master-0 kubenswrapper[7553]: I0318 17:43:13.470881 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 18 17:43:13.565804 master-0 kubenswrapper[7553]: I0318 17:43:13.562654 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_4f688df1-3bfc-412e-b311-f9f761a0b00a/installer/0.log"
Mar 18 17:43:13.565804 master-0 kubenswrapper[7553]: I0318 17:43:13.562783 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 18 17:43:13.727076 master-0 kubenswrapper[7553]: I0318 17:43:13.727009 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f688df1-3bfc-412e-b311-f9f761a0b00a-kube-api-access\") pod \"4f688df1-3bfc-412e-b311-f9f761a0b00a\" (UID: \"4f688df1-3bfc-412e-b311-f9f761a0b00a\") "
Mar 18 17:43:13.727824 master-0 kubenswrapper[7553]: I0318 17:43:13.727092 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4f688df1-3bfc-412e-b311-f9f761a0b00a-var-lock\") pod \"4f688df1-3bfc-412e-b311-f9f761a0b00a\" (UID: \"4f688df1-3bfc-412e-b311-f9f761a0b00a\") "
Mar 18 17:43:13.727824 master-0 kubenswrapper[7553]: I0318 17:43:13.727187 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f688df1-3bfc-412e-b311-f9f761a0b00a-kubelet-dir\") pod \"4f688df1-3bfc-412e-b311-f9f761a0b00a\" (UID: \"4f688df1-3bfc-412e-b311-f9f761a0b00a\") "
Mar 18 17:43:13.727824 master-0 kubenswrapper[7553]: I0318 17:43:13.727557 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f688df1-3bfc-412e-b311-f9f761a0b00a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4f688df1-3bfc-412e-b311-f9f761a0b00a" (UID: "4f688df1-3bfc-412e-b311-f9f761a0b00a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 17:43:13.727824 master-0 kubenswrapper[7553]: I0318 17:43:13.727608 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f688df1-3bfc-412e-b311-f9f761a0b00a-var-lock" (OuterVolumeSpecName: "var-lock") pod "4f688df1-3bfc-412e-b311-f9f761a0b00a" (UID: "4f688df1-3bfc-412e-b311-f9f761a0b00a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 17:43:13.732091 master-0 kubenswrapper[7553]: I0318 17:43:13.732019 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f688df1-3bfc-412e-b311-f9f761a0b00a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4f688df1-3bfc-412e-b311-f9f761a0b00a" (UID: "4f688df1-3bfc-412e-b311-f9f761a0b00a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 17:43:13.734349 master-0 kubenswrapper[7553]: I0318 17:43:13.734243 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fg8h6"
Mar 18 17:43:13.830192 master-0 kubenswrapper[7553]: I0318 17:43:13.829464 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f688df1-3bfc-412e-b311-f9f761a0b00a-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:13.830192 master-0 kubenswrapper[7553]: I0318 17:43:13.829516 7553 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4f688df1-3bfc-412e-b311-f9f761a0b00a-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:13.830192 master-0 kubenswrapper[7553]: I0318 17:43:13.829539 7553 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f688df1-3bfc-412e-b311-f9f761a0b00a-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:13.859144 master-0 kubenswrapper[7553]: I0318 17:43:13.859015 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgw2n"
Mar 18 17:43:13.864064 master-0 kubenswrapper[7553]: I0318 17:43:13.863704 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4kft"
Mar 18 17:43:13.930679 master-0 kubenswrapper[7553]: I0318 17:43:13.930346 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a9075c3-bb4f-4559-8454-5e097f334957-catalog-content\") pod \"7a9075c3-bb4f-4559-8454-5e097f334957\" (UID: \"7a9075c3-bb4f-4559-8454-5e097f334957\") "
Mar 18 17:43:13.930679 master-0 kubenswrapper[7553]: I0318 17:43:13.930463 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a9075c3-bb4f-4559-8454-5e097f334957-utilities\") pod \"7a9075c3-bb4f-4559-8454-5e097f334957\" (UID: \"7a9075c3-bb4f-4559-8454-5e097f334957\") "
Mar 18 17:43:13.930679 master-0 kubenswrapper[7553]: I0318 17:43:13.930570 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvpm7\" (UniqueName: \"kubernetes.io/projected/7a9075c3-bb4f-4559-8454-5e097f334957-kube-api-access-kvpm7\") pod \"7a9075c3-bb4f-4559-8454-5e097f334957\" (UID: \"7a9075c3-bb4f-4559-8454-5e097f334957\") "
Mar 18 17:43:13.933334 master-0 kubenswrapper[7553]: I0318 17:43:13.933231 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a9075c3-bb4f-4559-8454-5e097f334957-utilities" (OuterVolumeSpecName: "utilities") pod "7a9075c3-bb4f-4559-8454-5e097f334957" (UID: "7a9075c3-bb4f-4559-8454-5e097f334957"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 17:43:13.938460 master-0 kubenswrapper[7553]: I0318 17:43:13.938390 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a9075c3-bb4f-4559-8454-5e097f334957-kube-api-access-kvpm7" (OuterVolumeSpecName: "kube-api-access-kvpm7") pod "7a9075c3-bb4f-4559-8454-5e097f334957" (UID: "7a9075c3-bb4f-4559-8454-5e097f334957"). InnerVolumeSpecName "kube-api-access-kvpm7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 17:43:14.003084 master-0 kubenswrapper[7553]: I0318 17:43:14.003025 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a9075c3-bb4f-4559-8454-5e097f334957-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a9075c3-bb4f-4559-8454-5e097f334957" (UID: "7a9075c3-bb4f-4559-8454-5e097f334957"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 17:43:14.003957 master-0 kubenswrapper[7553]: I0318 17:43:14.003928 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 18 17:43:14.012199 master-0 kubenswrapper[7553]: I0318 17:43:14.012170 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jlj6j_e7a6e8f4-26e0-454c-bfbb-f97e72636bf6/extract-content/0.log"
Mar 18 17:43:14.012977 master-0 kubenswrapper[7553]: I0318 17:43:14.012949 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jlj6j"
Mar 18 17:43:14.032063 master-0 kubenswrapper[7553]: I0318 17:43:14.032045 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35595774-da4b-499c-bd6e-1ae5af144833-catalog-content\") pod \"35595774-da4b-499c-bd6e-1ae5af144833\" (UID: \"35595774-da4b-499c-bd6e-1ae5af144833\") "
Mar 18 17:43:14.032193 master-0 kubenswrapper[7553]: I0318 17:43:14.032180 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-catalog-content\") pod \"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac\" (UID: \"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac\") "
Mar 18 17:43:14.032313 master-0 kubenswrapper[7553]: I0318 17:43:14.032299 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-utilities\") pod \"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac\" (UID: \"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac\") "
Mar 18 17:43:14.032456 master-0 kubenswrapper[7553]: I0318 17:43:14.032435 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnrk4\" (UniqueName: \"kubernetes.io/projected/35595774-da4b-499c-bd6e-1ae5af144833-kube-api-access-jnrk4\") pod \"35595774-da4b-499c-bd6e-1ae5af144833\" (UID: \"35595774-da4b-499c-bd6e-1ae5af144833\") "
Mar 18 17:43:14.032684 master-0 kubenswrapper[7553]: I0318 17:43:14.032656 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35595774-da4b-499c-bd6e-1ae5af144833-utilities\") pod \"35595774-da4b-499c-bd6e-1ae5af144833\" (UID: \"35595774-da4b-499c-bd6e-1ae5af144833\") "
Mar 18 17:43:14.032752 master-0 kubenswrapper[7553]: I0318 17:43:14.032696 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-catalog-content\") pod \"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6\" (UID: \"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6\") "
Mar 18 17:43:14.032752 master-0 kubenswrapper[7553]: I0318 17:43:14.032727 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njznj\" (UniqueName: \"kubernetes.io/projected/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-kube-api-access-njznj\") pod \"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac\" (UID: \"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac\") "
Mar 18 17:43:14.032944 master-0 kubenswrapper[7553]: I0318 17:43:14.032920 7553 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a9075c3-bb4f-4559-8454-5e097f334957-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:14.032944 master-0 kubenswrapper[7553]: I0318 17:43:14.032942 7553 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a9075c3-bb4f-4559-8454-5e097f334957-utilities\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:14.033046 master-0 kubenswrapper[7553]: I0318 17:43:14.032955 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvpm7\" (UniqueName: \"kubernetes.io/projected/7a9075c3-bb4f-4559-8454-5e097f334957-kube-api-access-kvpm7\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:14.033553 master-0 kubenswrapper[7553]: I0318 17:43:14.033511 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35595774-da4b-499c-bd6e-1ae5af144833-utilities" (OuterVolumeSpecName: "utilities") pod "35595774-da4b-499c-bd6e-1ae5af144833" (UID: "35595774-da4b-499c-bd6e-1ae5af144833"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 17:43:14.034000 master-0 kubenswrapper[7553]: I0318 17:43:14.033929 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-utilities" (OuterVolumeSpecName: "utilities") pod "f7203a5f-0f67-48ca-a12b-be3b0ce7cbac" (UID: "f7203a5f-0f67-48ca-a12b-be3b0ce7cbac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 17:43:14.036049 master-0 kubenswrapper[7553]: I0318 17:43:14.036029 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35595774-da4b-499c-bd6e-1ae5af144833-kube-api-access-jnrk4" (OuterVolumeSpecName: "kube-api-access-jnrk4") pod "35595774-da4b-499c-bd6e-1ae5af144833" (UID: "35595774-da4b-499c-bd6e-1ae5af144833"). InnerVolumeSpecName "kube-api-access-jnrk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 17:43:14.038420 master-0 kubenswrapper[7553]: I0318 17:43:14.038392 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-kube-api-access-njznj" (OuterVolumeSpecName: "kube-api-access-njznj") pod "f7203a5f-0f67-48ca-a12b-be3b0ce7cbac" (UID: "f7203a5f-0f67-48ca-a12b-be3b0ce7cbac"). InnerVolumeSpecName "kube-api-access-njznj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 17:43:14.107776 master-0 kubenswrapper[7553]: I0318 17:43:14.107630 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35595774-da4b-499c-bd6e-1ae5af144833-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35595774-da4b-499c-bd6e-1ae5af144833" (UID: "35595774-da4b-499c-bd6e-1ae5af144833"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 17:43:14.118365 master-0 kubenswrapper[7553]: I0318 17:43:14.118296 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7203a5f-0f67-48ca-a12b-be3b0ce7cbac" (UID: "f7203a5f-0f67-48ca-a12b-be3b0ce7cbac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 17:43:14.133570 master-0 kubenswrapper[7553]: I0318 17:43:14.133459 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-kubelet-dir\") pod \"08451d5b-cf84-45a1-a16d-7ce10a83a6e7\" (UID: \"08451d5b-cf84-45a1-a16d-7ce10a83a6e7\") "
Mar 18 17:43:14.133621 master-0 kubenswrapper[7553]: I0318 17:43:14.133545 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-utilities\") pod \"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6\" (UID: \"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6\") "
Mar 18 17:43:14.133658 master-0 kubenswrapper[7553]: I0318 17:43:14.133612 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "08451d5b-cf84-45a1-a16d-7ce10a83a6e7" (UID: "08451d5b-cf84-45a1-a16d-7ce10a83a6e7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 17:43:14.134787 master-0 kubenswrapper[7553]: I0318 17:43:14.134741 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-utilities" (OuterVolumeSpecName: "utilities") pod "e7a6e8f4-26e0-454c-bfbb-f97e72636bf6" (UID: "e7a6e8f4-26e0-454c-bfbb-f97e72636bf6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 17:43:14.135054 master-0 kubenswrapper[7553]: I0318 17:43:14.135020 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-kube-api-access\") pod \"08451d5b-cf84-45a1-a16d-7ce10a83a6e7\" (UID: \"08451d5b-cf84-45a1-a16d-7ce10a83a6e7\") "
Mar 18 17:43:14.135105 master-0 kubenswrapper[7553]: I0318 17:43:14.135071 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-var-lock\") pod \"08451d5b-cf84-45a1-a16d-7ce10a83a6e7\" (UID: \"08451d5b-cf84-45a1-a16d-7ce10a83a6e7\") "
Mar 18 17:43:14.135140 master-0 kubenswrapper[7553]: I0318 17:43:14.135113 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llblv\" (UniqueName: \"kubernetes.io/projected/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-kube-api-access-llblv\") pod \"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6\" (UID: \"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6\") "
Mar 18 17:43:14.135328 master-0 kubenswrapper[7553]: I0318 17:43:14.135245 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-var-lock" (OuterVolumeSpecName: "var-lock") pod "08451d5b-cf84-45a1-a16d-7ce10a83a6e7" (UID: "08451d5b-cf84-45a1-a16d-7ce10a83a6e7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 17:43:14.135502 master-0 kubenswrapper[7553]: I0318 17:43:14.135462 7553 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:14.135571 master-0 kubenswrapper[7553]: I0318 17:43:14.135504 7553 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-utilities\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:14.135571 master-0 kubenswrapper[7553]: I0318 17:43:14.135523 7553 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:14.135571 master-0 kubenswrapper[7553]: I0318 17:43:14.135539 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnrk4\" (UniqueName: \"kubernetes.io/projected/35595774-da4b-499c-bd6e-1ae5af144833-kube-api-access-jnrk4\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:14.135571 master-0 kubenswrapper[7553]: I0318 17:43:14.135555 7553 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-utilities\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:14.135571 master-0 kubenswrapper[7553]: I0318 17:43:14.135568 7553 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35595774-da4b-499c-bd6e-1ae5af144833-utilities\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:14.135780 master-0 kubenswrapper[7553]: I0318 17:43:14.135582 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njznj\" (UniqueName: \"kubernetes.io/projected/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac-kube-api-access-njznj\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:14.135780 master-0 kubenswrapper[7553]: I0318 17:43:14.135596 7553 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:14.135780 master-0 kubenswrapper[7553]: I0318 17:43:14.135610 7553 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35595774-da4b-499c-bd6e-1ae5af144833-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:14.138771 master-0 kubenswrapper[7553]: I0318 17:43:14.138717 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-kube-api-access-llblv" (OuterVolumeSpecName: "kube-api-access-llblv") pod "e7a6e8f4-26e0-454c-bfbb-f97e72636bf6" (UID: "e7a6e8f4-26e0-454c-bfbb-f97e72636bf6"). InnerVolumeSpecName "kube-api-access-llblv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 17:43:14.141724 master-0 kubenswrapper[7553]: I0318 17:43:14.141679 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "08451d5b-cf84-45a1-a16d-7ce10a83a6e7" (UID: "08451d5b-cf84-45a1-a16d-7ce10a83a6e7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 17:43:14.260568 master-0 kubenswrapper[7553]: I0318 17:43:14.260495 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08451d5b-cf84-45a1-a16d-7ce10a83a6e7-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:14.270036 master-0 kubenswrapper[7553]: I0318 17:43:14.260544 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llblv\" (UniqueName: \"kubernetes.io/projected/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-kube-api-access-llblv\") on node \"master-0\" DevicePath \"\""
Mar 18 17:43:14.270036 master-0 kubenswrapper[7553]: I0318 17:43:14.264763 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7a6e8f4-26e0-454c-bfbb-f97e72636bf6" (UID: "e7a6e8f4-26e0-454c-bfbb-f97e72636bf6"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 17:43:14.362846 master-0 kubenswrapper[7553]: I0318 17:43:14.362692 7553 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 18 17:43:14.477992 master-0 kubenswrapper[7553]: I0318 17:43:14.477898 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vbglp" event={"ID":"dc110414-3a6b-474c-bce3-33450cab8fcd","Type":"ContainerStarted","Data":"fddcd4e9e307b1fbc0d2efb6241ca25a5a5753c7419878965f58c507209763f5"} Mar 18 17:43:14.479361 master-0 kubenswrapper[7553]: I0318 17:43:14.479309 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_4f688df1-3bfc-412e-b311-f9f761a0b00a/installer/0.log" Mar 18 17:43:14.479494 master-0 kubenswrapper[7553]: I0318 17:43:14.479443 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"4f688df1-3bfc-412e-b311-f9f761a0b00a","Type":"ContainerDied","Data":"8b2f45d6c107abfb552477dd96d792756dec17de0e0140f60d8c6b31c6fa4d1e"} Mar 18 17:43:14.479546 master-0 kubenswrapper[7553]: I0318 17:43:14.479523 7553 scope.go:117] "RemoveContainer" containerID="fdeef07d8840260931a9408a0850cec7ff93ac6938603492d86d93449b1926fe" Mar 18 17:43:14.479646 master-0 kubenswrapper[7553]: I0318 17:43:14.479521 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 17:43:14.483288 master-0 kubenswrapper[7553]: I0318 17:43:14.483223 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8485d" event={"ID":"489dd872-39c3-4ce2-8dc1-9d0552b88616","Type":"ContainerStarted","Data":"9182ec127f2c2b427136e91020c57787e72c6e99cf4058ba84a5d63ab20d20d9"} Mar 18 17:43:14.485996 master-0 kubenswrapper[7553]: I0318 17:43:14.485955 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgw2n" event={"ID":"f7203a5f-0f67-48ca-a12b-be3b0ce7cbac","Type":"ContainerDied","Data":"5e886fe6be4c394b26355f867117ac224f8e36a7b3550590d5568700c659bdf2"} Mar 18 17:43:14.486085 master-0 kubenswrapper[7553]: I0318 17:43:14.486057 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgw2n" Mar 18 17:43:14.493979 master-0 kubenswrapper[7553]: I0318 17:43:14.493924 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"08451d5b-cf84-45a1-a16d-7ce10a83a6e7","Type":"ContainerDied","Data":"ee60fb39e538f57e3a2c9cf050408fd1ce812a3cd024c1de0ff7127a4236fd69"} Mar 18 17:43:14.493979 master-0 kubenswrapper[7553]: I0318 17:43:14.493970 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee60fb39e538f57e3a2c9cf050408fd1ce812a3cd024c1de0ff7127a4236fd69" Mar 18 17:43:14.494130 master-0 kubenswrapper[7553]: I0318 17:43:14.494068 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 17:43:14.497852 master-0 kubenswrapper[7553]: I0318 17:43:14.497802 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-jlj6j_e7a6e8f4-26e0-454c-bfbb-f97e72636bf6/extract-content/0.log" Mar 18 17:43:14.498492 master-0 kubenswrapper[7553]: I0318 17:43:14.498435 7553 generic.go:334] "Generic (PLEG): container finished" podID="e7a6e8f4-26e0-454c-bfbb-f97e72636bf6" containerID="85c6d7b26f88c24833f482fd594353d9598a0fe3678db3d2c1ca35bacfd7b364" exitCode=2 Mar 18 17:43:14.498666 master-0 kubenswrapper[7553]: I0318 17:43:14.498640 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jlj6j" Mar 18 17:43:14.499530 master-0 kubenswrapper[7553]: I0318 17:43:14.499470 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlj6j" event={"ID":"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6","Type":"ContainerDied","Data":"85c6d7b26f88c24833f482fd594353d9598a0fe3678db3d2c1ca35bacfd7b364"} Mar 18 17:43:14.499627 master-0 kubenswrapper[7553]: I0318 17:43:14.499533 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jlj6j" event={"ID":"e7a6e8f4-26e0-454c-bfbb-f97e72636bf6","Type":"ContainerDied","Data":"61837a983030238f211aa8ac08747382382504f05535c6f547af651eb6b3ff48"} Mar 18 17:43:14.500067 master-0 kubenswrapper[7553]: I0318 17:43:14.500028 7553 scope.go:117] "RemoveContainer" containerID="51459af323032020b310b969c7f232ca0d879ba6054f5b26cbdbbbbcafb3c3e8" Mar 18 17:43:14.503738 master-0 kubenswrapper[7553]: I0318 17:43:14.503414 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j4kft" event={"ID":"35595774-da4b-499c-bd6e-1ae5af144833","Type":"ContainerDied","Data":"864aeba19e2a5216570c21cd0d0d14315a2e5721c472a22dae48be501e01bd99"} Mar 18 17:43:14.503738 
master-0 kubenswrapper[7553]: I0318 17:43:14.503452 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j4kft" Mar 18 17:43:14.505222 master-0 kubenswrapper[7553]: I0318 17:43:14.505181 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fg8h6" event={"ID":"7a9075c3-bb4f-4559-8454-5e097f334957","Type":"ContainerDied","Data":"aa58349ddd9078d7290e359fb92c428dc5b57e83b5248dcb6c4eb5055e4481ef"} Mar 18 17:43:14.505362 master-0 kubenswrapper[7553]: I0318 17:43:14.505260 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fg8h6" Mar 18 17:43:14.528321 master-0 kubenswrapper[7553]: I0318 17:43:14.525811 7553 scope.go:117] "RemoveContainer" containerID="551067c3fcf32dd80a122d54114412b5d6bfe4459ccf677a49f09efa0aea73a5" Mar 18 17:43:14.557436 master-0 kubenswrapper[7553]: I0318 17:43:14.555007 7553 scope.go:117] "RemoveContainer" containerID="85c6d7b26f88c24833f482fd594353d9598a0fe3678db3d2c1ca35bacfd7b364" Mar 18 17:43:14.572619 master-0 kubenswrapper[7553]: I0318 17:43:14.572574 7553 scope.go:117] "RemoveContainer" containerID="5053ea0774db745bc5171a3a5165d83776bd53e8f14bf88f72c237e917e3529a" Mar 18 17:43:14.588632 master-0 kubenswrapper[7553]: I0318 17:43:14.588610 7553 scope.go:117] "RemoveContainer" containerID="85c6d7b26f88c24833f482fd594353d9598a0fe3678db3d2c1ca35bacfd7b364" Mar 18 17:43:14.589183 master-0 kubenswrapper[7553]: E0318 17:43:14.589121 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85c6d7b26f88c24833f482fd594353d9598a0fe3678db3d2c1ca35bacfd7b364\": container with ID starting with 85c6d7b26f88c24833f482fd594353d9598a0fe3678db3d2c1ca35bacfd7b364 not found: ID does not exist" containerID="85c6d7b26f88c24833f482fd594353d9598a0fe3678db3d2c1ca35bacfd7b364" Mar 18 17:43:14.589245 
master-0 kubenswrapper[7553]: I0318 17:43:14.589203 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85c6d7b26f88c24833f482fd594353d9598a0fe3678db3d2c1ca35bacfd7b364"} err="failed to get container status \"85c6d7b26f88c24833f482fd594353d9598a0fe3678db3d2c1ca35bacfd7b364\": rpc error: code = NotFound desc = could not find container \"85c6d7b26f88c24833f482fd594353d9598a0fe3678db3d2c1ca35bacfd7b364\": container with ID starting with 85c6d7b26f88c24833f482fd594353d9598a0fe3678db3d2c1ca35bacfd7b364 not found: ID does not exist" Mar 18 17:43:14.589370 master-0 kubenswrapper[7553]: I0318 17:43:14.589256 7553 scope.go:117] "RemoveContainer" containerID="5053ea0774db745bc5171a3a5165d83776bd53e8f14bf88f72c237e917e3529a" Mar 18 17:43:14.589671 master-0 kubenswrapper[7553]: E0318 17:43:14.589629 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5053ea0774db745bc5171a3a5165d83776bd53e8f14bf88f72c237e917e3529a\": container with ID starting with 5053ea0774db745bc5171a3a5165d83776bd53e8f14bf88f72c237e917e3529a not found: ID does not exist" containerID="5053ea0774db745bc5171a3a5165d83776bd53e8f14bf88f72c237e917e3529a" Mar 18 17:43:14.589726 master-0 kubenswrapper[7553]: I0318 17:43:14.589670 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5053ea0774db745bc5171a3a5165d83776bd53e8f14bf88f72c237e917e3529a"} err="failed to get container status \"5053ea0774db745bc5171a3a5165d83776bd53e8f14bf88f72c237e917e3529a\": rpc error: code = NotFound desc = could not find container \"5053ea0774db745bc5171a3a5165d83776bd53e8f14bf88f72c237e917e3529a\": container with ID starting with 5053ea0774db745bc5171a3a5165d83776bd53e8f14bf88f72c237e917e3529a not found: ID does not exist" Mar 18 17:43:14.589726 master-0 kubenswrapper[7553]: I0318 17:43:14.589697 7553 scope.go:117] "RemoveContainer" 
containerID="54ccd2c41e0b08d95c78caff7860957f58b42a96e1f09cd7115ac27b129f5797" Mar 18 17:43:14.685977 master-0 kubenswrapper[7553]: I0318 17:43:14.685925 7553 scope.go:117] "RemoveContainer" containerID="feeb08461ad0d7781535b30235701d5143dfea88febf0f65b78d8d5869fe57f4" Mar 18 17:43:14.709396 master-0 kubenswrapper[7553]: I0318 17:43:14.709364 7553 scope.go:117] "RemoveContainer" containerID="7fb5fd11d6048f6029e82f09770801d13ffe2b0bf670b25a592c84f63528f56f" Mar 18 17:43:14.729219 master-0 kubenswrapper[7553]: I0318 17:43:14.729196 7553 scope.go:117] "RemoveContainer" containerID="61c8cdb2c792c0b417482aea9cd0f1183a7a5d96313ea5188479d603314faf40" Mar 18 17:43:15.185110 master-0 kubenswrapper[7553]: I0318 17:43:15.185043 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:43:15.516441 master-0 kubenswrapper[7553]: I0318 17:43:15.516317 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_41191498-89c5-44dc-b648-dbea889c72f5/installer/0.log" Mar 18 17:43:15.517716 master-0 kubenswrapper[7553]: I0318 17:43:15.517637 7553 generic.go:334] "Generic (PLEG): container finished" podID="41191498-89c5-44dc-b648-dbea889c72f5" containerID="952d444a3fc2166b6fd7ae2111af2db0a2310710ae00c917ceccc2b70b6b3ce3" exitCode=1 Mar 18 17:43:15.517884 master-0 kubenswrapper[7553]: I0318 17:43:15.517829 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"41191498-89c5-44dc-b648-dbea889c72f5","Type":"ContainerDied","Data":"952d444a3fc2166b6fd7ae2111af2db0a2310710ae00c917ceccc2b70b6b3ce3"} Mar 18 17:43:16.979388 master-0 kubenswrapper[7553]: I0318 17:43:16.979338 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_41191498-89c5-44dc-b648-dbea889c72f5/installer/0.log" Mar 18 17:43:16.979941 master-0 kubenswrapper[7553]: I0318 
17:43:16.979479 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 17:43:17.002129 master-0 kubenswrapper[7553]: I0318 17:43:17.002043 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41191498-89c5-44dc-b648-dbea889c72f5-kubelet-dir\") pod \"41191498-89c5-44dc-b648-dbea889c72f5\" (UID: \"41191498-89c5-44dc-b648-dbea889c72f5\") " Mar 18 17:43:17.002129 master-0 kubenswrapper[7553]: I0318 17:43:17.002120 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41191498-89c5-44dc-b648-dbea889c72f5-kube-api-access\") pod \"41191498-89c5-44dc-b648-dbea889c72f5\" (UID: \"41191498-89c5-44dc-b648-dbea889c72f5\") " Mar 18 17:43:17.002499 master-0 kubenswrapper[7553]: I0318 17:43:17.002179 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41191498-89c5-44dc-b648-dbea889c72f5-var-lock\") pod \"41191498-89c5-44dc-b648-dbea889c72f5\" (UID: \"41191498-89c5-44dc-b648-dbea889c72f5\") " Mar 18 17:43:17.002499 master-0 kubenswrapper[7553]: I0318 17:43:17.002454 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41191498-89c5-44dc-b648-dbea889c72f5-var-lock" (OuterVolumeSpecName: "var-lock") pod "41191498-89c5-44dc-b648-dbea889c72f5" (UID: "41191498-89c5-44dc-b648-dbea889c72f5"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:43:17.002848 master-0 kubenswrapper[7553]: I0318 17:43:17.002803 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41191498-89c5-44dc-b648-dbea889c72f5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "41191498-89c5-44dc-b648-dbea889c72f5" (UID: "41191498-89c5-44dc-b648-dbea889c72f5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:43:17.005312 master-0 kubenswrapper[7553]: I0318 17:43:17.005240 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41191498-89c5-44dc-b648-dbea889c72f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "41191498-89c5-44dc-b648-dbea889c72f5" (UID: "41191498-89c5-44dc-b648-dbea889c72f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:43:17.104575 master-0 kubenswrapper[7553]: I0318 17:43:17.104444 7553 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41191498-89c5-44dc-b648-dbea889c72f5-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:43:17.104575 master-0 kubenswrapper[7553]: I0318 17:43:17.104543 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41191498-89c5-44dc-b648-dbea889c72f5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 17:43:17.104575 master-0 kubenswrapper[7553]: I0318 17:43:17.104566 7553 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41191498-89c5-44dc-b648-dbea889c72f5-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 17:43:17.548801 master-0 kubenswrapper[7553]: I0318 17:43:17.548754 7553 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_41191498-89c5-44dc-b648-dbea889c72f5/installer/0.log" Mar 18 17:43:17.549080 master-0 kubenswrapper[7553]: I0318 17:43:17.548835 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"41191498-89c5-44dc-b648-dbea889c72f5","Type":"ContainerDied","Data":"ca7a0939c8771a3524a053fbcf05a6e4e340302ea878636e59812ce8a826b33c"} Mar 18 17:43:17.549080 master-0 kubenswrapper[7553]: I0318 17:43:17.548890 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca7a0939c8771a3524a053fbcf05a6e4e340302ea878636e59812ce8a826b33c" Mar 18 17:43:17.549169 master-0 kubenswrapper[7553]: I0318 17:43:17.549062 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 17:43:18.185416 master-0 kubenswrapper[7553]: I0318 17:43:18.185295 7553 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:43:18.497998 master-0 kubenswrapper[7553]: I0318 17:43:18.497845 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:43:19.562535 master-0 kubenswrapper[7553]: I0318 17:43:19.562468 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-hpsbd_9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/openshift-controller-manager-operator/0.log" Mar 18 17:43:19.562535 master-0 kubenswrapper[7553]: I0318 17:43:19.562536 7553 generic.go:334] "Generic (PLEG): container finished" 
podID="9a240ab7-a1d5-4e9a-96f3-4590681cc7ed" containerID="61d9d3ba71cb2cabd3457558bd5286207ddf90062735fd7732ad59d0bbbaa150" exitCode=1 Mar 18 17:43:19.563102 master-0 kubenswrapper[7553]: I0318 17:43:19.562576 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" event={"ID":"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed","Type":"ContainerDied","Data":"61d9d3ba71cb2cabd3457558bd5286207ddf90062735fd7732ad59d0bbbaa150"} Mar 18 17:43:19.563102 master-0 kubenswrapper[7553]: I0318 17:43:19.562960 7553 scope.go:117] "RemoveContainer" containerID="61d9d3ba71cb2cabd3457558bd5286207ddf90062735fd7732ad59d0bbbaa150" Mar 18 17:43:20.572870 master-0 kubenswrapper[7553]: I0318 17:43:20.572789 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-hpsbd_9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/openshift-controller-manager-operator/0.log" Mar 18 17:43:20.573493 master-0 kubenswrapper[7553]: I0318 17:43:20.572883 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" event={"ID":"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed","Type":"ContainerStarted","Data":"c19ffd46d4609e819ebbe2a9d485eeae31630e1764b880f8494b5531e650c602"} Mar 18 17:43:20.748467 master-0 kubenswrapper[7553]: I0318 17:43:20.748352 7553 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-8sxdf container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 18 17:43:20.748467 master-0 kubenswrapper[7553]: I0318 17:43:20.748433 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" 
podUID="c087ce06-a16b-41f4-ba93-8fccdee09003" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 18 17:43:20.996160 master-0 kubenswrapper[7553]: I0318 17:43:20.995977 7553 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-rws9x container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Mar 18 17:43:20.996160 master-0 kubenswrapper[7553]: I0318 17:43:20.996063 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" podUID="0100a259-1358-45e8-8191-4e1f9a14ec89" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": dial tcp 10.128.0.13:8443: connect: connection refused" Mar 18 17:43:21.587234 master-0 kubenswrapper[7553]: I0318 17:43:21.587145 7553 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="1d30b6f37f4ad53c3294bea48dd4a0769d42ea2d80a5395f6ef8c16034150f6c" exitCode=0 Mar 18 17:43:21.854375 master-0 kubenswrapper[7553]: I0318 17:43:21.854121 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8485d" Mar 18 17:43:21.854600 master-0 kubenswrapper[7553]: I0318 17:43:21.854341 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8485d" Mar 18 17:43:21.921181 master-0 kubenswrapper[7553]: I0318 17:43:21.921118 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8485d" Mar 18 17:43:21.943900 master-0 kubenswrapper[7553]: I0318 17:43:21.943807 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-vbglp" Mar 18 17:43:21.943900 master-0 kubenswrapper[7553]: I0318 17:43:21.943892 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vbglp" Mar 18 17:43:21.980007 master-0 kubenswrapper[7553]: I0318 17:43:21.979945 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vbglp" Mar 18 17:43:22.213776 master-0 kubenswrapper[7553]: E0318 17:43:22.213495 7553 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:43:22.445314 master-0 kubenswrapper[7553]: E0318 17:43:22.445036 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:43:12Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:43:12Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:43:12Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:43:12Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471
d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0\\\"],\\\"sizeBytes\\\":443272037},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"],\\\"sizeBytes\\\":438654374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e66fd50be6f83ce321a566dfb76f3725b597374077d5af13813b928f6b1267e\\\"],\\\"sizeBytes\\\":411587146},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3a494212f1ba17f0f0980eef583218330eccb56eadf6b8cb0548c76d99b5014\\\"],\\\"sizeBytes\\\":40
7347125},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422\\\"],\\\"sizeBytes\\\":396521761}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:43:22.654370 master-0 kubenswrapper[7553]: I0318 17:43:22.654239 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vbglp" Mar 18 17:43:22.655572 master-0 kubenswrapper[7553]: I0318 17:43:22.655513 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8485d" Mar 18 17:43:24.610225 master-0 kubenswrapper[7553]: I0318 17:43:24.610159 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 18 17:43:24.610857 master-0 kubenswrapper[7553]: I0318 17:43:24.610237 7553 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="99dc9cff4665f248f4ae68c96db3198a4bcd4d7b5dbfb367bdf3864e44ad29fc" exitCode=137 Mar 18 17:43:24.988719 master-0 kubenswrapper[7553]: I0318 17:43:24.988484 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 18 17:43:24.988719 master-0 kubenswrapper[7553]: I0318 17:43:24.988614 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:43:25.029631 master-0 kubenswrapper[7553]: I0318 17:43:25.029434 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 18 17:43:25.029991 master-0 kubenswrapper[7553]: I0318 17:43:25.029675 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs" (OuterVolumeSpecName: "certs") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:43:25.029991 master-0 kubenswrapper[7553]: I0318 17:43:25.029710 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 18 17:43:25.029991 master-0 kubenswrapper[7553]: I0318 17:43:25.029765 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir" (OuterVolumeSpecName: "data-dir") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:43:25.030550 master-0 kubenswrapper[7553]: I0318 17:43:25.030493 7553 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:43:25.030550 master-0 kubenswrapper[7553]: I0318 17:43:25.030547 7553 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 17:43:25.620195 master-0 kubenswrapper[7553]: I0318 17:43:25.620088 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 18 17:43:25.621096 master-0 kubenswrapper[7553]: I0318 17:43:25.620224 7553 scope.go:117] "RemoveContainer" containerID="1d30b6f37f4ad53c3294bea48dd4a0769d42ea2d80a5395f6ef8c16034150f6c" Mar 18 17:43:25.621096 master-0 kubenswrapper[7553]: I0318 17:43:25.620323 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:43:25.645428 master-0 kubenswrapper[7553]: I0318 17:43:25.645363 7553 scope.go:117] "RemoveContainer" containerID="99dc9cff4665f248f4ae68c96db3198a4bcd4d7b5dbfb367bdf3864e44ad29fc" Mar 18 17:43:26.064787 master-0 kubenswrapper[7553]: I0318 17:43:26.064701 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d664a6d0d2a24360dee10612610f1b59" path="/var/lib/kubelet/pods/d664a6d0d2a24360dee10612610f1b59/volumes" Mar 18 17:43:26.065371 master-0 kubenswrapper[7553]: I0318 17:43:26.065327 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 17:43:26.455268 master-0 kubenswrapper[7553]: E0318 17:43:26.455107 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 17:43:27.638578 master-0 kubenswrapper[7553]: I0318 17:43:27.638503 7553 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5" exitCode=0 Mar 18 17:43:28.186392 master-0 kubenswrapper[7553]: I0318 17:43:28.186235 7553 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:43:28.286746 master-0 kubenswrapper[7553]: E0318 17:43:28.286604 7553 projected.go:194] Error preparing data for projected volume kube-api-access-2tskm for pod openshift-marketplace/redhat-operators-bgdql: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline 
exceeded Mar 18 17:43:28.286953 master-0 kubenswrapper[7553]: E0318 17:43:28.286837 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm podName:4460d3d3-c55f-4f1c-a623-e3feccf937bb nodeName:}" failed. No retries permitted until 2026-03-18 17:43:28.786789974 +0000 UTC m=+98.932624847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2tskm" (UniqueName: "kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm") pod "redhat-operators-bgdql" (UID: "4460d3d3-c55f-4f1c-a623-e3feccf937bb") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 17:43:28.881759 master-0 kubenswrapper[7553]: I0318 17:43:28.881582 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tskm\" (UniqueName: \"kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:43:30.748195 master-0 kubenswrapper[7553]: I0318 17:43:30.748090 7553 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-8sxdf container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 18 17:43:30.748195 master-0 kubenswrapper[7553]: I0318 17:43:30.748194 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" podUID="c087ce06-a16b-41f4-ba93-8fccdee09003" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 18 
17:43:32.214112 master-0 kubenswrapper[7553]: E0318 17:43:32.214018 7553 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:43:32.446393 master-0 kubenswrapper[7553]: E0318 17:43:32.446263 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:43:33.681246 master-0 kubenswrapper[7553]: I0318 17:43:33.681161 7553 generic.go:334] "Generic (PLEG): container finished" podID="0100a259-1358-45e8-8191-4e1f9a14ec89" containerID="958f393f34a137d2d87755401a3809cbe07ffd8110f371434c3fc67267936608" exitCode=0 Mar 18 17:43:38.186352 master-0 kubenswrapper[7553]: I0318 17:43:38.186166 7553 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:43:40.748961 master-0 kubenswrapper[7553]: I0318 17:43:40.748855 7553 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-8sxdf container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 18 17:43:40.749862 master-0 kubenswrapper[7553]: I0318 17:43:40.748949 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" 
podUID="c087ce06-a16b-41f4-ba93-8fccdee09003" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 18 17:43:41.736252 master-0 kubenswrapper[7553]: I0318 17:43:41.736177 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7s68k_9875ed82-813c-483d-8471-8f9b74b774ee/approver/0.log" Mar 18 17:43:41.737002 master-0 kubenswrapper[7553]: I0318 17:43:41.736915 7553 generic.go:334] "Generic (PLEG): container finished" podID="9875ed82-813c-483d-8471-8f9b74b774ee" containerID="e68d50794bc18082c3da1be336c93731deac7bad0cc308995bf349c65577d305" exitCode=1 Mar 18 17:43:42.215623 master-0 kubenswrapper[7553]: E0318 17:43:42.215498 7553 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:43:42.447391 master-0 kubenswrapper[7553]: E0318 17:43:42.447240 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:43:43.565461 master-0 kubenswrapper[7553]: E0318 17:43:43.565184 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e00713ba9d813 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:43:09.562845203 +0000 UTC m=+79.708679916,LastTimestamp:2026-03-18 17:43:09.562845203 +0000 UTC m=+79.708679916,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:43:43.753136 master-0 kubenswrapper[7553]: I0318 17:43:43.753016 7553 generic.go:334] "Generic (PLEG): container finished" podID="c355c750-ae2f-49fa-9a16-8fb4f688853e" containerID="0cb61f4df91a50839abfb90676637f2a5c84478782eb2749acec5427cc366219" exitCode=0 Mar 18 17:43:43.755417 master-0 kubenswrapper[7553]: I0318 17:43:43.755368 7553 generic.go:334] "Generic (PLEG): container finished" podID="0b9ff55a-73fb-473f-b406-1f8b6cffdb89" containerID="fa4790d4c10a7e1c45ffad9596658e2a3e44e654967b539ab7d40f5e263966e8" exitCode=0 Mar 18 17:43:52.216816 master-0 kubenswrapper[7553]: E0318 17:43:52.216690 7553 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:43:52.216816 master-0 kubenswrapper[7553]: I0318 17:43:52.216794 7553 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 17:43:52.448632 master-0 kubenswrapper[7553]: E0318 17:43:52.448552 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": 
Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:43:53.828268 master-0 kubenswrapper[7553]: I0318 17:43:53.828181 7553 generic.go:334] "Generic (PLEG): container finished" podID="c087ce06-a16b-41f4-ba93-8fccdee09003" containerID="d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26" exitCode=0 Mar 18 17:43:56.849990 master-0 kubenswrapper[7553]: I0318 17:43:56.849872 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-dxxbl_14a0661b-7bde-4e22-a9a9-5e3fb24df77f/network-operator/0.log" Mar 18 17:43:56.850725 master-0 kubenswrapper[7553]: I0318 17:43:56.850003 7553 generic.go:334] "Generic (PLEG): container finished" podID="14a0661b-7bde-4e22-a9a9-5e3fb24df77f" containerID="2832f3b1f879c3c460b4f098e3d00d5c7b56e3d42bebbd0ce9bb5a5d15d6f5ef" exitCode=255 Mar 18 17:43:58.864131 master-0 kubenswrapper[7553]: I0318 17:43:58.864060 7553 generic.go:334] "Generic (PLEG): container finished" podID="3a3a6c2c-78e7-41f3-acff-20173cbc012a" containerID="b999f5b1617525702f37002cc0c97c3e6d4fa7798646c662c89a3d3f27b32af1" exitCode=0 Mar 18 17:43:58.866178 master-0 kubenswrapper[7553]: I0318 17:43:58.866112 7553 generic.go:334] "Generic (PLEG): container finished" podID="26575d68-0488-4dfa-a5d0-5016e481dba6" containerID="51887b13aef82e88868eb337156320571051617c6952199181cd88bfeb560624" exitCode=0 Mar 18 17:43:58.868093 master-0 kubenswrapper[7553]: I0318 17:43:58.868058 7553 generic.go:334] "Generic (PLEG): container finished" podID="9b424d6c-7440-4c98-ac19-2d0642c696fd" containerID="5027239b16a33eaa242303aa483c7e285890e090e452f1d81a0bd3e82446b39c" exitCode=0 Mar 18 17:44:00.068713 master-0 kubenswrapper[7553]: E0318 17:44:00.068654 7553 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" 
pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:44:00.069184 master-0 kubenswrapper[7553]: E0318 17:44:00.068945 7553 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Mar 18 17:44:00.069184 master-0 kubenswrapper[7553]: I0318 17:44:00.068982 7553 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:44:00.069184 master-0 kubenswrapper[7553]: I0318 17:44:00.069018 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:44:00.069184 master-0 kubenswrapper[7553]: I0318 17:44:00.069032 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5"} Mar 18 17:44:00.069184 master-0 kubenswrapper[7553]: I0318 17:44:00.069070 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" event={"ID":"0100a259-1358-45e8-8191-4e1f9a14ec89","Type":"ContainerDied","Data":"958f393f34a137d2d87755401a3809cbe07ffd8110f371434c3fc67267936608"} Mar 18 17:44:00.070339 master-0 kubenswrapper[7553]: I0318 17:44:00.070255 7553 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 18 17:44:00.070450 master-0 kubenswrapper[7553]: I0318 17:44:00.070410 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91" gracePeriod=30 Mar 18 17:44:00.072956 master-0 kubenswrapper[7553]: I0318 17:44:00.071884 7553 scope.go:117] "RemoveContainer" containerID="d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26" Mar 18 17:44:00.073312 master-0 kubenswrapper[7553]: I0318 17:44:00.073291 7553 scope.go:117] "RemoveContainer" containerID="958f393f34a137d2d87755401a3809cbe07ffd8110f371434c3fc67267936608" Mar 18 17:44:00.082138 master-0 kubenswrapper[7553]: I0318 17:44:00.082103 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 17:44:00.895334 master-0 kubenswrapper[7553]: I0318 17:44:00.895211 7553 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91" exitCode=2 Mar 18 17:44:02.217960 master-0 kubenswrapper[7553]: E0318 17:44:02.217884 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 18 17:44:02.449262 master-0 kubenswrapper[7553]: E0318 17:44:02.449161 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:44:02.449262 master-0 kubenswrapper[7553]: E0318 17:44:02.449227 7553 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 17:44:02.886354 master-0 kubenswrapper[7553]: E0318 17:44:02.885653 7553 
projected.go:194] Error preparing data for projected volume kube-api-access-2tskm for pod openshift-marketplace/redhat-operators-bgdql: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 17:44:02.886354 master-0 kubenswrapper[7553]: E0318 17:44:02.885779 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm podName:4460d3d3-c55f-4f1c-a623-e3feccf937bb nodeName:}" failed. No retries permitted until 2026-03-18 17:44:03.885750079 +0000 UTC m=+134.031584792 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2tskm" (UniqueName: "kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm") pod "redhat-operators-bgdql" (UID: "4460d3d3-c55f-4f1c-a623-e3feccf937bb") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 17:44:03.914259 master-0 kubenswrapper[7553]: I0318 17:44:03.914169 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tskm\" (UniqueName: \"kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:44:11.971604 master-0 kubenswrapper[7553]: I0318 17:44:11.971527 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_1a709ef9-91c0-4193-acb4-0594d02f554c/installer/0.log" Mar 18 17:44:11.971604 master-0 kubenswrapper[7553]: I0318 17:44:11.971600 7553 generic.go:334] "Generic (PLEG): container finished" podID="1a709ef9-91c0-4193-acb4-0594d02f554c" containerID="484988d6e1e2aeba58f6749a644020e240b6e9ebd0d813d191a1e837c5837362" exitCode=1 Mar 18 17:44:12.416903 master-0 kubenswrapper[7553]: I0318 17:44:12.416798 7553 
status_manager.go:851] "Failed to get status for pod" podUID="46f265536aba6292ead501bc9b49f327" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-controller-manager-master-0)" Mar 18 17:44:12.421046 master-0 kubenswrapper[7553]: E0318 17:44:12.420991 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 18 17:44:13.081108 master-0 kubenswrapper[7553]: E0318 17:44:13.081044 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 17:44:13.171546 master-0 kubenswrapper[7553]: E0318 17:44:13.171460 7553 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 17:44:13.171546 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(058f0af0121d4120fe3397ab9abd7113c4ed7ecc6d9bb4f0eb5cf3dee7958440): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"058f0af0121d4120fe3397ab9abd7113c4ed7ecc6d9bb4f0eb5cf3dee7958440" Netns:"/var/run/netns/a7ebfd56-50ca-426f-8f4e-f42777f0248a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=058f0af0121d4120fe3397ab9abd7113c4ed7ecc6d9bb4f0eb5cf3dee7958440;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" 
Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods redhat-marketplace-6xmx4) Mar 18 17:44:13.171546 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:44:13.171546 master-0 kubenswrapper[7553]: > Mar 18 17:44:13.172019 master-0 kubenswrapper[7553]: E0318 17:44:13.171565 7553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 17:44:13.172019 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(058f0af0121d4120fe3397ab9abd7113c4ed7ecc6d9bb4f0eb5cf3dee7958440): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"058f0af0121d4120fe3397ab9abd7113c4ed7ecc6d9bb4f0eb5cf3dee7958440" Netns:"/var/run/netns/a7ebfd56-50ca-426f-8f4e-f42777f0248a" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=058f0af0121d4120fe3397ab9abd7113c4ed7ecc6d9bb4f0eb5cf3dee7958440;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods redhat-marketplace-6xmx4) Mar 18 17:44:13.172019 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:44:13.172019 master-0 kubenswrapper[7553]: > pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:44:13.172019 master-0 kubenswrapper[7553]: E0318 17:44:13.171596 7553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 17:44:13.172019 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(058f0af0121d4120fe3397ab9abd7113c4ed7ecc6d9bb4f0eb5cf3dee7958440): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI 
request failed with status 400: 'ContainerID:"058f0af0121d4120fe3397ab9abd7113c4ed7ecc6d9bb4f0eb5cf3dee7958440" Netns:"/var/run/netns/a7ebfd56-50ca-426f-8f4e-f42777f0248a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=058f0af0121d4120fe3397ab9abd7113c4ed7ecc6d9bb4f0eb5cf3dee7958440;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods redhat-marketplace-6xmx4) Mar 18 17:44:13.172019 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:44:13.172019 master-0 kubenswrapper[7553]: > pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:44:13.172019 master-0 kubenswrapper[7553]: E0318 17:44:13.171677 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-6xmx4_openshift-marketplace(427e5ce9-f4b3-4f12-bb77-2b13775aa334)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-6xmx4_openshift-marketplace(427e5ce9-f4b3-4f12-bb77-2b13775aa334)\\\": rpc error: 
code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(058f0af0121d4120fe3397ab9abd7113c4ed7ecc6d9bb4f0eb5cf3dee7958440): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"058f0af0121d4120fe3397ab9abd7113c4ed7ecc6d9bb4f0eb5cf3dee7958440\\\" Netns:\\\"/var/run/netns/a7ebfd56-50ca-426f-8f4e-f42777f0248a\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=058f0af0121d4120fe3397ab9abd7113c4ed7ecc6d9bb4f0eb5cf3dee7958440;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods redhat-marketplace-6xmx4)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" 
pod="openshift-marketplace/redhat-marketplace-6xmx4" podUID="427e5ce9-f4b3-4f12-bb77-2b13775aa334" Mar 18 17:44:13.210969 master-0 kubenswrapper[7553]: E0318 17:44:13.210897 7553 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 17:44:13.210969 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(349c2372b6028362a8094c8894cbb09f3ae556db0e4118357879a2323d7e0d62): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"349c2372b6028362a8094c8894cbb09f3ae556db0e4118357879a2323d7e0d62" Netns:"/var/run/netns/2e5c0608-f0fe-4b8d-8948-51dd873276d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=349c2372b6028362a8094c8894cbb09f3ae556db0e4118357879a2323d7e0d62;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:44:13.210969 master-0 kubenswrapper[7553]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:44:13.210969 master-0 kubenswrapper[7553]: > Mar 18 17:44:13.211513 master-0 kubenswrapper[7553]: E0318 17:44:13.210985 7553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 17:44:13.211513 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(349c2372b6028362a8094c8894cbb09f3ae556db0e4118357879a2323d7e0d62): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"349c2372b6028362a8094c8894cbb09f3ae556db0e4118357879a2323d7e0d62" Netns:"/var/run/netns/2e5c0608-f0fe-4b8d-8948-51dd873276d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=349c2372b6028362a8094c8894cbb09f3ae556db0e4118357879a2323d7e0d62;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:44:13.211513 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:44:13.211513 master-0 kubenswrapper[7553]: > pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:44:13.211513 master-0 kubenswrapper[7553]: E0318 17:44:13.211007 7553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 17:44:13.211513 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(349c2372b6028362a8094c8894cbb09f3ae556db0e4118357879a2323d7e0d62): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"349c2372b6028362a8094c8894cbb09f3ae556db0e4118357879a2323d7e0d62" Netns:"/var/run/netns/2e5c0608-f0fe-4b8d-8948-51dd873276d5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=349c2372b6028362a8094c8894cbb09f3ae556db0e4118357879a2323d7e0d62;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: 
[openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:44:13.211513 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:44:13.211513 master-0 kubenswrapper[7553]: > pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:44:13.211513 master-0 kubenswrapper[7553]: E0318 17:44:13.211063 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-2-master-0_openshift-kube-controller-manager(37bbec19-22b8-411c-901b-d89c92b0bd4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-2-master-0_openshift-kube-controller-manager(37bbec19-22b8-411c-901b-d89c92b0bd4d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(349c2372b6028362a8094c8894cbb09f3ae556db0e4118357879a2323d7e0d62): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI 
request failed with status 400: 'ContainerID:\\\"349c2372b6028362a8094c8894cbb09f3ae556db0e4118357879a2323d7e0d62\\\" Netns:\\\"/var/run/netns/2e5c0608-f0fe-4b8d-8948-51dd873276d5\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=349c2372b6028362a8094c8894cbb09f3ae556db0e4118357879a2323d7e0d62;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="37bbec19-22b8-411c-901b-d89c92b0bd4d" Mar 18 17:44:13.989829 master-0 kubenswrapper[7553]: I0318 17:44:13.989762 7553 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/0.log" Mar 18 17:44:13.989829 master-0 kubenswrapper[7553]: I0318 17:44:13.989839 7553 generic.go:334] "Generic (PLEG): container finished" podID="7e64a377-f497-4416-8f22-d5c7f52e0b65" containerID="e7ad342e7df9192dcebc5d4c70aab2c6f8db5ac6b23cbc811c6ae00a013dee5b" exitCode=1 Mar 18 17:44:13.993831 master-0 kubenswrapper[7553]: I0318 17:44:13.993784 7553 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e" exitCode=0 Mar 18 17:44:13.993975 master-0 kubenswrapper[7553]: I0318 17:44:13.993924 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:44:13.994046 master-0 kubenswrapper[7553]: I0318 17:44:13.993993 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:44:13.994668 master-0 kubenswrapper[7553]: I0318 17:44:13.994623 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:44:13.995032 master-0 kubenswrapper[7553]: I0318 17:44:13.994990 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:44:17.569070 master-0 kubenswrapper[7553]: E0318 17:44:17.568845 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e0071e71af75e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:43:12.439162718 +0000 UTC m=+82.584997391,LastTimestamp:2026-03-18 17:43:12.439162718 +0000 UTC m=+82.584997391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:44:22.534428 master-0 kubenswrapper[7553]: E0318 17:44:22.534079 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:44:12Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:44:12Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:44:12Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:44:12Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:90dc03981a3a33aadde1815815ad5068886ae546bd3162c9a87a99fcc07dbbce\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:c5a86acf841f8f125e428a1254b8c9f450ef07b62a7634bd4c30aa7bf4bd88c9\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1747322591},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:86833de447f25d1d0fc15ed5460c5068cc48b18b78b8108304c5b5fd1dff04ab\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a41181d28dfacb78bea3690c390c965912300bc666e6e31a54a9382dd0329758\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1251896539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketpl
ace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:c3c12b935527854220bc939cf4b1e9ec5ea7b799b5530ba0609ec64f044c0a36\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dd33dff955c181beea0d08607a8c766e68ceb902bff0a014f4416b7a4a86a7c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1223856348},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1\\\"],\\\"sizeBytes\\\":918289953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2
ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113\\\"],\\\"sizeBytes\\\":484450894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97\\\"],\\\"sizeBytes\\\":470826739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62\\\"],\\\"sizeBytes\\\":458126937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe\\\"],\\\"sizeBytes\\\":456576198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739\\\"],\\\"sizeBytes\\\":448828620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\
"],\\\"sizeBytes\\\":448042136}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:44:22.822848 master-0 kubenswrapper[7553]: E0318 17:44:22.822714 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 18 17:44:32.535126 master-0 kubenswrapper[7553]: E0318 17:44:32.534994 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:44:33.625212 master-0 kubenswrapper[7553]: E0318 17:44:33.624636 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 18 17:44:34.085774 master-0 kubenswrapper[7553]: E0318 17:44:34.085681 7553 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:44:34.086162 master-0 kubenswrapper[7553]: 
E0318 17:44:34.085967 7553 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.017s" Mar 18 17:44:34.086162 master-0 kubenswrapper[7553]: I0318 17:44:34.085999 7553 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:44:34.086162 master-0 kubenswrapper[7553]: I0318 17:44:34.086033 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7s68k" event={"ID":"9875ed82-813c-483d-8471-8f9b74b774ee","Type":"ContainerDied","Data":"e68d50794bc18082c3da1be336c93731deac7bad0cc308995bf349c65577d305"} Mar 18 17:44:34.086162 master-0 kubenswrapper[7553]: I0318 17:44:34.086068 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" event={"ID":"c355c750-ae2f-49fa-9a16-8fb4f688853e","Type":"ContainerDied","Data":"0cb61f4df91a50839abfb90676637f2a5c84478782eb2749acec5427cc366219"} Mar 18 17:44:34.087185 master-0 kubenswrapper[7553]: I0318 17:44:34.086847 7553 scope.go:117] "RemoveContainer" containerID="e68d50794bc18082c3da1be336c93731deac7bad0cc308995bf349c65577d305" Mar 18 17:44:34.087185 master-0 kubenswrapper[7553]: I0318 17:44:34.087030 7553 scope.go:117] "RemoveContainer" containerID="0cb61f4df91a50839abfb90676637f2a5c84478782eb2749acec5427cc366219" Mar 18 17:44:34.087185 master-0 kubenswrapper[7553]: I0318 17:44:34.087114 7553 scope.go:117] "RemoveContainer" containerID="51887b13aef82e88868eb337156320571051617c6952199181cd88bfeb560624" Mar 18 17:44:34.089404 master-0 kubenswrapper[7553]: I0318 17:44:34.088243 7553 scope.go:117] "RemoveContainer" containerID="5027239b16a33eaa242303aa483c7e285890e090e452f1d81a0bd3e82446b39c" Mar 18 17:44:34.090230 master-0 kubenswrapper[7553]: I0318 17:44:34.089998 7553 scope.go:117] "RemoveContainer" 
containerID="fa4790d4c10a7e1c45ffad9596658e2a3e44e654967b539ab7d40f5e263966e8" Mar 18 17:44:34.092232 master-0 kubenswrapper[7553]: I0318 17:44:34.090943 7553 scope.go:117] "RemoveContainer" containerID="e7ad342e7df9192dcebc5d4c70aab2c6f8db5ac6b23cbc811c6ae00a013dee5b" Mar 18 17:44:34.098654 master-0 kubenswrapper[7553]: I0318 17:44:34.098593 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 17:44:35.311270 master-0 kubenswrapper[7553]: I0318 17:44:35.311192 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7s68k_9875ed82-813c-483d-8471-8f9b74b774ee/approver/0.log" Mar 18 17:44:35.319877 master-0 kubenswrapper[7553]: I0318 17:44:35.319818 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/0.log" Mar 18 17:44:37.917484 master-0 kubenswrapper[7553]: E0318 17:44:37.917387 7553 projected.go:194] Error preparing data for projected volume kube-api-access-2tskm for pod openshift-marketplace/redhat-operators-bgdql: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 17:44:37.918250 master-0 kubenswrapper[7553]: E0318 17:44:37.917504 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm podName:4460d3d3-c55f-4f1c-a623-e3feccf937bb nodeName:}" failed. No retries permitted until 2026-03-18 17:44:39.917479218 +0000 UTC m=+170.063313891 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2tskm" (UniqueName: "kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm") pod "redhat-operators-bgdql" (UID: "4460d3d3-c55f-4f1c-a623-e3feccf937bb") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 17:44:39.935315 master-0 kubenswrapper[7553]: I0318 17:44:39.935182 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tskm\" (UniqueName: \"kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:44:42.536778 master-0 kubenswrapper[7553]: E0318 17:44:42.536641 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:44:43.124966 master-0 kubenswrapper[7553]: I0318 17:44:43.124889 7553 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-l5gm7 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Mar 18 17:44:43.124966 master-0 kubenswrapper[7553]: I0318 17:44:43.124930 7553 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-l5gm7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Mar 18 17:44:43.125335 master-0 kubenswrapper[7553]: I0318 17:44:43.124969 7553 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" podUID="ce5831a6-5a8d-4cda-9299-5d86437bcab2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Mar 18 17:44:43.125335 master-0 kubenswrapper[7553]: I0318 17:44:43.125019 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" podUID="ce5831a6-5a8d-4cda-9299-5d86437bcab2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Mar 18 17:44:43.389506 master-0 kubenswrapper[7553]: I0318 17:44:43.389179 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-8vmsv_56cde2f7-1742-45d6-aa22-8270cfb424a7/manager/0.log" Mar 18 17:44:43.390484 master-0 kubenswrapper[7553]: I0318 17:44:43.390234 7553 generic.go:334] "Generic (PLEG): container finished" podID="56cde2f7-1742-45d6-aa22-8270cfb424a7" containerID="9a3c783faf4f4f653f053e2f216b7497912efa5f57b792ca0a2a383ce66b1a4d" exitCode=1 Mar 18 17:44:43.405975 master-0 kubenswrapper[7553]: I0318 17:44:43.405876 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-bk26c_efbcb147-d077-4749-9289-1682daccb657/manager/0.log" Mar 18 17:44:43.405975 master-0 kubenswrapper[7553]: I0318 17:44:43.405971 7553 generic.go:334] "Generic (PLEG): container finished" podID="efbcb147-d077-4749-9289-1682daccb657" containerID="b1d92bc61050e9dcfcb1bd9705c2f2b94007d572857fef98c987e76770e1ad13" exitCode=1 Mar 18 17:44:43.409263 master-0 kubenswrapper[7553]: I0318 17:44:43.409201 7553 generic.go:334] "Generic (PLEG): container finished" podID="ce5831a6-5a8d-4cda-9299-5d86437bcab2" 
containerID="c7f5d502541807602a24d2f39710701583fd6aae06267e2b4ee473df7bbfd13e" exitCode=0 Mar 18 17:44:44.210218 master-0 kubenswrapper[7553]: I0318 17:44:44.210086 7553 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-bk26c container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 18 17:44:44.210218 master-0 kubenswrapper[7553]: I0318 17:44:44.210189 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" podUID="efbcb147-d077-4749-9289-1682daccb657" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 18 17:44:44.215814 master-0 kubenswrapper[7553]: I0318 17:44:44.210243 7553 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-bk26c container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 18 17:44:44.215814 master-0 kubenswrapper[7553]: I0318 17:44:44.210381 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" podUID="efbcb147-d077-4749-9289-1682daccb657" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 18 17:44:44.236256 master-0 kubenswrapper[7553]: I0318 17:44:44.236116 7553 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-8vmsv container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 
10.128.0.44:8081: connect: connection refused" start-of-body= Mar 18 17:44:44.236256 master-0 kubenswrapper[7553]: I0318 17:44:44.236123 7553 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-8vmsv container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 18 17:44:44.236256 master-0 kubenswrapper[7553]: I0318 17:44:44.236172 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" podUID="56cde2f7-1742-45d6-aa22-8270cfb424a7" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 18 17:44:44.236256 master-0 kubenswrapper[7553]: I0318 17:44:44.236195 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" podUID="56cde2f7-1742-45d6-aa22-8270cfb424a7" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 18 17:44:45.227337 master-0 kubenswrapper[7553]: E0318 17:44:45.227108 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 18 17:44:50.458188 master-0 kubenswrapper[7553]: I0318 17:44:50.458140 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-hpsbd_9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/openshift-controller-manager-operator/1.log" Mar 18 17:44:50.459941 master-0 kubenswrapper[7553]: I0318 17:44:50.459897 7553 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-hpsbd_9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/openshift-controller-manager-operator/0.log" Mar 18 17:44:50.460078 master-0 kubenswrapper[7553]: I0318 17:44:50.459952 7553 generic.go:334] "Generic (PLEG): container finished" podID="9a240ab7-a1d5-4e9a-96f3-4590681cc7ed" containerID="c19ffd46d4609e819ebbe2a9d485eeae31630e1764b880f8494b5531e650c602" exitCode=255 Mar 18 17:44:52.308083 master-0 kubenswrapper[7553]: E0318 17:44:51.572240 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e0071e751715d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:43:12.442732893 +0000 UTC m=+82.588567606,LastTimestamp:2026-03-18 17:43:12.442732893 +0000 UTC m=+82.588567606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:44:52.538118 master-0 kubenswrapper[7553]: E0318 17:44:52.537966 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:44:53.125413 master-0 
kubenswrapper[7553]: I0318 17:44:53.125337 7553 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-l5gm7 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Mar 18 17:44:53.125786 master-0 kubenswrapper[7553]: I0318 17:44:53.125415 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" podUID="ce5831a6-5a8d-4cda-9299-5d86437bcab2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Mar 18 17:44:53.127933 master-0 kubenswrapper[7553]: I0318 17:44:53.127884 7553 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-l5gm7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Mar 18 17:44:53.128017 master-0 kubenswrapper[7553]: I0318 17:44:53.127975 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" podUID="ce5831a6-5a8d-4cda-9299-5d86437bcab2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Mar 18 17:44:54.209466 master-0 kubenswrapper[7553]: I0318 17:44:54.209395 7553 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-bk26c container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 18 17:44:54.210503 master-0 kubenswrapper[7553]: I0318 17:44:54.210441 7553 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" podUID="efbcb147-d077-4749-9289-1682daccb657" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 18 17:44:54.236304 master-0 kubenswrapper[7553]: I0318 17:44:54.236193 7553 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-8vmsv container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 18 17:44:54.236636 master-0 kubenswrapper[7553]: I0318 17:44:54.236339 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" podUID="56cde2f7-1742-45d6-aa22-8270cfb424a7" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 18 17:44:57.061043 master-0 kubenswrapper[7553]: E0318 17:44:57.060894 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-2tskm], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-marketplace/redhat-operators-bgdql" podUID="4460d3d3-c55f-4f1c-a623-e3feccf937bb" Mar 18 17:44:57.505629 master-0 kubenswrapper[7553]: I0318 17:44:57.505378 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:44:58.428337 master-0 kubenswrapper[7553]: E0318 17:44:58.428186 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 18 17:45:01.547402 master-0 kubenswrapper[7553]: I0318 17:45:01.547317 7553 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6" exitCode=1 Mar 18 17:45:02.538860 master-0 kubenswrapper[7553]: E0318 17:45:02.538748 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:45:02.538860 master-0 kubenswrapper[7553]: E0318 17:45:02.538823 7553 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 17:45:03.125265 master-0 kubenswrapper[7553]: I0318 17:45:03.125176 7553 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-l5gm7 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Mar 18 17:45:03.125983 master-0 kubenswrapper[7553]: I0318 17:45:03.125291 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" podUID="ce5831a6-5a8d-4cda-9299-5d86437bcab2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: 
connection refused" Mar 18 17:45:03.125983 master-0 kubenswrapper[7553]: I0318 17:45:03.125320 7553 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-l5gm7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" start-of-body= Mar 18 17:45:03.125983 master-0 kubenswrapper[7553]: I0318 17:45:03.125414 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" podUID="ce5831a6-5a8d-4cda-9299-5d86437bcab2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.7:8080/healthz\": dial tcp 10.128.0.7:8080: connect: connection refused" Mar 18 17:45:04.210001 master-0 kubenswrapper[7553]: I0318 17:45:04.209924 7553 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-bk26c container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 18 17:45:04.210001 master-0 kubenswrapper[7553]: I0318 17:45:04.209997 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" podUID="efbcb147-d077-4749-9289-1682daccb657" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 18 17:45:04.210814 master-0 kubenswrapper[7553]: I0318 17:45:04.209925 7553 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-bk26c container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 18 17:45:04.210814 
master-0 kubenswrapper[7553]: I0318 17:45:04.210057 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" podUID="efbcb147-d077-4749-9289-1682daccb657" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 18 17:45:04.235404 master-0 kubenswrapper[7553]: I0318 17:45:04.235357 7553 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-8vmsv container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 18 17:45:04.235531 master-0 kubenswrapper[7553]: I0318 17:45:04.235412 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" podUID="56cde2f7-1742-45d6-aa22-8270cfb424a7" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 18 17:45:04.235531 master-0 kubenswrapper[7553]: I0318 17:45:04.235357 7553 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-8vmsv container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 18 17:45:04.235531 master-0 kubenswrapper[7553]: I0318 17:45:04.235462 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" podUID="56cde2f7-1742-45d6-aa22-8270cfb424a7" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 18 17:45:05.575127 master-0 kubenswrapper[7553]: I0318 17:45:05.575062 7553 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/0.log" Mar 18 17:45:05.575127 master-0 kubenswrapper[7553]: I0318 17:45:05.575118 7553 generic.go:334] "Generic (PLEG): container finished" podID="7d39d93e-9be3-47e1-a44e-be2d18b55446" containerID="579cc4ec2812b3f1711dac655f16b18685227d170cb9b02fdd9aad6faa3c3865" exitCode=1 Mar 18 17:45:08.102394 master-0 kubenswrapper[7553]: E0318 17:45:08.102269 7553 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:45:08.103335 master-0 kubenswrapper[7553]: E0318 17:45:08.102615 7553 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Mar 18 17:45:08.103335 master-0 kubenswrapper[7553]: I0318 17:45:08.102655 7553 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:45:08.103335 master-0 kubenswrapper[7553]: I0318 17:45:08.102698 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" event={"ID":"0b9ff55a-73fb-473f-b406-1f8b6cffdb89","Type":"ContainerDied","Data":"fa4790d4c10a7e1c45ffad9596658e2a3e44e654967b539ab7d40f5e263966e8"} Mar 18 17:45:08.103672 master-0 kubenswrapper[7553]: I0318 17:45:08.103598 7553 scope.go:117] "RemoveContainer" containerID="c7f5d502541807602a24d2f39710701583fd6aae06267e2b4ee473df7bbfd13e" Mar 18 17:45:08.113461 master-0 kubenswrapper[7553]: I0318 17:45:08.113409 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 17:45:12.424179 master-0 kubenswrapper[7553]: I0318 17:45:12.424076 7553 status_manager.go:851] "Failed to 
get status for pod" podUID="08451d5b-cf84-45a1-a16d-7ce10a83a6e7" pod="openshift-etcd/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Mar 18 17:45:13.939217 master-0 kubenswrapper[7553]: E0318 17:45:13.939109 7553 projected.go:194] Error preparing data for projected volume kube-api-access-2tskm for pod openshift-marketplace/redhat-operators-bgdql: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 17:45:13.940196 master-0 kubenswrapper[7553]: E0318 17:45:13.939263 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm podName:4460d3d3-c55f-4f1c-a623-e3feccf937bb nodeName:}" failed. No retries permitted until 2026-03-18 17:45:17.939223753 +0000 UTC m=+208.085058466 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2tskm" (UniqueName: "kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm") pod "redhat-operators-bgdql" (UID: "4460d3d3-c55f-4f1c-a623-e3feccf937bb") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 17:45:14.209920 master-0 kubenswrapper[7553]: I0318 17:45:14.209714 7553 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-bk26c container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 18 17:45:14.209920 master-0 kubenswrapper[7553]: I0318 17:45:14.209811 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" podUID="efbcb147-d077-4749-9289-1682daccb657" 
containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 18 17:45:14.236054 master-0 kubenswrapper[7553]: I0318 17:45:14.235947 7553 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-8vmsv container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 18 17:45:14.236054 master-0 kubenswrapper[7553]: I0318 17:45:14.236012 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" podUID="56cde2f7-1742-45d6-aa22-8270cfb424a7" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 18 17:45:14.827781 master-0 kubenswrapper[7553]: E0318 17:45:14.826914 7553 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 17:45:14.827781 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(0cd0004a682d3e4285420a04775a4b8b9f20f486ab05ca91ffdaae8cea5f1959): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0cd0004a682d3e4285420a04775a4b8b9f20f486ab05ca91ffdaae8cea5f1959" Netns:"/var/run/netns/937d3faf-ab58-4f17-9819-d2058250ad57" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=0cd0004a682d3e4285420a04775a4b8b9f20f486ab05ca91ffdaae8cea5f1959;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" Path:"" ERRORED: error configuring pod 
[openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6xmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:45:14.827781 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:45:14.827781 master-0 kubenswrapper[7553]: > Mar 18 17:45:14.827781 master-0 kubenswrapper[7553]: E0318 17:45:14.827038 7553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 17:45:14.827781 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(0cd0004a682d3e4285420a04775a4b8b9f20f486ab05ca91ffdaae8cea5f1959): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0cd0004a682d3e4285420a04775a4b8b9f20f486ab05ca91ffdaae8cea5f1959" Netns:"/var/run/netns/937d3faf-ab58-4f17-9819-d2058250ad57" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=0cd0004a682d3e4285420a04775a4b8b9f20f486ab05ca91ffdaae8cea5f1959;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6xmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:45:14.827781 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:45:14.827781 master-0 kubenswrapper[7553]: > pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:45:14.827781 master-0 kubenswrapper[7553]: E0318 17:45:14.827067 7553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 17:45:14.827781 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(0cd0004a682d3e4285420a04775a4b8b9f20f486ab05ca91ffdaae8cea5f1959): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin 
type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0cd0004a682d3e4285420a04775a4b8b9f20f486ab05ca91ffdaae8cea5f1959" Netns:"/var/run/netns/937d3faf-ab58-4f17-9819-d2058250ad57" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=0cd0004a682d3e4285420a04775a4b8b9f20f486ab05ca91ffdaae8cea5f1959;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6xmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:45:14.827781 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:45:14.827781 master-0 kubenswrapper[7553]: > pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:45:14.827781 master-0 kubenswrapper[7553]: E0318 17:45:14.827140 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-6xmx4_openshift-marketplace(427e5ce9-f4b3-4f12-bb77-2b13775aa334)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"redhat-marketplace-6xmx4_openshift-marketplace(427e5ce9-f4b3-4f12-bb77-2b13775aa334)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(0cd0004a682d3e4285420a04775a4b8b9f20f486ab05ca91ffdaae8cea5f1959): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"0cd0004a682d3e4285420a04775a4b8b9f20f486ab05ca91ffdaae8cea5f1959\\\" Netns:\\\"/var/run/netns/937d3faf-ab58-4f17-9819-d2058250ad57\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=0cd0004a682d3e4285420a04775a4b8b9f20f486ab05ca91ffdaae8cea5f1959;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6xmx4?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/redhat-marketplace-6xmx4" podUID="427e5ce9-f4b3-4f12-bb77-2b13775aa334" Mar 18 17:45:14.829256 master-0 kubenswrapper[7553]: E0318 17:45:14.829184 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Mar 18 17:45:14.839886 master-0 kubenswrapper[7553]: E0318 17:45:14.839842 7553 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 17:45:14.839886 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(17dd4e98e7bb6c3ce1933b5aaa2fc06f0e83e1e69e18e1b5b56b97db28967473): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"17dd4e98e7bb6c3ce1933b5aaa2fc06f0e83e1e69e18e1b5b56b97db28967473" Netns:"/var/run/netns/00dae05b-6e29-4633-9c63-e3053b70e26a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=17dd4e98e7bb6c3ce1933b5aaa2fc06f0e83e1e69e18e1b5b56b97db28967473;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod 
[openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:45:14.839886 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:45:14.839886 master-0 kubenswrapper[7553]: > Mar 18 17:45:14.840053 master-0 kubenswrapper[7553]: E0318 17:45:14.839926 7553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 17:45:14.840053 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(17dd4e98e7bb6c3ce1933b5aaa2fc06f0e83e1e69e18e1b5b56b97db28967473): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"17dd4e98e7bb6c3ce1933b5aaa2fc06f0e83e1e69e18e1b5b56b97db28967473" Netns:"/var/run/netns/00dae05b-6e29-4633-9c63-e3053b70e26a" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=17dd4e98e7bb6c3ce1933b5aaa2fc06f0e83e1e69e18e1b5b56b97db28967473;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:45:14.840053 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:45:14.840053 master-0 kubenswrapper[7553]: > pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:45:14.840053 master-0 kubenswrapper[7553]: E0318 17:45:14.839978 7553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 17:45:14.840053 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(17dd4e98e7bb6c3ce1933b5aaa2fc06f0e83e1e69e18e1b5b56b97db28967473): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI 
network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"17dd4e98e7bb6c3ce1933b5aaa2fc06f0e83e1e69e18e1b5b56b97db28967473" Netns:"/var/run/netns/00dae05b-6e29-4633-9c63-e3053b70e26a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=17dd4e98e7bb6c3ce1933b5aaa2fc06f0e83e1e69e18e1b5b56b97db28967473;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:45:14.840053 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:45:14.840053 master-0 kubenswrapper[7553]: > pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:45:14.840453 master-0 kubenswrapper[7553]: E0318 17:45:14.840058 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"installer-2-master-0_openshift-kube-controller-manager(37bbec19-22b8-411c-901b-d89c92b0bd4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-2-master-0_openshift-kube-controller-manager(37bbec19-22b8-411c-901b-d89c92b0bd4d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(17dd4e98e7bb6c3ce1933b5aaa2fc06f0e83e1e69e18e1b5b56b97db28967473): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"17dd4e98e7bb6c3ce1933b5aaa2fc06f0e83e1e69e18e1b5b56b97db28967473\\\" Netns:\\\"/var/run/netns/00dae05b-6e29-4633-9c63-e3053b70e26a\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=17dd4e98e7bb6c3ce1933b5aaa2fc06f0e83e1e69e18e1b5b56b97db28967473;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="37bbec19-22b8-411c-901b-d89c92b0bd4d" Mar 18 17:45:15.648877 master-0 kubenswrapper[7553]: I0318 17:45:15.648751 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:45:15.649810 master-0 kubenswrapper[7553]: I0318 17:45:15.648753 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:45:15.649810 master-0 kubenswrapper[7553]: I0318 17:45:15.649625 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:45:15.649810 master-0 kubenswrapper[7553]: I0318 17:45:15.649627 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:45:18.006327 master-0 kubenswrapper[7553]: I0318 17:45:18.006175 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tskm\" (UniqueName: \"kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:45:21.687986 master-0 kubenswrapper[7553]: I0318 17:45:21.687937 7553 generic.go:334] "Generic (PLEG): container finished" podID="7b94e08c-7944-445e-bfb7-6c7c14880c65" containerID="10ef0540ad110067bbacf0ae0c51fcdf81ed8a0e014b67c2675d03499d28dfab" exitCode=0 Mar 18 17:45:22.893647 master-0 kubenswrapper[7553]: E0318 17:45:22.893364 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:45:12Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:45:12Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:45:12Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:45:12Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:90dc03981a3a33aadde1815815ad5068886ae546bd3162c9a87a99fcc07dbbce\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:c5a86acf841f8f125e428a1254b8c9f450ef07b62a7634bd4c30aa7bf4bd88c9\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1747322591},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b90
94ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:86833de447f25d1d0fc15ed5460c5068cc48b18b78b8108304c5b5fd1dff04ab\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a41181d28dfacb78bea3690c390c965912300bc666e6e31a54a9382dd0329758\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1251896539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:c3c12b935527854220bc939cf4b1e9ec5ea7b799b5530ba0609ec64f044c0a36\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dd33dff955c181beea0d08607a8c766e68ceb902bff0a014f4416b7a4a86a7c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1223856348},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1\\\"],\\\"sizeBytes\\\":918289953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210
af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],
\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113\\\"],\\\"sizeBytes\\\":484450894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97\\\"],\\\"sizeBytes\\\":470826739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62\\\"],\\\"sizeBytes\\\":458126937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe\\\"],\\\"sizeBytes\\\":456576198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739\\\"],\\\"sizeBytes\\\":448828620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:45:24.209462 master-0 kubenswrapper[7553]: I0318 17:45:24.209361 7553 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-bk26c container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 18 17:45:24.209462 master-0 kubenswrapper[7553]: I0318 17:45:24.209390 7553 
patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-bk26c container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 18 17:45:24.209462 master-0 kubenswrapper[7553]: I0318 17:45:24.209449 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" podUID="efbcb147-d077-4749-9289-1682daccb657" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/healthz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 18 17:45:24.210573 master-0 kubenswrapper[7553]: I0318 17:45:24.209453 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" podUID="efbcb147-d077-4749-9289-1682daccb657" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 18 17:45:24.236433 master-0 kubenswrapper[7553]: I0318 17:45:24.236270 7553 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-8vmsv container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 18 17:45:24.236433 master-0 kubenswrapper[7553]: I0318 17:45:24.236402 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" podUID="56cde2f7-1742-45d6-aa22-8270cfb424a7" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 18 17:45:24.236749 master-0 kubenswrapper[7553]: I0318 17:45:24.236498 7553 patch_prober.go:28] 
interesting pod/catalogd-controller-manager-6864dc98f7-8vmsv container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 18 17:45:24.236749 master-0 kubenswrapper[7553]: I0318 17:45:24.236560 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" podUID="56cde2f7-1742-45d6-aa22-8270cfb424a7" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 18 17:45:25.576057 master-0 kubenswrapper[7553]: E0318 17:45:25.575794 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189e0071e7d860c7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:24b4ed170d527099878cb5fdd508a2fb,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:43:12.451576007 +0000 UTC m=+82.597410680,LastTimestamp:2026-03-18 17:43:12.451576007 +0000 UTC m=+82.597410680,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:45:26.727414 master-0 kubenswrapper[7553]: I0318 17:45:26.727157 7553 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/0.log" Mar 18 17:45:26.727414 master-0 kubenswrapper[7553]: I0318 17:45:26.727226 7553 generic.go:334] "Generic (PLEG): container finished" podID="37b3753f-bf4f-4a9e-a4a8-d58296bada79" containerID="45e3f6969d17c505bf035fc0bb6cc383a03ebc121f30574b4362e30470bf65e0" exitCode=1 Mar 18 17:45:30.756664 master-0 kubenswrapper[7553]: I0318 17:45:30.756582 7553 generic.go:334] "Generic (PLEG): container finished" podID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerID="b3ebfba10cf9d40bcef8b7b1707842cdd5329c0fa6c5118e3bdbf4e1fe51f08d" exitCode=0 Mar 18 17:45:30.763823 master-0 kubenswrapper[7553]: I0318 17:45:30.763767 7553 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-rws9x container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.13:8443/healthz\": read tcp 10.128.0.2:53718->10.128.0.13:8443: read: connection reset by peer" start-of-body= Mar 18 17:45:30.763964 master-0 kubenswrapper[7553]: I0318 17:45:30.763833 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" podUID="0100a259-1358-45e8-8191-4e1f9a14ec89" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.13:8443/healthz\": read tcp 10.128.0.2:53718->10.128.0.13:8443: read: connection reset by peer" Mar 18 17:45:31.049552 master-0 kubenswrapper[7553]: I0318 17:45:31.049369 7553 patch_prober.go:28] interesting pod/controller-manager-f5755b457-f4cbl container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Mar 18 17:45:31.049552 master-0 kubenswrapper[7553]: I0318 17:45:31.049385 7553 patch_prober.go:28] interesting 
pod/controller-manager-f5755b457-f4cbl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Mar 18 17:45:31.049552 master-0 kubenswrapper[7553]: I0318 17:45:31.049453 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Mar 18 17:45:31.049552 master-0 kubenswrapper[7553]: I0318 17:45:31.049519 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Mar 18 17:45:31.767719 master-0 kubenswrapper[7553]: I0318 17:45:31.767651 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-rws9x_0100a259-1358-45e8-8191-4e1f9a14ec89/etcd-operator/1.log" Mar 18 17:45:31.768675 master-0 kubenswrapper[7553]: I0318 17:45:31.768254 7553 generic.go:334] "Generic (PLEG): container finished" podID="0100a259-1358-45e8-8191-4e1f9a14ec89" containerID="53db21ebf3fb977fb32a1c89dc78765092d7ecfd2055d2adfee610a3dffc6d29" exitCode=255 Mar 18 17:45:31.770139 master-0 kubenswrapper[7553]: I0318 17:45:31.770104 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-8sxdf_c087ce06-a16b-41f4-ba93-8fccdee09003/authentication-operator/1.log" Mar 18 17:45:31.770665 master-0 kubenswrapper[7553]: I0318 17:45:31.770631 7553 generic.go:334] "Generic (PLEG): 
container finished" podID="c087ce06-a16b-41f4-ba93-8fccdee09003" containerID="0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265" exitCode=255 Mar 18 17:45:31.833976 master-0 kubenswrapper[7553]: E0318 17:45:31.833904 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="7s" Mar 18 17:45:32.894930 master-0 kubenswrapper[7553]: E0318 17:45:32.894568 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:45:34.210172 master-0 kubenswrapper[7553]: I0318 17:45:34.210098 7553 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-bk26c container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" start-of-body= Mar 18 17:45:34.210748 master-0 kubenswrapper[7553]: I0318 17:45:34.210201 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" podUID="efbcb147-d077-4749-9289-1682daccb657" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.43:8081/readyz\": dial tcp 10.128.0.43:8081: connect: connection refused" Mar 18 17:45:34.236018 master-0 kubenswrapper[7553]: I0318 17:45:34.235948 7553 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-8vmsv container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 18 17:45:34.236226 
master-0 kubenswrapper[7553]: I0318 17:45:34.236026 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" podUID="56cde2f7-1742-45d6-aa22-8270cfb424a7" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 18 17:45:41.049142 master-0 kubenswrapper[7553]: I0318 17:45:41.049018 7553 patch_prober.go:28] interesting pod/controller-manager-f5755b457-f4cbl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Mar 18 17:45:41.050001 master-0 kubenswrapper[7553]: I0318 17:45:41.049063 7553 patch_prober.go:28] interesting pod/controller-manager-f5755b457-f4cbl container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Mar 18 17:45:41.050001 master-0 kubenswrapper[7553]: I0318 17:45:41.049159 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Mar 18 17:45:41.050001 master-0 kubenswrapper[7553]: I0318 17:45:41.049269 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Mar 18 17:45:42.117663 master-0 kubenswrapper[7553]: E0318 
17:45:42.117576 7553 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:45:42.118791 master-0 kubenswrapper[7553]: E0318 17:45:42.117897 7553 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.015s" Mar 18 17:45:42.119130 master-0 kubenswrapper[7553]: I0318 17:45:42.119030 7553 scope.go:117] "RemoveContainer" containerID="579cc4ec2812b3f1711dac655f16b18685227d170cb9b02fdd9aad6faa3c3865" Mar 18 17:45:42.120489 master-0 kubenswrapper[7553]: I0318 17:45:42.120434 7553 scope.go:117] "RemoveContainer" containerID="2832f3b1f879c3c460b4f098e3d00d5c7b56e3d42bebbd0ce9bb5a5d15d6f5ef" Mar 18 17:45:42.121920 master-0 kubenswrapper[7553]: I0318 17:45:42.120937 7553 scope.go:117] "RemoveContainer" containerID="0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265" Mar 18 17:45:42.121920 master-0 kubenswrapper[7553]: I0318 17:45:42.121569 7553 scope.go:117] "RemoveContainer" containerID="9a3c783faf4f4f653f053e2f216b7497912efa5f57b792ca0a2a383ce66b1a4d" Mar 18 17:45:42.122242 master-0 kubenswrapper[7553]: I0318 17:45:42.122150 7553 scope.go:117] "RemoveContainer" containerID="c19ffd46d4609e819ebbe2a9d485eeae31630e1764b880f8494b5531e650c602" Mar 18 17:45:42.125787 master-0 kubenswrapper[7553]: I0318 17:45:42.125556 7553 scope.go:117] "RemoveContainer" containerID="b999f5b1617525702f37002cc0c97c3e6d4fa7798646c662c89a3d3f27b32af1" Mar 18 17:45:42.125952 master-0 kubenswrapper[7553]: I0318 17:45:42.125884 7553 scope.go:117] "RemoveContainer" containerID="53db21ebf3fb977fb32a1c89dc78765092d7ecfd2055d2adfee610a3dffc6d29" Mar 18 17:45:42.126221 master-0 kubenswrapper[7553]: I0318 17:45:42.126127 7553 scope.go:117] "RemoveContainer" containerID="06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6" Mar 18 17:45:42.126425 master-0 kubenswrapper[7553]: I0318 
17:45:42.126386 7553 scope.go:117] "RemoveContainer" containerID="b1d92bc61050e9dcfcb1bd9705c2f2b94007d572857fef98c987e76770e1ad13" Mar 18 17:45:42.129150 master-0 kubenswrapper[7553]: I0318 17:45:42.126911 7553 scope.go:117] "RemoveContainer" containerID="45e3f6969d17c505bf035fc0bb6cc383a03ebc121f30574b4362e30470bf65e0" Mar 18 17:45:42.132752 master-0 kubenswrapper[7553]: I0318 17:45:42.132686 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 17:45:42.870776 master-0 kubenswrapper[7553]: I0318 17:45:42.870669 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-8vmsv_56cde2f7-1742-45d6-aa22-8270cfb424a7/manager/0.log" Mar 18 17:45:42.875153 master-0 kubenswrapper[7553]: I0318 17:45:42.874955 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-bk26c_efbcb147-d077-4749-9289-1682daccb657/manager/0.log" Mar 18 17:45:42.882227 master-0 kubenswrapper[7553]: I0318 17:45:42.882173 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-dxxbl_14a0661b-7bde-4e22-a9a9-5e3fb24df77f/network-operator/0.log" Mar 18 17:45:42.884248 master-0 kubenswrapper[7553]: I0318 17:45:42.884231 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-8sxdf_c087ce06-a16b-41f4-ba93-8fccdee09003/authentication-operator/1.log" Mar 18 17:45:42.887118 master-0 kubenswrapper[7553]: I0318 17:45:42.887040 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/0.log" Mar 18 17:45:42.888933 master-0 kubenswrapper[7553]: I0318 17:45:42.888916 7553 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-rws9x_0100a259-1358-45e8-8191-4e1f9a14ec89/etcd-operator/1.log" Mar 18 17:45:42.891116 master-0 kubenswrapper[7553]: I0318 17:45:42.891068 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-hpsbd_9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/openshift-controller-manager-operator/1.log" Mar 18 17:45:42.891802 master-0 kubenswrapper[7553]: I0318 17:45:42.891787 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-hpsbd_9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/openshift-controller-manager-operator/0.log" Mar 18 17:45:42.893749 master-0 kubenswrapper[7553]: I0318 17:45:42.893734 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/0.log" Mar 18 17:45:42.895496 master-0 kubenswrapper[7553]: E0318 17:45:42.895451 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:45:43.238849 master-0 kubenswrapper[7553]: I0318 17:45:43.238768 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_1a709ef9-91c0-4193-acb4-0594d02f554c/installer/0.log" Mar 18 17:45:43.238849 master-0 kubenswrapper[7553]: I0318 17:45:43.238865 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 17:45:43.380157 master-0 kubenswrapper[7553]: I0318 17:45:43.380080 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1a709ef9-91c0-4193-acb4-0594d02f554c-var-lock\") pod \"1a709ef9-91c0-4193-acb4-0594d02f554c\" (UID: \"1a709ef9-91c0-4193-acb4-0594d02f554c\") " Mar 18 17:45:43.380485 master-0 kubenswrapper[7553]: I0318 17:45:43.380201 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a709ef9-91c0-4193-acb4-0594d02f554c-kube-api-access\") pod \"1a709ef9-91c0-4193-acb4-0594d02f554c\" (UID: \"1a709ef9-91c0-4193-acb4-0594d02f554c\") " Mar 18 17:45:43.380485 master-0 kubenswrapper[7553]: I0318 17:45:43.380310 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a709ef9-91c0-4193-acb4-0594d02f554c-kubelet-dir\") pod \"1a709ef9-91c0-4193-acb4-0594d02f554c\" (UID: \"1a709ef9-91c0-4193-acb4-0594d02f554c\") " Mar 18 17:45:43.380679 master-0 kubenswrapper[7553]: I0318 17:45:43.380647 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a709ef9-91c0-4193-acb4-0594d02f554c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1a709ef9-91c0-4193-acb4-0594d02f554c" (UID: "1a709ef9-91c0-4193-acb4-0594d02f554c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:45:43.380742 master-0 kubenswrapper[7553]: I0318 17:45:43.380697 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a709ef9-91c0-4193-acb4-0594d02f554c-var-lock" (OuterVolumeSpecName: "var-lock") pod "1a709ef9-91c0-4193-acb4-0594d02f554c" (UID: "1a709ef9-91c0-4193-acb4-0594d02f554c"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:45:43.383638 master-0 kubenswrapper[7553]: I0318 17:45:43.383604 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a709ef9-91c0-4193-acb4-0594d02f554c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1a709ef9-91c0-4193-acb4-0594d02f554c" (UID: "1a709ef9-91c0-4193-acb4-0594d02f554c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:45:43.482385 master-0 kubenswrapper[7553]: I0318 17:45:43.482159 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a709ef9-91c0-4193-acb4-0594d02f554c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 17:45:43.482385 master-0 kubenswrapper[7553]: I0318 17:45:43.482229 7553 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1a709ef9-91c0-4193-acb4-0594d02f554c-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:45:43.482385 master-0 kubenswrapper[7553]: I0318 17:45:43.482245 7553 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1a709ef9-91c0-4193-acb4-0594d02f554c-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 17:45:43.915976 master-0 kubenswrapper[7553]: I0318 17:45:43.915887 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_1a709ef9-91c0-4193-acb4-0594d02f554c/installer/0.log" Mar 18 17:45:43.917994 master-0 kubenswrapper[7553]: I0318 17:45:43.917949 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 17:45:48.834412 master-0 kubenswrapper[7553]: E0318 17:45:48.834235 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Mar 18 17:45:51.048770 master-0 kubenswrapper[7553]: I0318 17:45:51.048649 7553 patch_prober.go:28] interesting pod/controller-manager-f5755b457-f4cbl container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Mar 18 17:45:51.048770 master-0 kubenswrapper[7553]: I0318 17:45:51.048723 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Mar 18 17:45:51.049984 master-0 kubenswrapper[7553]: I0318 17:45:51.048787 7553 patch_prober.go:28] interesting pod/controller-manager-f5755b457-f4cbl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Mar 18 17:45:51.049984 master-0 kubenswrapper[7553]: I0318 17:45:51.048947 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Mar 18 17:45:52.010830 master-0 
kubenswrapper[7553]: E0318 17:45:52.010729 7553 projected.go:194] Error preparing data for projected volume kube-api-access-2tskm for pod openshift-marketplace/redhat-operators-bgdql: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 17:45:52.010830 master-0 kubenswrapper[7553]: E0318 17:45:52.010843 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm podName:4460d3d3-c55f-4f1c-a623-e3feccf937bb nodeName:}" failed. No retries permitted until 2026-03-18 17:46:00.010817309 +0000 UTC m=+250.156651982 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2tskm" (UniqueName: "kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm") pod "redhat-operators-bgdql" (UID: "4460d3d3-c55f-4f1c-a623-e3feccf937bb") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 17:45:52.896574 master-0 kubenswrapper[7553]: E0318 17:45:52.896464 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:45:55.134348 master-0 kubenswrapper[7553]: E0318 17:45:55.134179 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 17:45:59.579534 master-0 kubenswrapper[7553]: E0318 17:45:59.579360 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{community-operators-fg8h6.189e0071e7e54021 openshift-marketplace 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-fg8h6,UID:7a9075c3-bb4f-4559-8454-5e097f334957,APIVersion:v1,ResourceVersion:7983,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/community-operator-index:v4.18\" in 27.351s (27.351s including waiting). Image size: 1223856348 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:43:12.452419617 +0000 UTC m=+82.598254290,LastTimestamp:2026-03-18 17:43:12.452419617 +0000 UTC m=+82.598254290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:46:00.038572 master-0 kubenswrapper[7553]: I0318 17:46:00.038348 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tskm\" (UniqueName: \"kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:46:01.048550 master-0 kubenswrapper[7553]: I0318 17:46:01.048433 7553 patch_prober.go:28] interesting pod/controller-manager-f5755b457-f4cbl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Mar 18 17:46:01.049629 master-0 kubenswrapper[7553]: I0318 17:46:01.048563 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Mar 18 
17:46:02.897072 master-0 kubenswrapper[7553]: E0318 17:46:02.896985 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:46:02.897072 master-0 kubenswrapper[7553]: E0318 17:46:02.897035 7553 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 17:46:05.078303 master-0 kubenswrapper[7553]: I0318 17:46:05.078225 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-b865698dc-5zj8r_c355c750-ae2f-49fa-9a16-8fb4f688853e/service-ca-operator/1.log" Mar 18 17:46:05.079070 master-0 kubenswrapper[7553]: I0318 17:46:05.078988 7553 generic.go:334] "Generic (PLEG): container finished" podID="c355c750-ae2f-49fa-9a16-8fb4f688853e" containerID="6663d9a012bba90e4d1f49e78a4578d42945dc0a251e88808d84607a0978912c" exitCode=255 Mar 18 17:46:05.081860 master-0 kubenswrapper[7553]: I0318 17:46:05.081836 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-t266j_0b9ff55a-73fb-473f-b406-1f8b6cffdb89/openshift-apiserver-operator/1.log" Mar 18 17:46:05.082628 master-0 kubenswrapper[7553]: I0318 17:46:05.082575 7553 generic.go:334] "Generic (PLEG): container finished" podID="0b9ff55a-73fb-473f-b406-1f8b6cffdb89" containerID="36a5d9d231da98f0f9e0dae16fa8c5d4e171fd401ed1a351ab236e19bff04107" exitCode=255 Mar 18 17:46:05.085239 master-0 kubenswrapper[7553]: I0318 17:46:05.085204 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-p72m2_26575d68-0488-4dfa-a5d0-5016e481dba6/kube-apiserver-operator/1.log" Mar 18 17:46:05.086017 master-0 kubenswrapper[7553]: I0318 17:46:05.085987 7553 generic.go:334] 
"Generic (PLEG): container finished" podID="26575d68-0488-4dfa-a5d0-5016e481dba6" containerID="f83b9c315c38279f3569813348a27c78beef46c5306eaadd08c03d8c08f384ba" exitCode=255 Mar 18 17:46:05.836625 master-0 kubenswrapper[7553]: E0318 17:46:05.836539 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 17:46:06.094607 master-0 kubenswrapper[7553]: I0318 17:46:06.094475 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-qk279_9b424d6c-7440-4c98-ac19-2d0642c696fd/kube-controller-manager-operator/1.log" Mar 18 17:46:06.095241 master-0 kubenswrapper[7553]: I0318 17:46:06.095057 7553 generic.go:334] "Generic (PLEG): container finished" podID="9b424d6c-7440-4c98-ac19-2d0642c696fd" containerID="6e9473f3d26cbd67b9497211546ab830ef4c483cd3c3fb1fa65b5b574de9d612" exitCode=255 Mar 18 17:46:11.048932 master-0 kubenswrapper[7553]: I0318 17:46:11.048828 7553 patch_prober.go:28] interesting pod/controller-manager-f5755b457-f4cbl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" start-of-body= Mar 18 17:46:11.050035 master-0 kubenswrapper[7553]: I0318 17:46:11.048934 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.51:8443/healthz\": dial tcp 10.128.0.51:8443: connect: connection refused" Mar 18 17:46:12.426157 master-0 kubenswrapper[7553]: I0318 17:46:12.426073 7553 
status_manager.go:851] "Failed to get status for pod" podUID="f7203a5f-0f67-48ca-a12b-be3b0ce7cbac" pod="openshift-marketplace/certified-operators-hgw2n" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods certified-operators-hgw2n)" Mar 18 17:46:13.148455 master-0 kubenswrapper[7553]: I0318 17:46:13.148372 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/1.log" Mar 18 17:46:13.149628 master-0 kubenswrapper[7553]: I0318 17:46:13.149562 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/0.log" Mar 18 17:46:13.149746 master-0 kubenswrapper[7553]: I0318 17:46:13.149652 7553 generic.go:334] "Generic (PLEG): container finished" podID="7d39d93e-9be3-47e1-a44e-be2d18b55446" containerID="c9ad4dfdc283133c8325a6400b93e7ca1b286a38ba01514e1ca540aa2f6676d0" exitCode=1 Mar 18 17:46:16.136981 master-0 kubenswrapper[7553]: E0318 17:46:16.136832 7553 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:46:16.137819 master-0 kubenswrapper[7553]: E0318 17:46:16.137162 7553 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.019s" Mar 18 17:46:16.137819 master-0 kubenswrapper[7553]: I0318 17:46:16.137359 7553 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:46:16.138948 master-0 kubenswrapper[7553]: I0318 17:46:16.138898 7553 scope.go:117] "RemoveContainer" 
containerID="6e9473f3d26cbd67b9497211546ab830ef4c483cd3c3fb1fa65b5b574de9d612" Mar 18 17:46:16.139158 master-0 kubenswrapper[7553]: I0318 17:46:16.139114 7553 scope.go:117] "RemoveContainer" containerID="f83b9c315c38279f3569813348a27c78beef46c5306eaadd08c03d8c08f384ba" Mar 18 17:46:16.139776 master-0 kubenswrapper[7553]: I0318 17:46:16.139733 7553 scope.go:117] "RemoveContainer" containerID="10ef0540ad110067bbacf0ae0c51fcdf81ed8a0e014b67c2675d03499d28dfab" Mar 18 17:46:16.140697 master-0 kubenswrapper[7553]: I0318 17:46:16.140546 7553 scope.go:117] "RemoveContainer" containerID="6663d9a012bba90e4d1f49e78a4578d42945dc0a251e88808d84607a0978912c" Mar 18 17:46:16.140977 master-0 kubenswrapper[7553]: I0318 17:46:16.140929 7553 scope.go:117] "RemoveContainer" containerID="b3ebfba10cf9d40bcef8b7b1707842cdd5329c0fa6c5118e3bdbf4e1fe51f08d" Mar 18 17:46:16.154938 master-0 kubenswrapper[7553]: I0318 17:46:16.154885 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 17:46:16.485840 master-0 kubenswrapper[7553]: E0318 17:46:16.485770 7553 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 17:46:16.485840 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(6885019f37c820851ddb0a5df18f90b84be30ee77e73da34e8e2c3a35e0d4ec4): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6885019f37c820851ddb0a5df18f90b84be30ee77e73da34e8e2c3a35e0d4ec4" Netns:"/var/run/netns/a6e8441e-d224-448a-b1c2-3969ff607975" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=6885019f37c820851ddb0a5df18f90b84be30ee77e73da34e8e2c3a35e0d4ec4;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6xmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:46:16.485840 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:46:16.485840 master-0 kubenswrapper[7553]: > Mar 18 17:46:16.486044 master-0 kubenswrapper[7553]: E0318 17:46:16.485880 7553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 17:46:16.486044 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(6885019f37c820851ddb0a5df18f90b84be30ee77e73da34e8e2c3a35e0d4ec4): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd 
(shim): CNI request failed with status 400: 'ContainerID:"6885019f37c820851ddb0a5df18f90b84be30ee77e73da34e8e2c3a35e0d4ec4" Netns:"/var/run/netns/a6e8441e-d224-448a-b1c2-3969ff607975" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=6885019f37c820851ddb0a5df18f90b84be30ee77e73da34e8e2c3a35e0d4ec4;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6xmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:46:16.486044 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:46:16.486044 master-0 kubenswrapper[7553]: > pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:46:16.486044 master-0 kubenswrapper[7553]: E0318 17:46:16.485913 7553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 17:46:16.486044 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(6885019f37c820851ddb0a5df18f90b84be30ee77e73da34e8e2c3a35e0d4ec4): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6885019f37c820851ddb0a5df18f90b84be30ee77e73da34e8e2c3a35e0d4ec4" Netns:"/var/run/netns/a6e8441e-d224-448a-b1c2-3969ff607975" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=6885019f37c820851ddb0a5df18f90b84be30ee77e73da34e8e2c3a35e0d4ec4;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6xmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:46:16.486044 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:46:16.486044 master-0 kubenswrapper[7553]: > pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:46:16.487123 
master-0 kubenswrapper[7553]: E0318 17:46:16.486002 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-6xmx4_openshift-marketplace(427e5ce9-f4b3-4f12-bb77-2b13775aa334)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-6xmx4_openshift-marketplace(427e5ce9-f4b3-4f12-bb77-2b13775aa334)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(6885019f37c820851ddb0a5df18f90b84be30ee77e73da34e8e2c3a35e0d4ec4): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"6885019f37c820851ddb0a5df18f90b84be30ee77e73da34e8e2c3a35e0d4ec4\\\" Netns:\\\"/var/run/netns/a6e8441e-d224-448a-b1c2-3969ff607975\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=6885019f37c820851ddb0a5df18f90b84be30ee77e73da34e8e2c3a35e0d4ec4;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6xmx4?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/redhat-marketplace-6xmx4" podUID="427e5ce9-f4b3-4f12-bb77-2b13775aa334" Mar 18 17:46:16.493218 master-0 kubenswrapper[7553]: E0318 17:46:16.493176 7553 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 17:46:16.493218 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(81a82de2a6eba1f53b1a8ce1733815bdae65dbfdbbcea26f8dcaf845d5e46bfb): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"81a82de2a6eba1f53b1a8ce1733815bdae65dbfdbbcea26f8dcaf845d5e46bfb" Netns:"/var/run/netns/ec69bf02-1789-4252-a210-f9fa8a9a1ef1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=81a82de2a6eba1f53b1a8ce1733815bdae65dbfdbbcea26f8dcaf845d5e46bfb;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: 
SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:46:16.493218 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:46:16.493218 master-0 kubenswrapper[7553]: > Mar 18 17:46:16.493736 master-0 kubenswrapper[7553]: E0318 17:46:16.493252 7553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 17:46:16.493736 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(81a82de2a6eba1f53b1a8ce1733815bdae65dbfdbbcea26f8dcaf845d5e46bfb): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"81a82de2a6eba1f53b1a8ce1733815bdae65dbfdbbcea26f8dcaf845d5e46bfb" Netns:"/var/run/netns/ec69bf02-1789-4252-a210-f9fa8a9a1ef1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=81a82de2a6eba1f53b1a8ce1733815bdae65dbfdbbcea26f8dcaf845d5e46bfb;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: 
Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:46:16.493736 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:46:16.493736 master-0 kubenswrapper[7553]: > pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:46:16.493736 master-0 kubenswrapper[7553]: E0318 17:46:16.493291 7553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 17:46:16.493736 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(81a82de2a6eba1f53b1a8ce1733815bdae65dbfdbbcea26f8dcaf845d5e46bfb): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"81a82de2a6eba1f53b1a8ce1733815bdae65dbfdbbcea26f8dcaf845d5e46bfb" Netns:"/var/run/netns/ec69bf02-1789-4252-a210-f9fa8a9a1ef1" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=81a82de2a6eba1f53b1a8ce1733815bdae65dbfdbbcea26f8dcaf845d5e46bfb;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:46:16.493736 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:46:16.493736 master-0 kubenswrapper[7553]: > pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:46:16.493736 master-0 kubenswrapper[7553]: E0318 17:46:16.493357 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-2-master-0_openshift-kube-controller-manager(37bbec19-22b8-411c-901b-d89c92b0bd4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-2-master-0_openshift-kube-controller-manager(37bbec19-22b8-411c-901b-d89c92b0bd4d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(81a82de2a6eba1f53b1a8ce1733815bdae65dbfdbbcea26f8dcaf845d5e46bfb): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"81a82de2a6eba1f53b1a8ce1733815bdae65dbfdbbcea26f8dcaf845d5e46bfb\\\" Netns:\\\"/var/run/netns/ec69bf02-1789-4252-a210-f9fa8a9a1ef1\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=81a82de2a6eba1f53b1a8ce1733815bdae65dbfdbbcea26f8dcaf845d5e46bfb;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="37bbec19-22b8-411c-901b-d89c92b0bd4d" Mar 18 17:46:17.190060 master-0 kubenswrapper[7553]: I0318 17:46:17.189981 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-b865698dc-5zj8r_c355c750-ae2f-49fa-9a16-8fb4f688853e/service-ca-operator/1.log" Mar 18 17:46:17.195222 master-0 kubenswrapper[7553]: I0318 17:46:17.195160 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-p72m2_26575d68-0488-4dfa-a5d0-5016e481dba6/kube-apiserver-operator/1.log" Mar 18 17:46:17.201872 master-0 kubenswrapper[7553]: I0318 17:46:17.201815 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-qk279_9b424d6c-7440-4c98-ac19-2d0642c696fd/kube-controller-manager-operator/1.log" Mar 18 17:46:17.202540 master-0 kubenswrapper[7553]: I0318 17:46:17.202492 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:46:17.202603 master-0 kubenswrapper[7553]: I0318 17:46:17.202550 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:46:17.203124 master-0 kubenswrapper[7553]: I0318 17:46:17.203090 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:46:17.203366 master-0 kubenswrapper[7553]: I0318 17:46:17.203339 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:46:22.839538 master-0 kubenswrapper[7553]: E0318 17:46:22.839253 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 17:46:23.139901 master-0 kubenswrapper[7553]: E0318 17:46:23.139484 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:46:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:46:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:46:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:46:13Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:90dc03981a3a33aadde1815815ad5068886ae546bd3162c9a87a99fcc07dbbce\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:c5a86acf841f8f125e428a1254b8c9f450ef07b62a7634bd4c30aa7bf4bd88c9\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1747322591},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"registry.redha
t.io/redhat/certified-operator-index@sha256:86833de447f25d1d0fc15ed5460c5068cc48b18b78b8108304c5b5fd1dff04ab\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a41181d28dfacb78bea3690c390c965912300bc666e6e31a54a9382dd0329758\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1251896539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:c3c12b935527854220bc939cf4b1e9ec5ea7b799b5530ba0609ec64f044c0a36\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dd33dff955c181beea0d08607a8c766e68ceb902bff0a014f4416b7a4a86a7c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1223856348},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1\\\"],\\\"sizeBytes\\\":918289953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"]
,\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113\\\"],\\\"sizeBytes\\\":484450894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97\\\"],\\\"sizeBytes\\\":470826739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":4650909
34},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62\\\"],\\\"sizeBytes\\\":458126937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe\\\"],\\\"sizeBytes\\\":456576198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739\\\"],\\\"sizeBytes\\\":448828620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:46:33.141828 master-0 kubenswrapper[7553]: E0318 17:46:33.141756 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)" Mar 18 17:46:33.583310 master-0 kubenswrapper[7553]: E0318 17:46:33.583014 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-marketplace-j4kft.189e0071e827dbe7 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-j4kft,UID:35595774-da4b-499c-bd6e-1ae5af144833,APIVersion:v1,ResourceVersion:8452,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 23.315s (23.315s including waiting). Image size: 1231028434 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:43:12.456784871 +0000 UTC m=+82.602619554,LastTimestamp:2026-03-18 17:43:12.456784871 +0000 UTC m=+82.602619554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:46:34.043513 master-0 kubenswrapper[7553]: E0318 17:46:34.043318 7553 projected.go:194] Error preparing data for projected volume kube-api-access-2tskm for pod openshift-marketplace/redhat-operators-bgdql: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 17:46:34.043513 master-0 kubenswrapper[7553]: E0318 17:46:34.043436 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm podName:4460d3d3-c55f-4f1c-a623-e3feccf937bb nodeName:}" failed. No retries permitted until 2026-03-18 17:46:50.043401289 +0000 UTC m=+300.189235992 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2tskm" (UniqueName: "kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm") pod "redhat-operators-bgdql" (UID: "4460d3d3-c55f-4f1c-a623-e3feccf937bb") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 17:46:35.325188 master-0 kubenswrapper[7553]: I0318 17:46:35.325125 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/1.log" Mar 18 17:46:35.326453 master-0 kubenswrapper[7553]: I0318 17:46:35.326416 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/0.log" Mar 18 17:46:35.326539 master-0 kubenswrapper[7553]: I0318 17:46:35.326467 7553 generic.go:334] "Generic (PLEG): container finished" podID="7e64a377-f497-4416-8f22-d5c7f52e0b65" containerID="02b88785366f3ca67c38ae3fa046b86fa7c95b60c40b090f66977aa12f1b78cb" exitCode=1 Mar 18 17:46:39.842671 master-0 kubenswrapper[7553]: E0318 17:46:39.842499 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 17:46:43.142158 master-0 kubenswrapper[7553]: E0318 17:46:43.142055 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:46:43.382627 master-0 kubenswrapper[7553]: I0318 17:46:43.382549 7553 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/1.log" Mar 18 17:46:43.384356 master-0 kubenswrapper[7553]: I0318 17:46:43.384318 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/0.log" Mar 18 17:46:43.384604 master-0 kubenswrapper[7553]: I0318 17:46:43.384560 7553 generic.go:334] "Generic (PLEG): container finished" podID="37b3753f-bf4f-4a9e-a4a8-d58296bada79" containerID="fd1baed9e081b7d0a16ba577c3675952403bd2f32763aeb842989654f0b5e115" exitCode=1 Mar 18 17:46:44.397538 master-0 kubenswrapper[7553]: I0318 17:46:44.397464 7553 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="13ecfe004522bd3f1997358f8d18d1d0444903e67db4326c279f978bc65fbe03" exitCode=1 Mar 18 17:46:50.075590 master-0 kubenswrapper[7553]: I0318 17:46:50.075508 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tskm\" (UniqueName: \"kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:46:50.158866 master-0 kubenswrapper[7553]: E0318 17:46:50.158796 7553 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:46:50.159167 master-0 kubenswrapper[7553]: E0318 17:46:50.159050 7553 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.022s" Mar 18 17:46:50.160656 master-0 kubenswrapper[7553]: I0318 17:46:50.160597 7553 scope.go:117] "RemoveContainer" 
containerID="c94a2985fe4117cc55a54b6163c21e92395f0ed45215b4c6fffd52daf31ec16f" Mar 18 17:46:50.168247 master-0 kubenswrapper[7553]: I0318 17:46:50.168151 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 17:46:53.143362 master-0 kubenswrapper[7553]: E0318 17:46:53.143233 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:46:56.844768 master-0 kubenswrapper[7553]: E0318 17:46:56.844491 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 17:47:00.507208 master-0 kubenswrapper[7553]: E0318 17:47:00.507028 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-2tskm], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-marketplace/redhat-operators-bgdql" podUID="4460d3d3-c55f-4f1c-a623-e3feccf937bb" Mar 18 17:47:00.528890 master-0 kubenswrapper[7553]: I0318 17:47:00.528816 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:47:03.143716 master-0 kubenswrapper[7553]: E0318 17:47:03.143647 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:47:03.144838 master-0 kubenswrapper[7553]: E0318 17:47:03.144418 7553 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 17:47:07.587204 master-0 kubenswrapper[7553]: E0318 17:47:07.586792 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-operators-jlj6j.189e0071e838819c openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-jlj6j,UID:e7a6e8f4-26e0-454c-bfbb-f97e72636bf6,APIVersion:v1,ResourceVersion:8497,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\" in 20.235s (20.235s including waiting). 
Image size: 1747322591 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:43:12.457875868 +0000 UTC m=+82.603710541,LastTimestamp:2026-03-18 17:43:12.457875868 +0000 UTC m=+82.603710541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:47:12.428211 master-0 kubenswrapper[7553]: I0318 17:47:12.428019 7553 status_manager.go:851] "Failed to get status for pod" podUID="22e8652f-ee18-4cff-bccb-ef413456685f" pod="openshift-kube-scheduler/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Mar 18 17:47:13.627295 master-0 kubenswrapper[7553]: I0318 17:47:13.627230 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-8sxdf_c087ce06-a16b-41f4-ba93-8fccdee09003/authentication-operator/2.log" Mar 18 17:47:13.627975 master-0 kubenswrapper[7553]: I0318 17:47:13.627772 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-8sxdf_c087ce06-a16b-41f4-ba93-8fccdee09003/authentication-operator/1.log" Mar 18 17:47:13.628384 master-0 kubenswrapper[7553]: I0318 17:47:13.628333 7553 generic.go:334] "Generic (PLEG): container finished" podID="c087ce06-a16b-41f4-ba93-8fccdee09003" containerID="5ef1ad7d9de4700ea957d656ff99f57f457c91f9b150fe99e8b36beb88ed9c42" exitCode=255 Mar 18 17:47:13.629968 master-0 kubenswrapper[7553]: I0318 17:47:13.629930 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-wlfj4_3a3a6c2c-78e7-41f3-acff-20173cbc012a/kube-scheduler-operator-container/1.log" Mar 18 17:47:13.630403 master-0 kubenswrapper[7553]: I0318 17:47:13.630364 7553 generic.go:334] "Generic (PLEG): 
container finished" podID="3a3a6c2c-78e7-41f3-acff-20173cbc012a" containerID="34db6c58d1d15ad2f0f08eec2a02536e2b02dd1b1c722e12e770c383ca33f635" exitCode=255 Mar 18 17:47:13.632608 master-0 kubenswrapper[7553]: I0318 17:47:13.632571 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-rws9x_0100a259-1358-45e8-8191-4e1f9a14ec89/etcd-operator/2.log" Mar 18 17:47:13.632972 master-0 kubenswrapper[7553]: I0318 17:47:13.632950 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-rws9x_0100a259-1358-45e8-8191-4e1f9a14ec89/etcd-operator/1.log" Mar 18 17:47:13.633558 master-0 kubenswrapper[7553]: I0318 17:47:13.633524 7553 generic.go:334] "Generic (PLEG): container finished" podID="0100a259-1358-45e8-8191-4e1f9a14ec89" containerID="8df0fa7291cab5e340fb319c595e0406033737475f352f9d19dfc2dafb7b328f" exitCode=255 Mar 18 17:47:13.635289 master-0 kubenswrapper[7553]: I0318 17:47:13.635246 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-dxxbl_14a0661b-7bde-4e22-a9a9-5e3fb24df77f/network-operator/1.log" Mar 18 17:47:13.635840 master-0 kubenswrapper[7553]: I0318 17:47:13.635806 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-dxxbl_14a0661b-7bde-4e22-a9a9-5e3fb24df77f/network-operator/0.log" Mar 18 17:47:13.635840 master-0 kubenswrapper[7553]: I0318 17:47:13.635840 7553 generic.go:334] "Generic (PLEG): container finished" podID="14a0661b-7bde-4e22-a9a9-5e3fb24df77f" containerID="a6ebfcc622558a7e545ac685d6d46ff4d61a7219bfcb2c7a5f468d332911df22" exitCode=255 Mar 18 17:47:13.637163 master-0 kubenswrapper[7553]: I0318 17:47:13.637126 7553 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-hpsbd_9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/openshift-controller-manager-operator/2.log" Mar 18 17:47:13.637656 master-0 kubenswrapper[7553]: I0318 17:47:13.637580 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-hpsbd_9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/openshift-controller-manager-operator/1.log" Mar 18 17:47:13.638344 master-0 kubenswrapper[7553]: I0318 17:47:13.638318 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-hpsbd_9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/openshift-controller-manager-operator/0.log" Mar 18 17:47:13.638418 master-0 kubenswrapper[7553]: I0318 17:47:13.638355 7553 generic.go:334] "Generic (PLEG): container finished" podID="9a240ab7-a1d5-4e9a-96f3-4590681cc7ed" containerID="c2fb973641e8d289ba0dd09efd68e97b47576d6bc93a3c1a721a673bea80ce81" exitCode=255 Mar 18 17:47:13.846491 master-0 kubenswrapper[7553]: E0318 17:47:13.846394 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 17:47:17.919874 master-0 kubenswrapper[7553]: E0318 17:47:17.919786 7553 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 17:47:17.919874 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(47910c2ef8e9ca81126e98939a4e16b047759f779e648e7779d085263bbdeeba): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin 
type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"47910c2ef8e9ca81126e98939a4e16b047759f779e648e7779d085263bbdeeba" Netns:"/var/run/netns/447017a2-f2de-406b-a6f2-8475eebfda3a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=47910c2ef8e9ca81126e98939a4e16b047759f779e648e7779d085263bbdeeba;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6xmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:47:17.919874 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:47:17.919874 master-0 kubenswrapper[7553]: > Mar 18 17:47:17.920680 master-0 kubenswrapper[7553]: E0318 17:47:17.919915 7553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 17:47:17.920680 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(47910c2ef8e9ca81126e98939a4e16b047759f779e648e7779d085263bbdeeba): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"47910c2ef8e9ca81126e98939a4e16b047759f779e648e7779d085263bbdeeba" Netns:"/var/run/netns/447017a2-f2de-406b-a6f2-8475eebfda3a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=47910c2ef8e9ca81126e98939a4e16b047759f779e648e7779d085263bbdeeba;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6xmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:47:17.920680 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:47:17.920680 master-0 kubenswrapper[7553]: > pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:47:17.920680 
master-0 kubenswrapper[7553]: E0318 17:47:17.919952 7553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 17:47:17.920680 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(47910c2ef8e9ca81126e98939a4e16b047759f779e648e7779d085263bbdeeba): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"47910c2ef8e9ca81126e98939a4e16b047759f779e648e7779d085263bbdeeba" Netns:"/var/run/netns/447017a2-f2de-406b-a6f2-8475eebfda3a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=47910c2ef8e9ca81126e98939a4e16b047759f779e648e7779d085263bbdeeba;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6xmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:47:17.920680 master-0 kubenswrapper[7553]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:47:17.920680 master-0 kubenswrapper[7553]: > pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:47:17.920680 master-0 kubenswrapper[7553]: E0318 17:47:17.920066 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"redhat-marketplace-6xmx4_openshift-marketplace(427e5ce9-f4b3-4f12-bb77-2b13775aa334)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"redhat-marketplace-6xmx4_openshift-marketplace(427e5ce9-f4b3-4f12-bb77-2b13775aa334)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-6xmx4_openshift-marketplace_427e5ce9-f4b3-4f12-bb77-2b13775aa334_0(47910c2ef8e9ca81126e98939a4e16b047759f779e648e7779d085263bbdeeba): error adding pod openshift-marketplace_redhat-marketplace-6xmx4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"47910c2ef8e9ca81126e98939a4e16b047759f779e648e7779d085263bbdeeba\\\" Netns:\\\"/var/run/netns/447017a2-f2de-406b-a6f2-8475eebfda3a\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-6xmx4;K8S_POD_INFRA_CONTAINER_ID=47910c2ef8e9ca81126e98939a4e16b047759f779e648e7779d085263bbdeeba;K8S_POD_UID=427e5ce9-f4b3-4f12-bb77-2b13775aa334\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-6xmx4] networking: Multus: [openshift-marketplace/redhat-marketplace-6xmx4/427e5ce9-f4b3-4f12-bb77-2b13775aa334]: error setting the 
networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-6xmx4 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-6xmx4?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/redhat-marketplace-6xmx4" podUID="427e5ce9-f4b3-4f12-bb77-2b13775aa334" Mar 18 17:47:18.019949 master-0 kubenswrapper[7553]: E0318 17:47:18.019865 7553 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 17:47:18.019949 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(8f46192a22db8798a46b3a9622d9ce977746f10258d6dac88e788d7ffab9b1eb): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8f46192a22db8798a46b3a9622d9ce977746f10258d6dac88e788d7ffab9b1eb" Netns:"/var/run/netns/523ade32-c29b-488e-8b31-f35a5d8f7c0b" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=8f46192a22db8798a46b3a9622d9ce977746f10258d6dac88e788d7ffab9b1eb;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:47:18.019949 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:47:18.019949 master-0 kubenswrapper[7553]: > Mar 18 17:47:18.020226 master-0 kubenswrapper[7553]: E0318 17:47:18.019982 7553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 17:47:18.020226 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(8f46192a22db8798a46b3a9622d9ce977746f10258d6dac88e788d7ffab9b1eb): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" 
name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8f46192a22db8798a46b3a9622d9ce977746f10258d6dac88e788d7ffab9b1eb" Netns:"/var/run/netns/523ade32-c29b-488e-8b31-f35a5d8f7c0b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=8f46192a22db8798a46b3a9622d9ce977746f10258d6dac88e788d7ffab9b1eb;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:47:18.020226 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:47:18.020226 master-0 kubenswrapper[7553]: > pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:47:18.020226 master-0 kubenswrapper[7553]: E0318 17:47:18.020008 7553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 17:47:18.020226 master-0 kubenswrapper[7553]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(8f46192a22db8798a46b3a9622d9ce977746f10258d6dac88e788d7ffab9b1eb): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8f46192a22db8798a46b3a9622d9ce977746f10258d6dac88e788d7ffab9b1eb" Netns:"/var/run/netns/523ade32-c29b-488e-8b31-f35a5d8f7c0b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=8f46192a22db8798a46b3a9622d9ce977746f10258d6dac88e788d7ffab9b1eb;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 17:47:18.020226 master-0 kubenswrapper[7553]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 17:47:18.020226 master-0 kubenswrapper[7553]: > 
pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:47:18.020226 master-0 kubenswrapper[7553]: E0318 17:47:18.020082 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-2-master-0_openshift-kube-controller-manager(37bbec19-22b8-411c-901b-d89c92b0bd4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-2-master-0_openshift-kube-controller-manager(37bbec19-22b8-411c-901b-d89c92b0bd4d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_37bbec19-22b8-411c-901b-d89c92b0bd4d_0(8f46192a22db8798a46b3a9622d9ce977746f10258d6dac88e788d7ffab9b1eb): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"8f46192a22db8798a46b3a9622d9ce977746f10258d6dac88e788d7ffab9b1eb\\\" Netns:\\\"/var/run/netns/523ade32-c29b-488e-8b31-f35a5d8f7c0b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=8f46192a22db8798a46b3a9622d9ce977746f10258d6dac88e788d7ffab9b1eb;K8S_POD_UID=37bbec19-22b8-411c-901b-d89c92b0bd4d\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/37bbec19-22b8-411c-901b-d89c92b0bd4d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get 
\\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="37bbec19-22b8-411c-901b-d89c92b0bd4d" Mar 18 17:47:23.200315 master-0 kubenswrapper[7553]: E0318 17:47:23.200002 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:47:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:47:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:47:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:47:13Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:90dc03981a3a33aadde1815815ad5068886ae546bd3162c9a87a99fcc07dbbce\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:c5a86acf841f8f125e428a1254b8c9f450ef07b62a7634bd4c30aa7bf4bd88c9\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1747322591},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:86833de447f25d1d0fc15ed5460c5068cc48b18b78b8108304c5b5fd1dff04ab\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a41181d28dfacb78bea3690c390c965912300bc666e6e31a54a9382dd0329758\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1251896539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:c3c12b935527854220bc939cf4b1e9ec5ea7b799b5530ba0609ec64f044c0a36\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dd33dff955c181beea0d08607a8c766e68ceb902bff0a014f4416b7a4a86a7c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1223856348},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1\\\"],\\\"sizeBytes\\\":918289953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\
":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f
4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113\\\"],\\\"sizeBytes\\\":484450894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97\\\"],\\\"sizeBytes\\\":470826739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62\\\"],\\\"sizeBytes\\\":458126937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe\\\"],\\\"sizeBytes\\\":456576198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739\\\"],\\\"sizeBytes\\\":448828620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:47:24.078486 master-0 kubenswrapper[7553]: E0318 17:47:24.078352 7553 projected.go:194] Error preparing data for projected volume kube-api-access-2tskm for pod openshift-marketplace/redhat-operators-bgdql: failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 17:47:24.078486 master-0 kubenswrapper[7553]: E0318 17:47:24.078443 7553 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm podName:4460d3d3-c55f-4f1c-a623-e3feccf937bb nodeName:}" failed. No retries permitted until 2026-03-18 17:47:56.078416545 +0000 UTC m=+366.224251218 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-2tskm" (UniqueName: "kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm") pod "redhat-operators-bgdql" (UID: "4460d3d3-c55f-4f1c-a623-e3feccf937bb") : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded Mar 18 17:47:24.172883 master-0 kubenswrapper[7553]: E0318 17:47:24.172811 7553 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 17:47:24.173210 master-0 kubenswrapper[7553]: E0318 17:47:24.173128 7553 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.014s" Mar 18 17:47:24.173266 master-0 kubenswrapper[7553]: I0318 17:47:24.173210 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:47:24.174147 master-0 kubenswrapper[7553]: I0318 17:47:24.174124 7553 scope.go:117] "RemoveContainer" containerID="8df0fa7291cab5e340fb319c595e0406033737475f352f9d19dfc2dafb7b328f" Mar 18 17:47:24.174450 master-0 kubenswrapper[7553]: E0318 17:47:24.174416 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=etcd-operator pod=etcd-operator-8544cbcf9c-rws9x_openshift-etcd-operator(0100a259-1358-45e8-8191-4e1f9a14ec89)\"" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" 
podUID="0100a259-1358-45e8-8191-4e1f9a14ec89" Mar 18 17:47:24.174653 master-0 kubenswrapper[7553]: I0318 17:47:24.174625 7553 scope.go:117] "RemoveContainer" containerID="a6ebfcc622558a7e545ac685d6d46ff4d61a7219bfcb2c7a5f468d332911df22" Mar 18 17:47:24.174983 master-0 kubenswrapper[7553]: I0318 17:47:24.174968 7553 scope.go:117] "RemoveContainer" containerID="c9ad4dfdc283133c8325a6400b93e7ca1b286a38ba01514e1ca540aa2f6676d0" Mar 18 17:47:24.175321 master-0 kubenswrapper[7553]: I0318 17:47:24.175298 7553 scope.go:117] "RemoveContainer" containerID="34db6c58d1d15ad2f0f08eec2a02536e2b02dd1b1c722e12e770c383ca33f635" Mar 18 17:47:24.176435 master-0 kubenswrapper[7553]: I0318 17:47:24.175933 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:47:24.176435 master-0 kubenswrapper[7553]: I0318 17:47:24.176115 7553 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" containerID="cri-o://b1d92bc61050e9dcfcb1bd9705c2f2b94007d572857fef98c987e76770e1ad13" Mar 18 17:47:24.176533 master-0 kubenswrapper[7553]: I0318 17:47:24.176450 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:47:24.176630 master-0 kubenswrapper[7553]: I0318 17:47:24.176579 7553 scope.go:117] "RemoveContainer" containerID="02b88785366f3ca67c38ae3fa046b86fa7c95b60c40b090f66977aa12f1b78cb" Mar 18 17:47:24.176789 master-0 kubenswrapper[7553]: I0318 17:47:24.176753 7553 scope.go:117] "RemoveContainer" containerID="c2fb973641e8d289ba0dd09efd68e97b47576d6bc93a3c1a721a673bea80ce81" Mar 18 17:47:24.176978 master-0 kubenswrapper[7553]: E0318 17:47:24.176952 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with 
CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-8c94f4649-hpsbd_openshift-controller-manager-operator(9a240ab7-a1d5-4e9a-96f3-4590681cc7ed)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" podUID="9a240ab7-a1d5-4e9a-96f3-4590681cc7ed" Mar 18 17:47:24.177473 master-0 kubenswrapper[7553]: I0318 17:47:24.177451 7553 scope.go:117] "RemoveContainer" containerID="5ef1ad7d9de4700ea957d656ff99f57f457c91f9b150fe99e8b36beb88ed9c42" Mar 18 17:47:24.177686 master-0 kubenswrapper[7553]: E0318 17:47:24.177654 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=authentication-operator pod=authentication-operator-5885bfd7f4-8sxdf_openshift-authentication-operator(c087ce06-a16b-41f4-ba93-8fccdee09003)\"" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" podUID="c087ce06-a16b-41f4-ba93-8fccdee09003" Mar 18 17:47:24.181316 master-0 kubenswrapper[7553]: I0318 17:47:24.181224 7553 scope.go:117] "RemoveContainer" containerID="36a5d9d231da98f0f9e0dae16fa8c5d4e171fd401ed1a351ab236e19bff04107" Mar 18 17:47:24.183208 master-0 kubenswrapper[7553]: I0318 17:47:24.183162 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:47:24.185508 master-0 kubenswrapper[7553]: I0318 17:47:24.185474 7553 scope.go:117] "RemoveContainer" containerID="fd1baed9e081b7d0a16ba577c3675952403bd2f32763aeb842989654f0b5e115" Mar 18 17:47:24.185697 master-0 kubenswrapper[7553]: I0318 17:47:24.185663 7553 scope.go:117] "RemoveContainer" containerID="13ecfe004522bd3f1997358f8d18d1d0444903e67db4326c279f978bc65fbe03" Mar 18 17:47:24.191382 master-0 kubenswrapper[7553]: I0318 17:47:24.191331 7553 mirror_client.go:130] 
"Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 17:47:24.732153 master-0 kubenswrapper[7553]: I0318 17:47:24.731954 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-dxxbl_14a0661b-7bde-4e22-a9a9-5e3fb24df77f/network-operator/1.log" Mar 18 17:47:24.733069 master-0 kubenswrapper[7553]: I0318 17:47:24.733028 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-dxxbl_14a0661b-7bde-4e22-a9a9-5e3fb24df77f/network-operator/0.log" Mar 18 17:47:24.736568 master-0 kubenswrapper[7553]: I0318 17:47:24.736517 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-t266j_0b9ff55a-73fb-473f-b406-1f8b6cffdb89/openshift-apiserver-operator/1.log" Mar 18 17:47:24.740746 master-0 kubenswrapper[7553]: I0318 17:47:24.740677 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/1.log" Mar 18 17:47:24.741561 master-0 kubenswrapper[7553]: I0318 17:47:24.741500 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/0.log" Mar 18 17:47:24.745053 master-0 kubenswrapper[7553]: I0318 17:47:24.745006 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/1.log" Mar 18 17:47:24.746466 master-0 kubenswrapper[7553]: I0318 17:47:24.746426 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/0.log" Mar 
18 17:47:24.749432 master-0 kubenswrapper[7553]: I0318 17:47:24.749384 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-wlfj4_3a3a6c2c-78e7-41f3-acff-20173cbc012a/kube-scheduler-operator-container/1.log" Mar 18 17:47:24.754847 master-0 kubenswrapper[7553]: I0318 17:47:24.754798 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/1.log" Mar 18 17:47:24.755765 master-0 kubenswrapper[7553]: I0318 17:47:24.755711 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/0.log" Mar 18 17:47:28.185579 master-0 kubenswrapper[7553]: I0318 17:47:28.185339 7553 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:47:28.787370 master-0 kubenswrapper[7553]: I0318 17:47:28.787205 7553 generic.go:334] "Generic (PLEG): container finished" podID="dba5f8d7-4d25-42b5-9c58-813221bf96bb" containerID="398454ad32431a1333f76c77a1b11d599119897614da05c5c31c8fb7c4b10bc1" exitCode=0 Mar 18 17:47:30.848176 master-0 kubenswrapper[7553]: E0318 17:47:30.847646 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 17:47:32.823022 master-0 kubenswrapper[7553]: I0318 17:47:32.822931 7553 
generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="6a3212eaacddf8a633d9171d89d86f056fc2eaf17af107aa2bced9e6262d3611" exitCode=0 Mar 18 17:47:33.200896 master-0 kubenswrapper[7553]: E0318 17:47:33.200674 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:47:34.594824 master-0 kubenswrapper[7553]: E0318 17:47:34.594734 7553 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd4c81e2_699b_4fdf_ac7d_1607cde6a8ab.slice/crio-conmon-a8a00810d795e748f7416b26291bd5e824cc9027054e6c1fabd83a4ff999def0.scope\": RecentStats: unable to find data in memory cache]" Mar 18 17:47:34.842512 master-0 kubenswrapper[7553]: I0318 17:47:34.842444 7553 generic.go:334] "Generic (PLEG): container finished" podID="fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab" containerID="a8a00810d795e748f7416b26291bd5e824cc9027054e6c1fabd83a4ff999def0" exitCode=0 Mar 18 17:47:37.226940 master-0 kubenswrapper[7553]: I0318 17:47:37.226795 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 18 17:47:37.433936 master-0 kubenswrapper[7553]: I0318 17:47:37.433832 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: 
connect: connection refused" Mar 18 17:47:38.186544 master-0 kubenswrapper[7553]: I0318 17:47:38.186401 7553 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:47:40.430920 master-0 kubenswrapper[7553]: E0318 17:47:40.430853 7553 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.248s" Mar 18 17:47:40.430920 master-0 kubenswrapper[7553]: I0318 17:47:40.430928 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:47:40.431668 master-0 kubenswrapper[7553]: I0318 17:47:40.431041 7553 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" containerID="cri-o://9a3c783faf4f4f653f053e2f216b7497912efa5f57b792ca0a2a383ce66b1a4d" Mar 18 17:47:40.431668 master-0 kubenswrapper[7553]: I0318 17:47:40.431056 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:47:40.431668 master-0 kubenswrapper[7553]: I0318 17:47:40.431080 7553 status_manager.go:317] "Container readiness changed for unknown container" pod="kube-system/bootstrap-kube-controller-manager-master-0" containerID="cri-o://06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6" Mar 18 17:47:40.431668 master-0 kubenswrapper[7553]: I0318 17:47:40.431093 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:47:40.431668 master-0 kubenswrapper[7553]: I0318 
17:47:40.431109 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:47:40.431668 master-0 kubenswrapper[7553]: I0318 17:47:40.431121 7553 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:47:40.431668 master-0 kubenswrapper[7553]: I0318 17:47:40.431251 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 18 17:47:40.431668 master-0 kubenswrapper[7553]: I0318 17:47:40.431554 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 18 17:47:40.431668 master-0 kubenswrapper[7553]: I0318 17:47:40.431616 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:47:40.431668 master-0 kubenswrapper[7553]: I0318 17:47:40.431646 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:47:40.435297 master-0 kubenswrapper[7553]: I0318 17:47:40.432166 7553 scope.go:117] "RemoveContainer" containerID="c2fb973641e8d289ba0dd09efd68e97b47576d6bc93a3c1a721a673bea80ce81" Mar 18 17:47:40.435297 master-0 kubenswrapper[7553]: I0318 17:47:40.433050 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:47:40.435297 master-0 kubenswrapper[7553]: I0318 17:47:40.433516 7553 scope.go:117] "RemoveContainer" containerID="6a3212eaacddf8a633d9171d89d86f056fc2eaf17af107aa2bced9e6262d3611" Mar 18 17:47:40.435297 master-0 kubenswrapper[7553]: I0318 17:47:40.433971 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:47:40.435297 master-0 kubenswrapper[7553]: I0318 17:47:40.434194 7553 scope.go:117] "RemoveContainer" containerID="8df0fa7291cab5e340fb319c595e0406033737475f352f9d19dfc2dafb7b328f" Mar 18 17:47:40.435297 master-0 kubenswrapper[7553]: I0318 17:47:40.435213 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:47:40.436807 master-0 kubenswrapper[7553]: I0318 17:47:40.436501 7553 scope.go:117] "RemoveContainer" containerID="5ef1ad7d9de4700ea957d656ff99f57f457c91f9b150fe99e8b36beb88ed9c42" Mar 18 17:47:40.449957 master-0 kubenswrapper[7553]: I0318 17:47:40.449911 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 17:47:40.458144 master-0 kubenswrapper[7553]: I0318 17:47:40.458089 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 17:47:40.458244 master-0 kubenswrapper[7553]: I0318 17:47:40.458169 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:47:40.458244 master-0 kubenswrapper[7553]: I0318 17:47:40.458189 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 18 17:47:40.458244 master-0 kubenswrapper[7553]: I0318 17:47:40.458203 7553 kubelet.go:2649] "Unable to find pod for mirror pod, 
skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="ca3a5703-b262-4aea-8052-323b91187d00" Mar 18 17:47:40.458244 master-0 kubenswrapper[7553]: I0318 17:47:40.458228 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 18 17:47:40.458244 master-0 kubenswrapper[7553]: I0318 17:47:40.458240 7553 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="ca3a5703-b262-4aea-8052-323b91187d00" Mar 18 17:47:40.458529 master-0 kubenswrapper[7553]: I0318 17:47:40.458319 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:47:40.458529 master-0 kubenswrapper[7553]: I0318 17:47:40.458357 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:47:40.458529 master-0 kubenswrapper[7553]: I0318 17:47:40.458374 7553 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:47:40.458529 master-0 kubenswrapper[7553]: I0318 17:47:40.458394 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:47:40.458529 master-0 kubenswrapper[7553]: I0318 17:47:40.458411 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" event={"ID":"c087ce06-a16b-41f4-ba93-8fccdee09003","Type":"ContainerDied","Data":"d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26"} Mar 18 17:47:40.458529 master-0 kubenswrapper[7553]: I0318 17:47:40.458461 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:47:40.458529 master-0 
kubenswrapper[7553]: I0318 17:47:40.458481 7553 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:47:40.458529 master-0 kubenswrapper[7553]: I0318 17:47:40.458517 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 18 17:47:40.458529 master-0 kubenswrapper[7553]: I0318 17:47:40.458530 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" event={"ID":"14a0661b-7bde-4e22-a9a9-5e3fb24df77f","Type":"ContainerDied","Data":"2832f3b1f879c3c460b4f098e3d00d5c7b56e3d42bebbd0ce9bb5a5d15d6f5ef"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458560 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458574 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" event={"ID":"3a3a6c2c-78e7-41f3-acff-20173cbc012a","Type":"ContainerDied","Data":"b999f5b1617525702f37002cc0c97c3e6d4fa7798646c662c89a3d3f27b32af1"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458597 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" event={"ID":"26575d68-0488-4dfa-a5d0-5016e481dba6","Type":"ContainerDied","Data":"51887b13aef82e88868eb337156320571051617c6952199181cd88bfeb560624"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458616 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" 
event={"ID":"9b424d6c-7440-4c98-ac19-2d0642c696fd","Type":"ContainerDied","Data":"5027239b16a33eaa242303aa483c7e285890e090e452f1d81a0bd3e82446b39c"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458635 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" event={"ID":"c087ce06-a16b-41f4-ba93-8fccdee09003","Type":"ContainerStarted","Data":"0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458650 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" event={"ID":"0100a259-1358-45e8-8191-4e1f9a14ec89","Type":"ContainerStarted","Data":"53db21ebf3fb977fb32a1c89dc78765092d7ecfd2055d2adfee610a3dffc6d29"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458667 7553 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458681 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458698 7553 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458714 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6"} Mar 18 17:47:40.458929 master-0 
kubenswrapper[7553]: I0318 17:47:40.458731 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"1a709ef9-91c0-4193-acb4-0594d02f554c","Type":"ContainerDied","Data":"484988d6e1e2aeba58f6749a644020e240b6e9ebd0d813d191a1e837c5837362"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458749 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" event={"ID":"7e64a377-f497-4416-8f22-d5c7f52e0b65","Type":"ContainerDied","Data":"e7ad342e7df9192dcebc5d4c70aab2c6f8db5ac6b23cbc811c6ae00a013dee5b"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458774 7553 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458788 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458810 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7s68k" event={"ID":"9875ed82-813c-483d-8471-8f9b74b774ee","Type":"ContainerStarted","Data":"d6933300553a8b09299df5113bf7cc86680b024bf430a5e7f3a091b6af9ab04a"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458826 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" event={"ID":"9b424d6c-7440-4c98-ac19-2d0642c696fd","Type":"ContainerStarted","Data":"6e9473f3d26cbd67b9497211546ab830ef4c483cd3c3fb1fa65b5b574de9d612"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458843 7553 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" event={"ID":"c355c750-ae2f-49fa-9a16-8fb4f688853e","Type":"ContainerStarted","Data":"6663d9a012bba90e4d1f49e78a4578d42945dc0a251e88808d84607a0978912c"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458861 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" event={"ID":"0b9ff55a-73fb-473f-b406-1f8b6cffdb89","Type":"ContainerStarted","Data":"36a5d9d231da98f0f9e0dae16fa8c5d4e171fd401ed1a351ab236e19bff04107"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458879 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" event={"ID":"7e64a377-f497-4416-8f22-d5c7f52e0b65","Type":"ContainerStarted","Data":"02b88785366f3ca67c38ae3fa046b86fa7c95b60c40b090f66977aa12f1b78cb"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458892 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" event={"ID":"26575d68-0488-4dfa-a5d0-5016e481dba6","Type":"ContainerStarted","Data":"f83b9c315c38279f3569813348a27c78beef46c5306eaadd08c03d8c08f384ba"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458907 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" event={"ID":"56cde2f7-1742-45d6-aa22-8270cfb424a7","Type":"ContainerDied","Data":"9a3c783faf4f4f653f053e2f216b7497912efa5f57b792ca0a2a383ce66b1a4d"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458921 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" 
event={"ID":"efbcb147-d077-4749-9289-1682daccb657","Type":"ContainerDied","Data":"b1d92bc61050e9dcfcb1bd9705c2f2b94007d572857fef98c987e76770e1ad13"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458937 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" event={"ID":"ce5831a6-5a8d-4cda-9299-5d86437bcab2","Type":"ContainerDied","Data":"c7f5d502541807602a24d2f39710701583fd6aae06267e2b4ee473df7bbfd13e"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458952 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" event={"ID":"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed","Type":"ContainerDied","Data":"c19ffd46d4609e819ebbe2a9d485eeae31630e1764b880f8494b5531e650c602"} Mar 18 17:47:40.458929 master-0 kubenswrapper[7553]: I0318 17:47:40.458971 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.458988 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" event={"ID":"7d39d93e-9be3-47e1-a44e-be2d18b55446","Type":"ContainerDied","Data":"579cc4ec2812b3f1711dac655f16b18685227d170cb9b02fdd9aad6faa3c3865"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459008 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" event={"ID":"ce5831a6-5a8d-4cda-9299-5d86437bcab2","Type":"ContainerStarted","Data":"fe07019623ba4afabfbf6551b7028ec6e274c77f8b3075096e77bb2fa5ab0961"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459022 7553 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" event={"ID":"7b94e08c-7944-445e-bfb7-6c7c14880c65","Type":"ContainerDied","Data":"10ef0540ad110067bbacf0ae0c51fcdf81ed8a0e014b67c2675d03499d28dfab"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459039 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" event={"ID":"37b3753f-bf4f-4a9e-a4a8-d58296bada79","Type":"ContainerDied","Data":"45e3f6969d17c505bf035fc0bb6cc383a03ebc121f30574b4362e30470bf65e0"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459058 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" event={"ID":"1db0a246-ca43-4e7c-b09e-e80218ae99b1","Type":"ContainerDied","Data":"b3ebfba10cf9d40bcef8b7b1707842cdd5329c0fa6c5118e3bdbf4e1fe51f08d"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459075 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" event={"ID":"0100a259-1358-45e8-8191-4e1f9a14ec89","Type":"ContainerDied","Data":"53db21ebf3fb977fb32a1c89dc78765092d7ecfd2055d2adfee610a3dffc6d29"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459093 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" event={"ID":"c087ce06-a16b-41f4-ba93-8fccdee09003","Type":"ContainerDied","Data":"0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459109 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" 
event={"ID":"56cde2f7-1742-45d6-aa22-8270cfb424a7","Type":"ContainerStarted","Data":"c455513aeeb0a865514a01932b50b8b6b2a2bfaa8dc030657e848c60ae487c2b"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459124 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" event={"ID":"efbcb147-d077-4749-9289-1682daccb657","Type":"ContainerStarted","Data":"e2d7bd945ff62383c4a337619ff4a53c695923ff63d0ce2cd5a9cb7b46a58867"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459138 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"13ecfe004522bd3f1997358f8d18d1d0444903e67db4326c279f978bc65fbe03"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459152 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" event={"ID":"14a0661b-7bde-4e22-a9a9-5e3fb24df77f","Type":"ContainerStarted","Data":"a6ebfcc622558a7e545ac685d6d46ff4d61a7219bfcb2c7a5f468d332911df22"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459165 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" event={"ID":"c087ce06-a16b-41f4-ba93-8fccdee09003","Type":"ContainerStarted","Data":"5ef1ad7d9de4700ea957d656ff99f57f457c91f9b150fe99e8b36beb88ed9c42"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459180 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" event={"ID":"7d39d93e-9be3-47e1-a44e-be2d18b55446","Type":"ContainerStarted","Data":"c9ad4dfdc283133c8325a6400b93e7ca1b286a38ba01514e1ca540aa2f6676d0"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 
17:47:40.459192 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" event={"ID":"0100a259-1358-45e8-8191-4e1f9a14ec89","Type":"ContainerStarted","Data":"8df0fa7291cab5e340fb319c595e0406033737475f352f9d19dfc2dafb7b328f"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459205 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" event={"ID":"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed","Type":"ContainerStarted","Data":"c2fb973641e8d289ba0dd09efd68e97b47576d6bc93a3c1a721a673bea80ce81"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459218 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" event={"ID":"37b3753f-bf4f-4a9e-a4a8-d58296bada79","Type":"ContainerStarted","Data":"fd1baed9e081b7d0a16ba577c3675952403bd2f32763aeb842989654f0b5e115"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459233 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" event={"ID":"3a3a6c2c-78e7-41f3-acff-20173cbc012a","Type":"ContainerStarted","Data":"34db6c58d1d15ad2f0f08eec2a02536e2b02dd1b1c722e12e770c383ca33f635"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459248 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"1a709ef9-91c0-4193-acb4-0594d02f554c","Type":"ContainerDied","Data":"9e890c0b05b9ab9a66059688757a1f43723c4593388d1175f31db9b7e7ec8883"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459262 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e890c0b05b9ab9a66059688757a1f43723c4593388d1175f31db9b7e7ec8883" Mar 18 17:47:40.460028 master-0 
kubenswrapper[7553]: I0318 17:47:40.459297 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459312 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459327 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459340 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459353 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459365 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" event={"ID":"c355c750-ae2f-49fa-9a16-8fb4f688853e","Type":"ContainerDied","Data":"6663d9a012bba90e4d1f49e78a4578d42945dc0a251e88808d84607a0978912c"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459382 7553 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" event={"ID":"0b9ff55a-73fb-473f-b406-1f8b6cffdb89","Type":"ContainerDied","Data":"36a5d9d231da98f0f9e0dae16fa8c5d4e171fd401ed1a351ab236e19bff04107"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459398 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" event={"ID":"26575d68-0488-4dfa-a5d0-5016e481dba6","Type":"ContainerDied","Data":"f83b9c315c38279f3569813348a27c78beef46c5306eaadd08c03d8c08f384ba"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459412 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" event={"ID":"9b424d6c-7440-4c98-ac19-2d0642c696fd","Type":"ContainerDied","Data":"6e9473f3d26cbd67b9497211546ab830ef4c483cd3c3fb1fa65b5b574de9d612"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459428 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" event={"ID":"7d39d93e-9be3-47e1-a44e-be2d18b55446","Type":"ContainerDied","Data":"c9ad4dfdc283133c8325a6400b93e7ca1b286a38ba01514e1ca540aa2f6676d0"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459442 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" event={"ID":"7b94e08c-7944-445e-bfb7-6c7c14880c65","Type":"ContainerStarted","Data":"94d941e21f1ab13a20fa6356fcedca0030606e420e596dcef8825d0ce5bcf87a"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459453 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" 
event={"ID":"c355c750-ae2f-49fa-9a16-8fb4f688853e","Type":"ContainerStarted","Data":"ebe23adafc49efd64f86fbe53ef0b2cf71f92ee87bec64f94def1d1fde4df324"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459465 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" event={"ID":"26575d68-0488-4dfa-a5d0-5016e481dba6","Type":"ContainerStarted","Data":"6469fb4bb68705329572e917ffd53c8d7d98a360f3801392b01dd10ca152c1c0"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459476 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" event={"ID":"1db0a246-ca43-4e7c-b09e-e80218ae99b1","Type":"ContainerStarted","Data":"a1b33ace751148a425bdf00f07398d55bcc2dc83ff83d8278e7851ed219c1db7"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459489 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" event={"ID":"9b424d6c-7440-4c98-ac19-2d0642c696fd","Type":"ContainerStarted","Data":"1fd744dbcfad29e0a4211253fc988f9ef696171ed5032f9e61793918d136f6fa"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459501 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" event={"ID":"7e64a377-f497-4416-8f22-d5c7f52e0b65","Type":"ContainerDied","Data":"02b88785366f3ca67c38ae3fa046b86fa7c95b60c40b090f66977aa12f1b78cb"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459514 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" event={"ID":"37b3753f-bf4f-4a9e-a4a8-d58296bada79","Type":"ContainerDied","Data":"fd1baed9e081b7d0a16ba577c3675952403bd2f32763aeb842989654f0b5e115"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: 
I0318 17:47:40.459527 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"13ecfe004522bd3f1997358f8d18d1d0444903e67db4326c279f978bc65fbe03"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459545 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" event={"ID":"c087ce06-a16b-41f4-ba93-8fccdee09003","Type":"ContainerDied","Data":"5ef1ad7d9de4700ea957d656ff99f57f457c91f9b150fe99e8b36beb88ed9c42"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459558 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" event={"ID":"3a3a6c2c-78e7-41f3-acff-20173cbc012a","Type":"ContainerDied","Data":"34db6c58d1d15ad2f0f08eec2a02536e2b02dd1b1c722e12e770c383ca33f635"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459571 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" event={"ID":"0100a259-1358-45e8-8191-4e1f9a14ec89","Type":"ContainerDied","Data":"8df0fa7291cab5e340fb319c595e0406033737475f352f9d19dfc2dafb7b328f"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459583 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" event={"ID":"14a0661b-7bde-4e22-a9a9-5e3fb24df77f","Type":"ContainerDied","Data":"a6ebfcc622558a7e545ac685d6d46ff4d61a7219bfcb2c7a5f468d332911df22"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459596 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" 
event={"ID":"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed","Type":"ContainerDied","Data":"c2fb973641e8d289ba0dd09efd68e97b47576d6bc93a3c1a721a673bea80ce81"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459609 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"79efc2b057b97c5f729a0b5a1fa1420cba96b5301fd6279190cf494ebd7bf5f8"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459621 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" event={"ID":"14a0661b-7bde-4e22-a9a9-5e3fb24df77f","Type":"ContainerStarted","Data":"34974a400194e4abf23a570b3bcaf62e9c0cf2c55d12e3ded0eb4a493b533868"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459632 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" event={"ID":"0b9ff55a-73fb-473f-b406-1f8b6cffdb89","Type":"ContainerStarted","Data":"208f151f73d2054e8fc1e7bad5a7840184b6f1a99cd1c642769a09479cee5ec9"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459646 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" event={"ID":"7d39d93e-9be3-47e1-a44e-be2d18b55446","Type":"ContainerStarted","Data":"9da45c50b62258b35b6fa6e25a88e2e045b13f36511821a9d8c318812731dc4c"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459658 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" event={"ID":"37b3753f-bf4f-4a9e-a4a8-d58296bada79","Type":"ContainerStarted","Data":"80c5f3220064c232d03dadbc88c1a47282c553de28295165f3109e332825aa0f"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459670 7553 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" event={"ID":"3a3a6c2c-78e7-41f3-acff-20173cbc012a","Type":"ContainerStarted","Data":"668ffd9218f263f73d241599acfd6811f0e9302607d1e13913220907ac048330"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459682 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" event={"ID":"7e64a377-f497-4416-8f22-d5c7f52e0b65","Type":"ContainerStarted","Data":"7ee6b0cddd340e9ac4b37b541379d515766ed427e5cb173553e9eea6ace8c5a9"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459693 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz" event={"ID":"dba5f8d7-4d25-42b5-9c58-813221bf96bb","Type":"ContainerDied","Data":"398454ad32431a1333f76c77a1b11d599119897614da05c5c31c8fb7c4b10bc1"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459710 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"6a3212eaacddf8a633d9171d89d86f056fc2eaf17af107aa2bced9e6262d3611"} Mar 18 17:47:40.460028 master-0 kubenswrapper[7553]: I0318 17:47:40.459723 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" event={"ID":"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab","Type":"ContainerDied","Data":"a8a00810d795e748f7416b26291bd5e824cc9027054e6c1fabd83a4ff999def0"} Mar 18 17:47:40.462978 master-0 kubenswrapper[7553]: I0318 17:47:40.460330 7553 scope.go:117] "RemoveContainer" containerID="a8a00810d795e748f7416b26291bd5e824cc9027054e6c1fabd83a4ff999def0" Mar 18 17:47:40.462978 master-0 kubenswrapper[7553]: I0318 17:47:40.462218 7553 scope.go:117] "RemoveContainer" 
containerID="0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265" Mar 18 17:47:40.464215 master-0 kubenswrapper[7553]: I0318 17:47:40.464144 7553 scope.go:117] "RemoveContainer" containerID="398454ad32431a1333f76c77a1b11d599119897614da05c5c31c8fb7c4b10bc1" Mar 18 17:47:40.581237 master-0 kubenswrapper[7553]: I0318 17:47:40.574704 7553 scope.go:117] "RemoveContainer" containerID="d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26" Mar 18 17:47:40.628336 master-0 kubenswrapper[7553]: I0318 17:47:40.627551 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 17:47:40.632360 master-0 kubenswrapper[7553]: I0318 17:47:40.630901 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 17:47:40.685022 master-0 kubenswrapper[7553]: I0318 17:47:40.681382 7553 scope.go:117] "RemoveContainer" containerID="2832f3b1f879c3c460b4f098e3d00d5c7b56e3d42bebbd0ce9bb5a5d15d6f5ef" Mar 18 17:47:40.717456 master-0 kubenswrapper[7553]: I0318 17:47:40.716913 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4kft"] Mar 18 17:47:40.730412 master-0 kubenswrapper[7553]: I0318 17:47:40.729974 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j4kft"] Mar 18 17:47:40.747085 master-0 kubenswrapper[7553]: I0318 17:47:40.746866 7553 scope.go:117] "RemoveContainer" containerID="b999f5b1617525702f37002cc0c97c3e6d4fa7798646c662c89a3d3f27b32af1" Mar 18 17:47:40.805479 master-0 kubenswrapper[7553]: I0318 17:47:40.794756 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vbglp" podStartSLOduration=268.980222167 podStartE2EDuration="4m49.794730572s" podCreationTimestamp="2026-03-18 17:42:51 +0000 UTC" firstStartedPulling="2026-03-18 17:42:53.252081194 +0000 UTC m=+63.397915867" 
lastFinishedPulling="2026-03-18 17:43:14.066589589 +0000 UTC m=+84.212424272" observedRunningTime="2026-03-18 17:47:40.794114079 +0000 UTC m=+350.939948752" watchObservedRunningTime="2026-03-18 17:47:40.794730572 +0000 UTC m=+350.940565245" Mar 18 17:47:40.806360 master-0 kubenswrapper[7553]: I0318 17:47:40.805641 7553 scope.go:117] "RemoveContainer" containerID="51887b13aef82e88868eb337156320571051617c6952199181cd88bfeb560624" Mar 18 17:47:40.858304 master-0 kubenswrapper[7553]: I0318 17:47:40.851447 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fg8h6"] Mar 18 17:47:40.858304 master-0 kubenswrapper[7553]: I0318 17:47:40.853329 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fg8h6"] Mar 18 17:47:40.883559 master-0 kubenswrapper[7553]: I0318 17:47:40.883535 7553 scope.go:117] "RemoveContainer" containerID="5027239b16a33eaa242303aa483c7e285890e090e452f1d81a0bd3e82446b39c" Mar 18 17:47:40.916033 master-0 kubenswrapper[7553]: I0318 17:47:40.915980 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz" event={"ID":"dba5f8d7-4d25-42b5-9c58-813221bf96bb","Type":"ContainerStarted","Data":"be26a39de97522dd45a7740dc6545a7f3aea6dead3d0f7df86c4409b11af668a"} Mar 18 17:47:40.927437 master-0 kubenswrapper[7553]: I0318 17:47:40.927393 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-p72m2_26575d68-0488-4dfa-a5d0-5016e481dba6/kube-apiserver-operator/1.log" Mar 18 17:47:40.959317 master-0 kubenswrapper[7553]: I0318 17:47:40.950424 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jlj6j"] Mar 18 17:47:40.959317 master-0 kubenswrapper[7553]: I0318 17:47:40.953066 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 18 17:47:40.959317 master-0 kubenswrapper[7553]: I0318 17:47:40.955465 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jlj6j"] Mar 18 17:47:40.966767 master-0 kubenswrapper[7553]: I0318 17:47:40.964607 7553 scope.go:117] "RemoveContainer" containerID="06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6" Mar 18 17:47:40.966767 master-0 kubenswrapper[7553]: I0318 17:47:40.965042 7553 generic.go:334] "Generic (PLEG): container finished" podID="cb522b02-0b93-4711-9041-566daa06b95a" containerID="399bf3be19e41993ba7e873949068ec6c32cf9d08ee1196692654605dc3ddd51" exitCode=0 Mar 18 17:47:40.966767 master-0 kubenswrapper[7553]: I0318 17:47:40.965098 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" event={"ID":"cb522b02-0b93-4711-9041-566daa06b95a","Type":"ContainerDied","Data":"399bf3be19e41993ba7e873949068ec6c32cf9d08ee1196692654605dc3ddd51"} Mar 18 17:47:40.966767 master-0 kubenswrapper[7553]: I0318 17:47:40.965713 7553 scope.go:117] "RemoveContainer" containerID="399bf3be19e41993ba7e873949068ec6c32cf9d08ee1196692654605dc3ddd51" Mar 18 17:47:40.966767 master-0 kubenswrapper[7553]: E0318 17:47:40.965911 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-95bf4f4d-q27fh_openshift-config-operator(cb522b02-0b93-4711-9041-566daa06b95a)\"" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" podUID="cb522b02-0b93-4711-9041-566daa06b95a" Mar 18 17:47:40.990076 master-0 kubenswrapper[7553]: I0318 17:47:40.990041 7553 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-dxxbl_14a0661b-7bde-4e22-a9a9-5e3fb24df77f/network-operator/1.log" Mar 18 17:47:41.012934 master-0 kubenswrapper[7553]: I0318 17:47:41.010829 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8485d" podStartSLOduration=269.137399537 podStartE2EDuration="4m50.010806775s" podCreationTimestamp="2026-03-18 17:42:51 +0000 UTC" firstStartedPulling="2026-03-18 17:42:53.255793315 +0000 UTC m=+63.401627978" lastFinishedPulling="2026-03-18 17:43:14.129200533 +0000 UTC m=+84.275035216" observedRunningTime="2026-03-18 17:47:41.006073225 +0000 UTC m=+351.151907908" watchObservedRunningTime="2026-03-18 17:47:41.010806775 +0000 UTC m=+351.156641448" Mar 18 17:47:41.016575 master-0 kubenswrapper[7553]: I0318 17:47:41.016540 7553 scope.go:117] "RemoveContainer" containerID="91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91" Mar 18 17:47:41.016766 master-0 kubenswrapper[7553]: I0318 17:47:41.016743 7553 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:47:41.030423 master-0 kubenswrapper[7553]: I0318 17:47:41.029546 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" event={"ID":"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab","Type":"ContainerStarted","Data":"4bf8937e157502d33f2fabf48821c58eb90423b8d68fd9823fb4e7fd5bddb0b9"} Mar 18 17:47:41.057931 master-0 kubenswrapper[7553]: I0318 17:47:41.057889 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:47:41.071360 master-0 kubenswrapper[7553]: I0318 17:47:41.070003 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 17:47:41.072242 
master-0 kubenswrapper[7553]: I0318 17:47:41.072209 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-hpsbd_9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/openshift-controller-manager-operator/2.log" Mar 18 17:47:41.072770 master-0 kubenswrapper[7553]: I0318 17:47:41.072743 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-hpsbd_9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/openshift-controller-manager-operator/1.log" Mar 18 17:47:41.074502 master-0 kubenswrapper[7553]: I0318 17:47:41.073858 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-hpsbd_9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/openshift-controller-manager-operator/0.log" Mar 18 17:47:41.074502 master-0 kubenswrapper[7553]: I0318 17:47:41.073948 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" event={"ID":"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed","Type":"ContainerStarted","Data":"f73785a1635196cac38cc4bd53a22ffc286467a7b93071487c4e45283cb55722"} Mar 18 17:47:41.081659 master-0 kubenswrapper[7553]: I0318 17:47:41.081613 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-8sxdf_c087ce06-a16b-41f4-ba93-8fccdee09003/authentication-operator/2.log" Mar 18 17:47:41.081837 master-0 kubenswrapper[7553]: I0318 17:47:41.081790 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" event={"ID":"c087ce06-a16b-41f4-ba93-8fccdee09003","Type":"ContainerStarted","Data":"bc1a4bd14f358c13bdab303413973d3ac603a28259c3850ba80328bdb2347c79"} Mar 18 17:47:41.088559 master-0 kubenswrapper[7553]: I0318 
17:47:41.088507 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-wlfj4_3a3a6c2c-78e7-41f3-acff-20173cbc012a/kube-scheduler-operator-container/1.log" Mar 18 17:47:41.101290 master-0 kubenswrapper[7553]: I0318 17:47:41.098492 7553 scope.go:117] "RemoveContainer" containerID="f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e" Mar 18 17:47:41.110389 master-0 kubenswrapper[7553]: E0318 17:47:41.110349 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 18 17:47:41.114669 master-0 kubenswrapper[7553]: I0318 17:47:41.112460 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 18 17:47:41.131621 master-0 kubenswrapper[7553]: I0318 17:47:41.131586 7553 scope.go:117] "RemoveContainer" containerID="e7ad342e7df9192dcebc5d4c70aab2c6f8db5ac6b23cbc811c6ae00a013dee5b" Mar 18 17:47:41.148875 master-0 kubenswrapper[7553]: I0318 17:47:41.148146 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xmx4"] Mar 18 17:47:41.156184 master-0 kubenswrapper[7553]: I0318 17:47:41.156138 7553 scope.go:117] "RemoveContainer" containerID="c19ffd46d4609e819ebbe2a9d485eeae31630e1764b880f8494b5531e650c602" Mar 18 17:47:41.202249 master-0 kubenswrapper[7553]: I0318 17:47:41.201569 7553 scope.go:117] "RemoveContainer" containerID="61d9d3ba71cb2cabd3457558bd5286207ddf90062735fd7732ad59d0bbbaa150" Mar 18 17:47:41.226622 master-0 kubenswrapper[7553]: I0318 17:47:41.226572 7553 scope.go:117] "RemoveContainer" containerID="06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6" Mar 18 17:47:41.227468 master-0 kubenswrapper[7553]: E0318 17:47:41.227421 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6\": container with ID starting with 06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6 not found: ID does not exist" containerID="06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6" Mar 18 17:47:41.227530 master-0 kubenswrapper[7553]: I0318 17:47:41.227480 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6"} err="failed to get container status \"06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6\": rpc error: code = NotFound desc = could not find container \"06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6\": container with ID starting with 06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6 not found: ID does not exist" Mar 18 17:47:41.227572 master-0 kubenswrapper[7553]: I0318 17:47:41.227529 7553 scope.go:117] "RemoveContainer" containerID="91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91" Mar 18 17:47:41.228001 master-0 kubenswrapper[7553]: E0318 17:47:41.227959 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91\": container with ID starting with 91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91 not found: ID does not exist" containerID="91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91" Mar 18 17:47:41.228055 master-0 kubenswrapper[7553]: I0318 17:47:41.228013 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91"} err="failed to get container status \"91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91\": rpc error: code = NotFound desc = could not find container 
\"91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91\": container with ID starting with 91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91 not found: ID does not exist" Mar 18 17:47:41.228055 master-0 kubenswrapper[7553]: I0318 17:47:41.228037 7553 scope.go:117] "RemoveContainer" containerID="f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e" Mar 18 17:47:41.228607 master-0 kubenswrapper[7553]: E0318 17:47:41.228579 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e\": container with ID starting with f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e not found: ID does not exist" containerID="f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e" Mar 18 17:47:41.228783 master-0 kubenswrapper[7553]: I0318 17:47:41.228757 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e"} err="failed to get container status \"f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e\": rpc error: code = NotFound desc = could not find container \"f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e\": container with ID starting with f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e not found: ID does not exist" Mar 18 17:47:41.228783 master-0 kubenswrapper[7553]: I0318 17:47:41.228780 7553 scope.go:117] "RemoveContainer" containerID="579cc4ec2812b3f1711dac655f16b18685227d170cb9b02fdd9aad6faa3c3865" Mar 18 17:47:41.250601 master-0 kubenswrapper[7553]: I0318 17:47:41.250547 7553 scope.go:117] "RemoveContainer" containerID="45e3f6969d17c505bf035fc0bb6cc383a03ebc121f30574b4362e30470bf65e0" Mar 18 17:47:41.278403 master-0 kubenswrapper[7553]: I0318 17:47:41.277374 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-hgw2n"] Mar 18 17:47:41.279103 master-0 kubenswrapper[7553]: I0318 17:47:41.279062 7553 scope.go:117] "RemoveContainer" containerID="53db21ebf3fb977fb32a1c89dc78765092d7ecfd2055d2adfee610a3dffc6d29" Mar 18 17:47:41.289891 master-0 kubenswrapper[7553]: I0318 17:47:41.289811 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hgw2n"] Mar 18 17:47:41.368349 master-0 kubenswrapper[7553]: I0318 17:47:41.368298 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 17:47:41.369800 master-0 kubenswrapper[7553]: I0318 17:47:41.369773 7553 scope.go:117] "RemoveContainer" containerID="958f393f34a137d2d87755401a3809cbe07ffd8110f371434c3fc67267936608" Mar 18 17:47:41.373817 master-0 kubenswrapper[7553]: I0318 17:47:41.373757 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 17:47:41.437947 master-0 kubenswrapper[7553]: I0318 17:47:41.437900 7553 scope.go:117] "RemoveContainer" containerID="0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265" Mar 18 17:47:41.438454 master-0 kubenswrapper[7553]: E0318 17:47:41.438420 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265\": container with ID starting with 0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265 not found: ID does not exist" containerID="0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265" Mar 18 17:47:41.438526 master-0 kubenswrapper[7553]: I0318 17:47:41.438471 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265"} err="failed to get container status 
\"0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265\": rpc error: code = NotFound desc = could not find container \"0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265\": container with ID starting with 0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265 not found: ID does not exist" Mar 18 17:47:41.438526 master-0 kubenswrapper[7553]: I0318 17:47:41.438504 7553 scope.go:117] "RemoveContainer" containerID="d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26" Mar 18 17:47:41.438798 master-0 kubenswrapper[7553]: E0318 17:47:41.438770 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26\": container with ID starting with d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26 not found: ID does not exist" containerID="d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26" Mar 18 17:47:41.438878 master-0 kubenswrapper[7553]: I0318 17:47:41.438803 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26"} err="failed to get container status \"d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26\": rpc error: code = NotFound desc = could not find container \"d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26\": container with ID starting with d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26 not found: ID does not exist" Mar 18 17:47:41.438878 master-0 kubenswrapper[7553]: I0318 17:47:41.438825 7553 scope.go:117] "RemoveContainer" containerID="0cb61f4df91a50839abfb90676637f2a5c84478782eb2749acec5427cc366219" Mar 18 17:47:41.533321 master-0 kubenswrapper[7553]: I0318 17:47:41.533293 7553 scope.go:117] "RemoveContainer" containerID="fa4790d4c10a7e1c45ffad9596658e2a3e44e654967b539ab7d40f5e263966e8" Mar 18 
17:47:41.554107 master-0 kubenswrapper[7553]: I0318 17:47:41.554020 7553 scope.go:117] "RemoveContainer" containerID="51887b13aef82e88868eb337156320571051617c6952199181cd88bfeb560624" Mar 18 17:47:41.554649 master-0 kubenswrapper[7553]: E0318 17:47:41.554596 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51887b13aef82e88868eb337156320571051617c6952199181cd88bfeb560624\": container with ID starting with 51887b13aef82e88868eb337156320571051617c6952199181cd88bfeb560624 not found: ID does not exist" containerID="51887b13aef82e88868eb337156320571051617c6952199181cd88bfeb560624" Mar 18 17:47:41.554753 master-0 kubenswrapper[7553]: I0318 17:47:41.554669 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51887b13aef82e88868eb337156320571051617c6952199181cd88bfeb560624"} err="failed to get container status \"51887b13aef82e88868eb337156320571051617c6952199181cd88bfeb560624\": rpc error: code = NotFound desc = could not find container \"51887b13aef82e88868eb337156320571051617c6952199181cd88bfeb560624\": container with ID starting with 51887b13aef82e88868eb337156320571051617c6952199181cd88bfeb560624 not found: ID does not exist" Mar 18 17:47:41.554753 master-0 kubenswrapper[7553]: I0318 17:47:41.554714 7553 scope.go:117] "RemoveContainer" containerID="5027239b16a33eaa242303aa483c7e285890e090e452f1d81a0bd3e82446b39c" Mar 18 17:47:41.555435 master-0 kubenswrapper[7553]: E0318 17:47:41.555405 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5027239b16a33eaa242303aa483c7e285890e090e452f1d81a0bd3e82446b39c\": container with ID starting with 5027239b16a33eaa242303aa483c7e285890e090e452f1d81a0bd3e82446b39c not found: ID does not exist" containerID="5027239b16a33eaa242303aa483c7e285890e090e452f1d81a0bd3e82446b39c" Mar 18 17:47:41.555532 master-0 kubenswrapper[7553]: I0318 
17:47:41.555439 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5027239b16a33eaa242303aa483c7e285890e090e452f1d81a0bd3e82446b39c"} err="failed to get container status \"5027239b16a33eaa242303aa483c7e285890e090e452f1d81a0bd3e82446b39c\": rpc error: code = NotFound desc = could not find container \"5027239b16a33eaa242303aa483c7e285890e090e452f1d81a0bd3e82446b39c\": container with ID starting with 5027239b16a33eaa242303aa483c7e285890e090e452f1d81a0bd3e82446b39c not found: ID does not exist" Mar 18 17:47:41.555532 master-0 kubenswrapper[7553]: I0318 17:47:41.555468 7553 scope.go:117] "RemoveContainer" containerID="579cc4ec2812b3f1711dac655f16b18685227d170cb9b02fdd9aad6faa3c3865" Mar 18 17:47:41.555765 master-0 kubenswrapper[7553]: E0318 17:47:41.555740 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"579cc4ec2812b3f1711dac655f16b18685227d170cb9b02fdd9aad6faa3c3865\": container with ID starting with 579cc4ec2812b3f1711dac655f16b18685227d170cb9b02fdd9aad6faa3c3865 not found: ID does not exist" containerID="579cc4ec2812b3f1711dac655f16b18685227d170cb9b02fdd9aad6faa3c3865" Mar 18 17:47:41.555845 master-0 kubenswrapper[7553]: I0318 17:47:41.555762 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"579cc4ec2812b3f1711dac655f16b18685227d170cb9b02fdd9aad6faa3c3865"} err="failed to get container status \"579cc4ec2812b3f1711dac655f16b18685227d170cb9b02fdd9aad6faa3c3865\": rpc error: code = NotFound desc = could not find container \"579cc4ec2812b3f1711dac655f16b18685227d170cb9b02fdd9aad6faa3c3865\": container with ID starting with 579cc4ec2812b3f1711dac655f16b18685227d170cb9b02fdd9aad6faa3c3865 not found: ID does not exist" Mar 18 17:47:41.555845 master-0 kubenswrapper[7553]: I0318 17:47:41.555778 7553 scope.go:117] "RemoveContainer" 
containerID="e7ad342e7df9192dcebc5d4c70aab2c6f8db5ac6b23cbc811c6ae00a013dee5b" Mar 18 17:47:41.556041 master-0 kubenswrapper[7553]: E0318 17:47:41.556015 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7ad342e7df9192dcebc5d4c70aab2c6f8db5ac6b23cbc811c6ae00a013dee5b\": container with ID starting with e7ad342e7df9192dcebc5d4c70aab2c6f8db5ac6b23cbc811c6ae00a013dee5b not found: ID does not exist" containerID="e7ad342e7df9192dcebc5d4c70aab2c6f8db5ac6b23cbc811c6ae00a013dee5b" Mar 18 17:47:41.556111 master-0 kubenswrapper[7553]: I0318 17:47:41.556038 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7ad342e7df9192dcebc5d4c70aab2c6f8db5ac6b23cbc811c6ae00a013dee5b"} err="failed to get container status \"e7ad342e7df9192dcebc5d4c70aab2c6f8db5ac6b23cbc811c6ae00a013dee5b\": rpc error: code = NotFound desc = could not find container \"e7ad342e7df9192dcebc5d4c70aab2c6f8db5ac6b23cbc811c6ae00a013dee5b\": container with ID starting with e7ad342e7df9192dcebc5d4c70aab2c6f8db5ac6b23cbc811c6ae00a013dee5b not found: ID does not exist" Mar 18 17:47:41.556111 master-0 kubenswrapper[7553]: I0318 17:47:41.556051 7553 scope.go:117] "RemoveContainer" containerID="45e3f6969d17c505bf035fc0bb6cc383a03ebc121f30574b4362e30470bf65e0" Mar 18 17:47:41.556345 master-0 kubenswrapper[7553]: E0318 17:47:41.556317 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e3f6969d17c505bf035fc0bb6cc383a03ebc121f30574b4362e30470bf65e0\": container with ID starting with 45e3f6969d17c505bf035fc0bb6cc383a03ebc121f30574b4362e30470bf65e0 not found: ID does not exist" containerID="45e3f6969d17c505bf035fc0bb6cc383a03ebc121f30574b4362e30470bf65e0" Mar 18 17:47:41.556459 master-0 kubenswrapper[7553]: I0318 17:47:41.556346 7553 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"45e3f6969d17c505bf035fc0bb6cc383a03ebc121f30574b4362e30470bf65e0"} err="failed to get container status \"45e3f6969d17c505bf035fc0bb6cc383a03ebc121f30574b4362e30470bf65e0\": rpc error: code = NotFound desc = could not find container \"45e3f6969d17c505bf035fc0bb6cc383a03ebc121f30574b4362e30470bf65e0\": container with ID starting with 45e3f6969d17c505bf035fc0bb6cc383a03ebc121f30574b4362e30470bf65e0 not found: ID does not exist" Mar 18 17:47:41.556459 master-0 kubenswrapper[7553]: I0318 17:47:41.556363 7553 scope.go:117] "RemoveContainer" containerID="06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6" Mar 18 17:47:41.556594 master-0 kubenswrapper[7553]: I0318 17:47:41.556566 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6"} err="failed to get container status \"06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6\": rpc error: code = NotFound desc = could not find container \"06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6\": container with ID starting with 06fa80a623a13f79bd4c27a3c32495ce9b6db2867b399839b8a475d55e2b3bf6 not found: ID does not exist" Mar 18 17:47:41.556594 master-0 kubenswrapper[7553]: I0318 17:47:41.556588 7553 scope.go:117] "RemoveContainer" containerID="91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91" Mar 18 17:47:41.558527 master-0 kubenswrapper[7553]: I0318 17:47:41.558497 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91"} err="failed to get container status \"91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91\": rpc error: code = NotFound desc = could not find container \"91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91\": container with ID starting with 
91f9abefbca1475a63e431e476e7c66f5fca86898f7b16dc16e0f75842de9c91 not found: ID does not exist" Mar 18 17:47:41.558527 master-0 kubenswrapper[7553]: I0318 17:47:41.558522 7553 scope.go:117] "RemoveContainer" containerID="f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e" Mar 18 17:47:41.558781 master-0 kubenswrapper[7553]: I0318 17:47:41.558753 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e"} err="failed to get container status \"f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e\": rpc error: code = NotFound desc = could not find container \"f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e\": container with ID starting with f0b81b3cfa0e5fd097c90e819ab6e9fd9565a5c9b97fe1b2e8315e5233938b1e not found: ID does not exist" Mar 18 17:47:41.558781 master-0 kubenswrapper[7553]: I0318 17:47:41.558772 7553 scope.go:117] "RemoveContainer" containerID="0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265" Mar 18 17:47:41.559428 master-0 kubenswrapper[7553]: I0318 17:47:41.559391 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265"} err="failed to get container status \"0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265\": rpc error: code = NotFound desc = could not find container \"0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265\": container with ID starting with 0f1dba4faf70afeafaab91d4fd1ae8ffa8171effce8ce56ce4db153d43518265 not found: ID does not exist" Mar 18 17:47:41.559428 master-0 kubenswrapper[7553]: I0318 17:47:41.559415 7553 scope.go:117] "RemoveContainer" containerID="d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26" Mar 18 17:47:41.559854 master-0 kubenswrapper[7553]: I0318 17:47:41.559827 7553 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26"} err="failed to get container status \"d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26\": rpc error: code = NotFound desc = could not find container \"d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26\": container with ID starting with d912b8d4a9e9d78b8f87c200cd35424b252b93249d3289fc7b152d013f54ec26 not found: ID does not exist" Mar 18 17:47:41.559854 master-0 kubenswrapper[7553]: I0318 17:47:41.559847 7553 scope.go:117] "RemoveContainer" containerID="b999f5b1617525702f37002cc0c97c3e6d4fa7798646c662c89a3d3f27b32af1" Mar 18 17:47:41.560251 master-0 kubenswrapper[7553]: E0318 17:47:41.560227 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b999f5b1617525702f37002cc0c97c3e6d4fa7798646c662c89a3d3f27b32af1\": container with ID starting with b999f5b1617525702f37002cc0c97c3e6d4fa7798646c662c89a3d3f27b32af1 not found: ID does not exist" containerID="b999f5b1617525702f37002cc0c97c3e6d4fa7798646c662c89a3d3f27b32af1" Mar 18 17:47:41.560340 master-0 kubenswrapper[7553]: I0318 17:47:41.560247 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b999f5b1617525702f37002cc0c97c3e6d4fa7798646c662c89a3d3f27b32af1"} err="failed to get container status \"b999f5b1617525702f37002cc0c97c3e6d4fa7798646c662c89a3d3f27b32af1\": rpc error: code = NotFound desc = could not find container \"b999f5b1617525702f37002cc0c97c3e6d4fa7798646c662c89a3d3f27b32af1\": container with ID starting with b999f5b1617525702f37002cc0c97c3e6d4fa7798646c662c89a3d3f27b32af1 not found: ID does not exist" Mar 18 17:47:41.560340 master-0 kubenswrapper[7553]: I0318 17:47:41.560266 7553 scope.go:117] "RemoveContainer" containerID="53db21ebf3fb977fb32a1c89dc78765092d7ecfd2055d2adfee610a3dffc6d29" Mar 18 17:47:41.560599 master-0 
kubenswrapper[7553]: E0318 17:47:41.560569 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53db21ebf3fb977fb32a1c89dc78765092d7ecfd2055d2adfee610a3dffc6d29\": container with ID starting with 53db21ebf3fb977fb32a1c89dc78765092d7ecfd2055d2adfee610a3dffc6d29 not found: ID does not exist" containerID="53db21ebf3fb977fb32a1c89dc78765092d7ecfd2055d2adfee610a3dffc6d29" Mar 18 17:47:41.560672 master-0 kubenswrapper[7553]: I0318 17:47:41.560597 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53db21ebf3fb977fb32a1c89dc78765092d7ecfd2055d2adfee610a3dffc6d29"} err="failed to get container status \"53db21ebf3fb977fb32a1c89dc78765092d7ecfd2055d2adfee610a3dffc6d29\": rpc error: code = NotFound desc = could not find container \"53db21ebf3fb977fb32a1c89dc78765092d7ecfd2055d2adfee610a3dffc6d29\": container with ID starting with 53db21ebf3fb977fb32a1c89dc78765092d7ecfd2055d2adfee610a3dffc6d29 not found: ID does not exist" Mar 18 17:47:41.560672 master-0 kubenswrapper[7553]: I0318 17:47:41.560611 7553 scope.go:117] "RemoveContainer" containerID="958f393f34a137d2d87755401a3809cbe07ffd8110f371434c3fc67267936608" Mar 18 17:47:41.560878 master-0 kubenswrapper[7553]: E0318 17:47:41.560833 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"958f393f34a137d2d87755401a3809cbe07ffd8110f371434c3fc67267936608\": container with ID starting with 958f393f34a137d2d87755401a3809cbe07ffd8110f371434c3fc67267936608 not found: ID does not exist" containerID="958f393f34a137d2d87755401a3809cbe07ffd8110f371434c3fc67267936608" Mar 18 17:47:41.560878 master-0 kubenswrapper[7553]: I0318 17:47:41.560875 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"958f393f34a137d2d87755401a3809cbe07ffd8110f371434c3fc67267936608"} err="failed to get container status 
\"958f393f34a137d2d87755401a3809cbe07ffd8110f371434c3fc67267936608\": rpc error: code = NotFound desc = could not find container \"958f393f34a137d2d87755401a3809cbe07ffd8110f371434c3fc67267936608\": container with ID starting with 958f393f34a137d2d87755401a3809cbe07ffd8110f371434c3fc67267936608 not found: ID does not exist" Mar 18 17:47:41.560997 master-0 kubenswrapper[7553]: I0318 17:47:41.560888 7553 scope.go:117] "RemoveContainer" containerID="2832f3b1f879c3c460b4f098e3d00d5c7b56e3d42bebbd0ce9bb5a5d15d6f5ef" Mar 18 17:47:41.561222 master-0 kubenswrapper[7553]: E0318 17:47:41.561197 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2832f3b1f879c3c460b4f098e3d00d5c7b56e3d42bebbd0ce9bb5a5d15d6f5ef\": container with ID starting with 2832f3b1f879c3c460b4f098e3d00d5c7b56e3d42bebbd0ce9bb5a5d15d6f5ef not found: ID does not exist" containerID="2832f3b1f879c3c460b4f098e3d00d5c7b56e3d42bebbd0ce9bb5a5d15d6f5ef" Mar 18 17:47:41.561322 master-0 kubenswrapper[7553]: I0318 17:47:41.561220 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2832f3b1f879c3c460b4f098e3d00d5c7b56e3d42bebbd0ce9bb5a5d15d6f5ef"} err="failed to get container status \"2832f3b1f879c3c460b4f098e3d00d5c7b56e3d42bebbd0ce9bb5a5d15d6f5ef\": rpc error: code = NotFound desc = could not find container \"2832f3b1f879c3c460b4f098e3d00d5c7b56e3d42bebbd0ce9bb5a5d15d6f5ef\": container with ID starting with 2832f3b1f879c3c460b4f098e3d00d5c7b56e3d42bebbd0ce9bb5a5d15d6f5ef not found: ID does not exist" Mar 18 17:47:41.561322 master-0 kubenswrapper[7553]: I0318 17:47:41.561234 7553 scope.go:117] "RemoveContainer" containerID="c19ffd46d4609e819ebbe2a9d485eeae31630e1764b880f8494b5531e650c602" Mar 18 17:47:41.561514 master-0 kubenswrapper[7553]: E0318 17:47:41.561486 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c19ffd46d4609e819ebbe2a9d485eeae31630e1764b880f8494b5531e650c602\": container with ID starting with c19ffd46d4609e819ebbe2a9d485eeae31630e1764b880f8494b5531e650c602 not found: ID does not exist" containerID="c19ffd46d4609e819ebbe2a9d485eeae31630e1764b880f8494b5531e650c602" Mar 18 17:47:41.561587 master-0 kubenswrapper[7553]: I0318 17:47:41.561512 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19ffd46d4609e819ebbe2a9d485eeae31630e1764b880f8494b5531e650c602"} err="failed to get container status \"c19ffd46d4609e819ebbe2a9d485eeae31630e1764b880f8494b5531e650c602\": rpc error: code = NotFound desc = could not find container \"c19ffd46d4609e819ebbe2a9d485eeae31630e1764b880f8494b5531e650c602\": container with ID starting with c19ffd46d4609e819ebbe2a9d485eeae31630e1764b880f8494b5531e650c602 not found: ID does not exist" Mar 18 17:47:41.561587 master-0 kubenswrapper[7553]: I0318 17:47:41.561526 7553 scope.go:117] "RemoveContainer" containerID="61d9d3ba71cb2cabd3457558bd5286207ddf90062735fd7732ad59d0bbbaa150" Mar 18 17:47:41.561817 master-0 kubenswrapper[7553]: E0318 17:47:41.561791 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61d9d3ba71cb2cabd3457558bd5286207ddf90062735fd7732ad59d0bbbaa150\": container with ID starting with 61d9d3ba71cb2cabd3457558bd5286207ddf90062735fd7732ad59d0bbbaa150 not found: ID does not exist" containerID="61d9d3ba71cb2cabd3457558bd5286207ddf90062735fd7732ad59d0bbbaa150" Mar 18 17:47:41.561817 master-0 kubenswrapper[7553]: I0318 17:47:41.561814 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61d9d3ba71cb2cabd3457558bd5286207ddf90062735fd7732ad59d0bbbaa150"} err="failed to get container status \"61d9d3ba71cb2cabd3457558bd5286207ddf90062735fd7732ad59d0bbbaa150\": rpc error: code = NotFound desc = could not find container 
\"61d9d3ba71cb2cabd3457558bd5286207ddf90062735fd7732ad59d0bbbaa150\": container with ID starting with 61d9d3ba71cb2cabd3457558bd5286207ddf90062735fd7732ad59d0bbbaa150 not found: ID does not exist" Mar 18 17:47:41.561817 master-0 kubenswrapper[7553]: I0318 17:47:41.561828 7553 scope.go:117] "RemoveContainer" containerID="8ec96d66f498df1f17ff1b07f364e893b390b96c326cc03f6199600b04196d04" Mar 18 17:47:41.589639 master-0 kubenswrapper[7553]: E0318 17:47:41.589468 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{certified-operators-hgw2n.189e0071e84589e6 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-hgw2n,UID:f7203a5f-0f67-48ca-a12b-be3b0ce7cbac,APIVersion:v1,ResourceVersion:8052,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 26.341s (26.341s including waiting). 
Image size: 1251896539 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:43:12.458729958 +0000 UTC m=+82.604564631,LastTimestamp:2026-03-18 17:43:12.458729958 +0000 UTC m=+82.604564631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:47:41.602571 master-0 kubenswrapper[7553]: I0318 17:47:41.602062 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=17.602035531 podStartE2EDuration="17.602035531s" podCreationTimestamp="2026-03-18 17:47:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:47:41.600164781 +0000 UTC m=+351.745999464" watchObservedRunningTime="2026-03-18 17:47:41.602035531 +0000 UTC m=+351.747870204" Mar 18 17:47:42.063184 master-0 kubenswrapper[7553]: I0318 17:47:42.063127 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22e8652f-ee18-4cff-bccb-ef413456685f" path="/var/lib/kubelet/pods/22e8652f-ee18-4cff-bccb-ef413456685f/volumes" Mar 18 17:47:42.064434 master-0 kubenswrapper[7553]: I0318 17:47:42.064396 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35595774-da4b-499c-bd6e-1ae5af144833" path="/var/lib/kubelet/pods/35595774-da4b-499c-bd6e-1ae5af144833/volumes" Mar 18 17:47:42.065471 master-0 kubenswrapper[7553]: I0318 17:47:42.065436 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f688df1-3bfc-412e-b311-f9f761a0b00a" path="/var/lib/kubelet/pods/4f688df1-3bfc-412e-b311-f9f761a0b00a/volumes" Mar 18 17:47:42.066556 master-0 kubenswrapper[7553]: I0318 17:47:42.066520 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a9075c3-bb4f-4559-8454-5e097f334957" path="/var/lib/kubelet/pods/7a9075c3-bb4f-4559-8454-5e097f334957/volumes" Mar 18 
17:47:42.069070 master-0 kubenswrapper[7553]: I0318 17:47:42.069004 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7a6e8f4-26e0-454c-bfbb-f97e72636bf6" path="/var/lib/kubelet/pods/e7a6e8f4-26e0-454c-bfbb-f97e72636bf6/volumes" Mar 18 17:47:42.070338 master-0 kubenswrapper[7553]: I0318 17:47:42.070255 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7203a5f-0f67-48ca-a12b-be3b0ce7cbac" path="/var/lib/kubelet/pods/f7203a5f-0f67-48ca-a12b-be3b0ce7cbac/volumes" Mar 18 17:47:42.101044 master-0 kubenswrapper[7553]: I0318 17:47:42.101014 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-rws9x_0100a259-1358-45e8-8191-4e1f9a14ec89/etcd-operator/2.log" Mar 18 17:47:42.101489 master-0 kubenswrapper[7553]: I0318 17:47:42.101451 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" event={"ID":"0100a259-1358-45e8-8191-4e1f9a14ec89","Type":"ContainerStarted","Data":"1bb2dec1f59aff9832355c134a19ba762af95a3f61ff179296debc28c40ca05c"} Mar 18 17:47:42.104430 master-0 kubenswrapper[7553]: I0318 17:47:42.104356 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-qk279_9b424d6c-7440-4c98-ac19-2d0642c696fd/kube-controller-manager-operator/1.log" Mar 18 17:47:42.107146 master-0 kubenswrapper[7553]: I0318 17:47:42.107117 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-b865698dc-5zj8r_c355c750-ae2f-49fa-9a16-8fb4f688853e/service-ca-operator/1.log" Mar 18 17:47:42.110446 master-0 kubenswrapper[7553]: I0318 17:47:42.110391 7553 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-hpsbd_9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/openshift-controller-manager-operator/2.log" Mar 18 17:47:42.115747 master-0 kubenswrapper[7553]: I0318 17:47:42.115685 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-t266j_0b9ff55a-73fb-473f-b406-1f8b6cffdb89/openshift-apiserver-operator/1.log" Mar 18 17:47:42.120706 master-0 kubenswrapper[7553]: I0318 17:47:42.120659 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/1.log" Mar 18 17:47:42.124265 master-0 kubenswrapper[7553]: I0318 17:47:42.124194 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/1.log" Mar 18 17:47:42.126776 master-0 kubenswrapper[7553]: I0318 17:47:42.126701 7553 generic.go:334] "Generic (PLEG): container finished" podID="427e5ce9-f4b3-4f12-bb77-2b13775aa334" containerID="184cb76aa84a88cd3b8719a8bbdc255f068d4a3e6468482f6b7438107b9e68d8" exitCode=0 Mar 18 17:47:42.126897 master-0 kubenswrapper[7553]: I0318 17:47:42.126828 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xmx4" event={"ID":"427e5ce9-f4b3-4f12-bb77-2b13775aa334","Type":"ContainerDied","Data":"184cb76aa84a88cd3b8719a8bbdc255f068d4a3e6468482f6b7438107b9e68d8"} Mar 18 17:47:42.126897 master-0 kubenswrapper[7553]: I0318 17:47:42.126870 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xmx4" event={"ID":"427e5ce9-f4b3-4f12-bb77-2b13775aa334","Type":"ContainerStarted","Data":"7b2841761444793b373ed80c5f092794f38989726bcf53c2a969f325f8459b75"} Mar 18 17:47:42.128972 master-0 
kubenswrapper[7553]: I0318 17:47:42.128921 7553 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 17:47:42.146917 master-0 kubenswrapper[7553]: I0318 17:47:42.146786 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"21f65b83dcd474e201c2e5f73d8624edd7acb25dd6db2218299da95d8111811c"} Mar 18 17:47:42.154612 master-0 kubenswrapper[7553]: I0318 17:47:42.154555 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"37bbec19-22b8-411c-901b-d89c92b0bd4d","Type":"ContainerStarted","Data":"96795dabdb6bc76b373e901a5376a2ae90d0d629bb5240323bbf35ecdc487386"} Mar 18 17:47:42.154765 master-0 kubenswrapper[7553]: I0318 17:47:42.154628 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"37bbec19-22b8-411c-901b-d89c92b0bd4d","Type":"ContainerStarted","Data":"f95a076923e4629406022fc1044a23f8f3e37ea1e3db68f6f34125f8c501b177"} Mar 18 17:47:42.158108 master-0 kubenswrapper[7553]: I0318 17:47:42.158045 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/1.log" Mar 18 17:47:42.164865 master-0 kubenswrapper[7553]: I0318 17:47:42.164806 7553 scope.go:117] "RemoveContainer" containerID="399bf3be19e41993ba7e873949068ec6c32cf9d08ee1196692654605dc3ddd51" Mar 18 17:47:42.165410 master-0 kubenswrapper[7553]: E0318 17:47:42.165350 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator 
pod=openshift-config-operator-95bf4f4d-q27fh_openshift-config-operator(cb522b02-0b93-4711-9041-566daa06b95a)\"" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" podUID="cb522b02-0b93-4711-9041-566daa06b95a" Mar 18 17:47:42.206166 master-0 kubenswrapper[7553]: I0318 17:47:42.206056 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=289.206028084 podStartE2EDuration="4m49.206028084s" podCreationTimestamp="2026-03-18 17:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:47:42.202419847 +0000 UTC m=+352.348254530" watchObservedRunningTime="2026-03-18 17:47:42.206028084 +0000 UTC m=+352.351862787" Mar 18 17:47:43.174071 master-0 kubenswrapper[7553]: I0318 17:47:43.172825 7553 generic.go:334] "Generic (PLEG): container finished" podID="f7ff61c7-32d1-4407-a792-8e22bb4d50f9" containerID="24610a985db5ce85023cf9747ca14df30c98ba89aeb22c58ca49f5ef21707a5f" exitCode=0 Mar 18 17:47:43.174071 master-0 kubenswrapper[7553]: I0318 17:47:43.172915 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" event={"ID":"f7ff61c7-32d1-4407-a792-8e22bb4d50f9","Type":"ContainerDied","Data":"24610a985db5ce85023cf9747ca14df30c98ba89aeb22c58ca49f5ef21707a5f"} Mar 18 17:47:43.174071 master-0 kubenswrapper[7553]: I0318 17:47:43.173618 7553 scope.go:117] "RemoveContainer" containerID="24610a985db5ce85023cf9747ca14df30c98ba89aeb22c58ca49f5ef21707a5f" Mar 18 17:47:43.177916 master-0 kubenswrapper[7553]: I0318 17:47:43.177259 7553 generic.go:334] "Generic (PLEG): container finished" podID="427e5ce9-f4b3-4f12-bb77-2b13775aa334" containerID="22a31804731ff2ad6097e1478a33c0a03dfd73fd92e656c745ef5aa863cd5673" exitCode=0 Mar 18 17:47:43.177916 master-0 
kubenswrapper[7553]: I0318 17:47:43.177435 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xmx4" event={"ID":"427e5ce9-f4b3-4f12-bb77-2b13775aa334","Type":"ContainerDied","Data":"22a31804731ff2ad6097e1478a33c0a03dfd73fd92e656c745ef5aa863cd5673"} Mar 18 17:47:43.181656 master-0 kubenswrapper[7553]: I0318 17:47:43.181629 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:47:43.182018 master-0 kubenswrapper[7553]: I0318 17:47:43.182004 7553 scope.go:117] "RemoveContainer" containerID="399bf3be19e41993ba7e873949068ec6c32cf9d08ee1196692654605dc3ddd51" Mar 18 17:47:43.182229 master-0 kubenswrapper[7553]: E0318 17:47:43.182207 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-95bf4f4d-q27fh_openshift-config-operator(cb522b02-0b93-4711-9041-566daa06b95a)\"" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" podUID="cb522b02-0b93-4711-9041-566daa06b95a" Mar 18 17:47:43.201007 master-0 kubenswrapper[7553]: E0318 17:47:43.200948 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Mar 18 17:47:44.187617 master-0 kubenswrapper[7553]: I0318 17:47:44.187534 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xmx4" event={"ID":"427e5ce9-f4b3-4f12-bb77-2b13775aa334","Type":"ContainerStarted","Data":"214d7478fbbc2bdd00bf6310c0312306cf0ce27cb922e05aee38bae87a0d80f6"} Mar 18 17:47:44.193445 master-0 kubenswrapper[7553]: I0318 17:47:44.193377 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" event={"ID":"f7ff61c7-32d1-4407-a792-8e22bb4d50f9","Type":"ContainerStarted","Data":"26d9bad45253e9ed004980ee45ac455d4c739974d250f32d4e33bfde8ed6ef29"} Mar 18 17:47:44.221241 master-0 kubenswrapper[7553]: I0318 17:47:44.221096 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6xmx4" podStartSLOduration=289.341420462 podStartE2EDuration="4m51.221062559s" podCreationTimestamp="2026-03-18 17:42:53 +0000 UTC" firstStartedPulling="2026-03-18 17:47:42.128818922 +0000 UTC m=+352.274653625" lastFinishedPulling="2026-03-18 17:47:44.008461039 +0000 UTC m=+354.154295722" observedRunningTime="2026-03-18 17:47:44.213520542 +0000 UTC m=+354.359355205" watchObservedRunningTime="2026-03-18 17:47:44.221062559 +0000 UTC m=+354.366897262" Mar 18 17:47:44.582034 master-0 kubenswrapper[7553]: I0318 17:47:44.581919 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:47:47.215827 master-0 kubenswrapper[7553]: I0318 17:47:47.215739 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-p72m2_26575d68-0488-4dfa-a5d0-5016e481dba6/kube-apiserver-operator/2.log" Mar 18 17:47:47.216650 master-0 kubenswrapper[7553]: I0318 17:47:47.216347 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-p72m2_26575d68-0488-4dfa-a5d0-5016e481dba6/kube-apiserver-operator/1.log" Mar 18 17:47:47.216650 master-0 kubenswrapper[7553]: I0318 17:47:47.216393 7553 generic.go:334] "Generic (PLEG): container finished" podID="26575d68-0488-4dfa-a5d0-5016e481dba6" containerID="6469fb4bb68705329572e917ffd53c8d7d98a360f3801392b01dd10ca152c1c0" exitCode=255 Mar 18 17:47:47.216650 master-0 
kubenswrapper[7553]: I0318 17:47:47.216477 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" event={"ID":"26575d68-0488-4dfa-a5d0-5016e481dba6","Type":"ContainerDied","Data":"6469fb4bb68705329572e917ffd53c8d7d98a360f3801392b01dd10ca152c1c0"} Mar 18 17:47:47.216650 master-0 kubenswrapper[7553]: I0318 17:47:47.216540 7553 scope.go:117] "RemoveContainer" containerID="f83b9c315c38279f3569813348a27c78beef46c5306eaadd08c03d8c08f384ba" Mar 18 17:47:47.218538 master-0 kubenswrapper[7553]: I0318 17:47:47.218478 7553 scope.go:117] "RemoveContainer" containerID="6469fb4bb68705329572e917ffd53c8d7d98a360f3801392b01dd10ca152c1c0" Mar 18 17:47:47.218906 master-0 kubenswrapper[7553]: E0318 17:47:47.218851 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-8b68b9d9b-p72m2_openshift-kube-apiserver-operator(26575d68-0488-4dfa-a5d0-5016e481dba6)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" podUID="26575d68-0488-4dfa-a5d0-5016e481dba6" Mar 18 17:47:47.219813 master-0 kubenswrapper[7553]: I0318 17:47:47.219770 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-qk279_9b424d6c-7440-4c98-ac19-2d0642c696fd/kube-controller-manager-operator/2.log" Mar 18 17:47:47.221247 master-0 kubenswrapper[7553]: I0318 17:47:47.221208 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-qk279_9b424d6c-7440-4c98-ac19-2d0642c696fd/kube-controller-manager-operator/1.log" Mar 18 17:47:47.221353 master-0 kubenswrapper[7553]: I0318 17:47:47.221253 7553 generic.go:334] "Generic (PLEG): container finished" 
podID="9b424d6c-7440-4c98-ac19-2d0642c696fd" containerID="1fd744dbcfad29e0a4211253fc988f9ef696171ed5032f9e61793918d136f6fa" exitCode=255 Mar 18 17:47:47.221353 master-0 kubenswrapper[7553]: I0318 17:47:47.221334 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" event={"ID":"9b424d6c-7440-4c98-ac19-2d0642c696fd","Type":"ContainerDied","Data":"1fd744dbcfad29e0a4211253fc988f9ef696171ed5032f9e61793918d136f6fa"} Mar 18 17:47:47.221989 master-0 kubenswrapper[7553]: I0318 17:47:47.221950 7553 scope.go:117] "RemoveContainer" containerID="1fd744dbcfad29e0a4211253fc988f9ef696171ed5032f9e61793918d136f6fa" Mar 18 17:47:47.222239 master-0 kubenswrapper[7553]: E0318 17:47:47.222194 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-ff989d6cc-qk279_openshift-kube-controller-manager-operator(9b424d6c-7440-4c98-ac19-2d0642c696fd)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" podUID="9b424d6c-7440-4c98-ac19-2d0642c696fd" Mar 18 17:47:47.224660 master-0 kubenswrapper[7553]: I0318 17:47:47.224625 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-b865698dc-5zj8r_c355c750-ae2f-49fa-9a16-8fb4f688853e/service-ca-operator/2.log" Mar 18 17:47:47.225431 master-0 kubenswrapper[7553]: I0318 17:47:47.225379 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-b865698dc-5zj8r_c355c750-ae2f-49fa-9a16-8fb4f688853e/service-ca-operator/1.log" Mar 18 17:47:47.225513 master-0 kubenswrapper[7553]: I0318 17:47:47.225458 7553 generic.go:334] "Generic (PLEG): container finished" 
podID="c355c750-ae2f-49fa-9a16-8fb4f688853e" containerID="ebe23adafc49efd64f86fbe53ef0b2cf71f92ee87bec64f94def1d1fde4df324" exitCode=255 Mar 18 17:47:47.225573 master-0 kubenswrapper[7553]: I0318 17:47:47.225518 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" event={"ID":"c355c750-ae2f-49fa-9a16-8fb4f688853e","Type":"ContainerDied","Data":"ebe23adafc49efd64f86fbe53ef0b2cf71f92ee87bec64f94def1d1fde4df324"} Mar 18 17:47:47.226368 master-0 kubenswrapper[7553]: I0318 17:47:47.226331 7553 scope.go:117] "RemoveContainer" containerID="ebe23adafc49efd64f86fbe53ef0b2cf71f92ee87bec64f94def1d1fde4df324" Mar 18 17:47:47.226594 master-0 kubenswrapper[7553]: E0318 17:47:47.226557 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-operator pod=service-ca-operator-b865698dc-5zj8r_openshift-service-ca-operator(c355c750-ae2f-49fa-9a16-8fb4f688853e)\"" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" podUID="c355c750-ae2f-49fa-9a16-8fb4f688853e" Mar 18 17:47:47.227851 master-0 kubenswrapper[7553]: I0318 17:47:47.227812 7553 generic.go:334] "Generic (PLEG): container finished" podID="6f26e239-2988-4faa-bc1d-24b15b95b7f1" containerID="d4e55edde3b012389f45dd8d1909f3ff7e569bfb5c590f0e8e7e8c080c91f4b0" exitCode=0 Mar 18 17:47:47.227929 master-0 kubenswrapper[7553]: I0318 17:47:47.227872 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" event={"ID":"6f26e239-2988-4faa-bc1d-24b15b95b7f1","Type":"ContainerDied","Data":"d4e55edde3b012389f45dd8d1909f3ff7e569bfb5c590f0e8e7e8c080c91f4b0"} Mar 18 17:47:47.228184 master-0 kubenswrapper[7553]: I0318 17:47:47.228150 7553 scope.go:117] "RemoveContainer" 
containerID="d4e55edde3b012389f45dd8d1909f3ff7e569bfb5c590f0e8e7e8c080c91f4b0" Mar 18 17:47:47.234051 master-0 kubenswrapper[7553]: I0318 17:47:47.234005 7553 generic.go:334] "Generic (PLEG): container finished" podID="99e215da-759d-4fff-af65-0fb64245fbd0" containerID="991a1bf80cc5f91f8bda7e5c2511f88f98023ee76020f581b2ef2e76ff7bcf29" exitCode=0 Mar 18 17:47:47.234051 master-0 kubenswrapper[7553]: I0318 17:47:47.234047 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" event={"ID":"99e215da-759d-4fff-af65-0fb64245fbd0","Type":"ContainerDied","Data":"991a1bf80cc5f91f8bda7e5c2511f88f98023ee76020f581b2ef2e76ff7bcf29"} Mar 18 17:47:47.234798 master-0 kubenswrapper[7553]: I0318 17:47:47.234758 7553 scope.go:117] "RemoveContainer" containerID="991a1bf80cc5f91f8bda7e5c2511f88f98023ee76020f581b2ef2e76ff7bcf29" Mar 18 17:47:47.235138 master-0 kubenswrapper[7553]: E0318 17:47:47.235094 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-olm-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-olm-operator pod=cluster-olm-operator-67dcd4998-lljnt_openshift-cluster-olm-operator(99e215da-759d-4fff-af65-0fb64245fbd0)\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" podUID="99e215da-759d-4fff-af65-0fb64245fbd0" Mar 18 17:47:47.281437 master-0 kubenswrapper[7553]: I0318 17:47:47.281365 7553 scope.go:117] "RemoveContainer" containerID="6e9473f3d26cbd67b9497211546ab830ef4c483cd3c3fb1fa65b5b574de9d612" Mar 18 17:47:47.327821 master-0 kubenswrapper[7553]: I0318 17:47:47.327713 7553 scope.go:117] "RemoveContainer" containerID="6663d9a012bba90e4d1f49e78a4578d42945dc0a251e88808d84607a0978912c" Mar 18 17:47:47.369164 master-0 kubenswrapper[7553]: I0318 17:47:47.369115 7553 scope.go:117] "RemoveContainer" containerID="526fb1f5737ab88a407bf2b841c814ad5e5c2b858476030b2e358c55fa03c304" Mar 18 
17:47:47.433498 master-0 kubenswrapper[7553]: I0318 17:47:47.433258 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:47:47.582583 master-0 kubenswrapper[7553]: I0318 17:47:47.582382 7553 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:47:47.850071 master-0 kubenswrapper[7553]: E0318 17:47:47.849807 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 17:47:48.185382 master-0 kubenswrapper[7553]: I0318 17:47:48.185157 7553 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:47:48.185382 master-0 kubenswrapper[7553]: I0318 17:47:48.185287 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:47:48.243954 master-0 kubenswrapper[7553]: I0318 17:47:48.243881 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-b865698dc-5zj8r_c355c750-ae2f-49fa-9a16-8fb4f688853e/service-ca-operator/2.log" Mar 18 17:47:48.247707 master-0 
kubenswrapper[7553]: I0318 17:47:48.247636 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" event={"ID":"6f26e239-2988-4faa-bc1d-24b15b95b7f1","Type":"ContainerStarted","Data":"e31032eb3407bce853d0be38a115c77d3679d1c63fdc6c68fe19ac271b5e7c71"} Mar 18 17:47:48.255067 master-0 kubenswrapper[7553]: I0318 17:47:48.254969 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-p72m2_26575d68-0488-4dfa-a5d0-5016e481dba6/kube-apiserver-operator/2.log" Mar 18 17:47:48.257501 master-0 kubenswrapper[7553]: I0318 17:47:48.257236 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-qk279_9b424d6c-7440-4c98-ac19-2d0642c696fd/kube-controller-manager-operator/2.log" Mar 18 17:47:48.258942 master-0 kubenswrapper[7553]: I0318 17:47:48.258882 7553 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"79efc2b057b97c5f729a0b5a1fa1420cba96b5301fd6279190cf494ebd7bf5f8"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 18 17:47:48.259379 master-0 kubenswrapper[7553]: I0318 17:47:48.259329 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://79efc2b057b97c5f729a0b5a1fa1420cba96b5301fd6279190cf494ebd7bf5f8" gracePeriod=30 Mar 18 17:47:48.393340 master-0 kubenswrapper[7553]: E0318 17:47:48.393236 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s 
restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 17:47:49.269381 master-0 kubenswrapper[7553]: I0318 17:47:49.269266 7553 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="79efc2b057b97c5f729a0b5a1fa1420cba96b5301fd6279190cf494ebd7bf5f8" exitCode=2 Mar 18 17:47:49.269381 master-0 kubenswrapper[7553]: I0318 17:47:49.269327 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"79efc2b057b97c5f729a0b5a1fa1420cba96b5301fd6279190cf494ebd7bf5f8"} Mar 18 17:47:49.270053 master-0 kubenswrapper[7553]: I0318 17:47:49.269453 7553 scope.go:117] "RemoveContainer" containerID="13ecfe004522bd3f1997358f8d18d1d0444903e67db4326c279f978bc65fbe03" Mar 18 17:47:49.272937 master-0 kubenswrapper[7553]: I0318 17:47:49.271437 7553 scope.go:117] "RemoveContainer" containerID="79efc2b057b97c5f729a0b5a1fa1420cba96b5301fd6279190cf494ebd7bf5f8" Mar 18 17:47:49.272937 master-0 kubenswrapper[7553]: E0318 17:47:49.271717 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 17:47:49.562517 master-0 kubenswrapper[7553]: I0318 17:47:49.562422 7553 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:47:50.278774 master-0 kubenswrapper[7553]: I0318 
17:47:50.278649 7553 scope.go:117] "RemoveContainer" containerID="79efc2b057b97c5f729a0b5a1fa1420cba96b5301fd6279190cf494ebd7bf5f8" Mar 18 17:47:50.279601 master-0 kubenswrapper[7553]: E0318 17:47:50.279041 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 17:47:53.966619 master-0 kubenswrapper[7553]: I0318 17:47:53.966492 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:47:53.966619 master-0 kubenswrapper[7553]: I0318 17:47:53.966618 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:47:54.005650 master-0 kubenswrapper[7553]: I0318 17:47:54.005584 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:47:54.340307 master-0 kubenswrapper[7553]: I0318 17:47:54.340230 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 17:47:55.053511 master-0 kubenswrapper[7553]: I0318 17:47:55.053468 7553 scope.go:117] "RemoveContainer" containerID="399bf3be19e41993ba7e873949068ec6c32cf9d08ee1196692654605dc3ddd51" Mar 18 17:47:55.320017 master-0 kubenswrapper[7553]: I0318 17:47:55.319832 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" event={"ID":"cb522b02-0b93-4711-9041-566daa06b95a","Type":"ContainerStarted","Data":"e37c792d21d780dbedc7e6122afb61646879956ae156b581bf934e2fbdabe85d"} Mar 18 17:47:55.320545 
master-0 kubenswrapper[7553]: I0318 17:47:55.320521 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:47:56.119892 master-0 kubenswrapper[7553]: I0318 17:47:56.119767 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tskm\" (UniqueName: \"kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:47:56.137863 master-0 kubenswrapper[7553]: I0318 17:47:56.137765 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tskm\" (UniqueName: \"kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:47:56.331823 master-0 kubenswrapper[7553]: I0318 17:47:56.331744 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-btlbk" Mar 18 17:47:56.340428 master-0 kubenswrapper[7553]: I0318 17:47:56.340357 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:47:56.764772 master-0 kubenswrapper[7553]: I0318 17:47:56.764561 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bgdql"] Mar 18 17:47:57.333164 master-0 kubenswrapper[7553]: I0318 17:47:57.333093 7553 generic.go:334] "Generic (PLEG): container finished" podID="4460d3d3-c55f-4f1c-a623-e3feccf937bb" containerID="2508ebe9053440edc87c49e130a7b0e4cfa3dcec7c01ec67984f7b0b7290be83" exitCode=0 Mar 18 17:47:57.333164 master-0 kubenswrapper[7553]: I0318 17:47:57.333154 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgdql" event={"ID":"4460d3d3-c55f-4f1c-a623-e3feccf937bb","Type":"ContainerDied","Data":"2508ebe9053440edc87c49e130a7b0e4cfa3dcec7c01ec67984f7b0b7290be83"} Mar 18 17:47:57.333915 master-0 kubenswrapper[7553]: I0318 17:47:57.333201 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgdql" event={"ID":"4460d3d3-c55f-4f1c-a623-e3feccf937bb","Type":"ContainerStarted","Data":"c73523c110a89aa2ec5b986dce6527591a38ece4a4afaf4032ec9cf612257a34"} Mar 18 17:47:57.582642 master-0 kubenswrapper[7553]: I0318 17:47:57.582536 7553 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:47:59.053706 master-0 kubenswrapper[7553]: I0318 17:47:59.053660 7553 scope.go:117] "RemoveContainer" containerID="ebe23adafc49efd64f86fbe53ef0b2cf71f92ee87bec64f94def1d1fde4df324" Mar 18 17:47:59.054612 master-0 kubenswrapper[7553]: E0318 17:47:59.054590 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-operator pod=service-ca-operator-b865698dc-5zj8r_openshift-service-ca-operator(c355c750-ae2f-49fa-9a16-8fb4f688853e)\"" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" podUID="c355c750-ae2f-49fa-9a16-8fb4f688853e" Mar 18 17:47:59.182067 master-0 kubenswrapper[7553]: I0318 17:47:59.181988 7553 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-q27fh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 17:47:59.182318 master-0 kubenswrapper[7553]: I0318 17:47:59.182073 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" podUID="cb522b02-0b93-4711-9041-566daa06b95a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:47:59.349064 master-0 kubenswrapper[7553]: I0318 17:47:59.348849 7553 generic.go:334] "Generic (PLEG): container finished" podID="4460d3d3-c55f-4f1c-a623-e3feccf937bb" containerID="98d863723a508017dfde5d2fba0f35e4c2c885a3faf38a07e44a5b8c49c1f0be" exitCode=0 Mar 18 17:47:59.349064 master-0 kubenswrapper[7553]: I0318 17:47:59.348936 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgdql" event={"ID":"4460d3d3-c55f-4f1c-a623-e3feccf937bb","Type":"ContainerDied","Data":"98d863723a508017dfde5d2fba0f35e4c2c885a3faf38a07e44a5b8c49c1f0be"} Mar 18 17:48:00.005171 master-0 kubenswrapper[7553]: I0318 17:48:00.005080 7553 patch_prober.go:28] interesting 
pod/openshift-config-operator-95bf4f4d-q27fh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 17:48:00.005468 master-0 kubenswrapper[7553]: I0318 17:48:00.005197 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" podUID="cb522b02-0b93-4711-9041-566daa06b95a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:48:00.059322 master-0 kubenswrapper[7553]: I0318 17:48:00.059218 7553 scope.go:117] "RemoveContainer" containerID="6469fb4bb68705329572e917ffd53c8d7d98a360f3801392b01dd10ca152c1c0" Mar 18 17:48:00.060185 master-0 kubenswrapper[7553]: E0318 17:48:00.060058 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-8b68b9d9b-p72m2_openshift-kube-apiserver-operator(26575d68-0488-4dfa-a5d0-5016e481dba6)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" podUID="26575d68-0488-4dfa-a5d0-5016e481dba6" Mar 18 17:48:00.060977 master-0 kubenswrapper[7553]: I0318 17:48:00.060933 7553 scope.go:117] "RemoveContainer" containerID="991a1bf80cc5f91f8bda7e5c2511f88f98023ee76020f581b2ef2e76ff7bcf29" Mar 18 17:48:00.357231 master-0 kubenswrapper[7553]: I0318 17:48:00.357101 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgdql" 
event={"ID":"4460d3d3-c55f-4f1c-a623-e3feccf937bb","Type":"ContainerStarted","Data":"3055b80e3c52d6f4fbc6907be81a6c2dce915f83207e8d232d78bfb855218493"} Mar 18 17:48:00.360379 master-0 kubenswrapper[7553]: I0318 17:48:00.360314 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" event={"ID":"99e215da-759d-4fff-af65-0fb64245fbd0","Type":"ContainerStarted","Data":"f9444501489b05e091711bec8960923d4b51b406068b0c1afaea6e89806b61fd"} Mar 18 17:48:00.383112 master-0 kubenswrapper[7553]: I0318 17:48:00.383027 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bgdql" podStartSLOduration=303.878009579 podStartE2EDuration="5m6.383008943s" podCreationTimestamp="2026-03-18 17:42:54 +0000 UTC" firstStartedPulling="2026-03-18 17:47:57.334954269 +0000 UTC m=+367.480788942" lastFinishedPulling="2026-03-18 17:47:59.839953593 +0000 UTC m=+369.985788306" observedRunningTime="2026-03-18 17:48:00.381018418 +0000 UTC m=+370.526853091" watchObservedRunningTime="2026-03-18 17:48:00.383008943 +0000 UTC m=+370.528843616" Mar 18 17:48:02.053330 master-0 kubenswrapper[7553]: I0318 17:48:02.053247 7553 scope.go:117] "RemoveContainer" containerID="79efc2b057b97c5f729a0b5a1fa1420cba96b5301fd6279190cf494ebd7bf5f8" Mar 18 17:48:02.053940 master-0 kubenswrapper[7553]: E0318 17:48:02.053618 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 17:48:02.054362 master-0 kubenswrapper[7553]: I0318 17:48:02.054322 7553 scope.go:117] "RemoveContainer" 
containerID="1fd744dbcfad29e0a4211253fc988f9ef696171ed5032f9e61793918d136f6fa" Mar 18 17:48:02.054583 master-0 kubenswrapper[7553]: E0318 17:48:02.054540 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-ff989d6cc-qk279_openshift-kube-controller-manager-operator(9b424d6c-7440-4c98-ac19-2d0642c696fd)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" podUID="9b424d6c-7440-4c98-ac19-2d0642c696fd" Mar 18 17:48:02.183240 master-0 kubenswrapper[7553]: I0318 17:48:02.183154 7553 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-q27fh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 17:48:02.183955 master-0 kubenswrapper[7553]: I0318 17:48:02.183887 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" podUID="cb522b02-0b93-4711-9041-566daa06b95a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:48:03.005857 master-0 kubenswrapper[7553]: I0318 17:48:03.005781 7553 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-q27fh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= 
Mar 18 17:48:03.006416 master-0 kubenswrapper[7553]: I0318 17:48:03.005895 7553 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" podUID="cb522b02-0b93-4711-9041-566daa06b95a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:48:03.384861 master-0 kubenswrapper[7553]: I0318 17:48:03.384785 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-7qwxn_7c6694a8-ccd0-491b-9f21-215450f6ce67/cluster-node-tuning-operator/0.log" Mar 18 17:48:03.385562 master-0 kubenswrapper[7553]: I0318 17:48:03.384880 7553 generic.go:334] "Generic (PLEG): container finished" podID="7c6694a8-ccd0-491b-9f21-215450f6ce67" containerID="6af98a7327b83a0f9fcfd3425055ee2bbebd96176bf419d80ea4f980729da819" exitCode=1 Mar 18 17:48:03.385562 master-0 kubenswrapper[7553]: I0318 17:48:03.384933 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" event={"ID":"7c6694a8-ccd0-491b-9f21-215450f6ce67","Type":"ContainerDied","Data":"6af98a7327b83a0f9fcfd3425055ee2bbebd96176bf419d80ea4f980729da819"} Mar 18 17:48:03.385774 master-0 kubenswrapper[7553]: I0318 17:48:03.385709 7553 scope.go:117] "RemoveContainer" containerID="6af98a7327b83a0f9fcfd3425055ee2bbebd96176bf419d80ea4f980729da819" Mar 18 17:48:04.392622 master-0 kubenswrapper[7553]: I0318 17:48:04.392539 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-7qwxn_7c6694a8-ccd0-491b-9f21-215450f6ce67/cluster-node-tuning-operator/0.log" Mar 18 17:48:04.393241 master-0 kubenswrapper[7553]: I0318 17:48:04.392621 7553 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" event={"ID":"7c6694a8-ccd0-491b-9f21-215450f6ce67","Type":"ContainerStarted","Data":"54489b0edcfa24dfcbbb34581a482bdade21886266c2b553e30f0c64c39e011f"} Mar 18 17:48:04.851377 master-0 kubenswrapper[7553]: E0318 17:48:04.851250 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 17:48:05.183502 master-0 kubenswrapper[7553]: I0318 17:48:05.183423 7553 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-q27fh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 17:48:05.183834 master-0 kubenswrapper[7553]: I0318 17:48:05.183530 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" podUID="cb522b02-0b93-4711-9041-566daa06b95a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.15:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:48:05.235050 master-0 kubenswrapper[7553]: I0318 17:48:05.234968 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:48:05.235881 master-0 kubenswrapper[7553]: I0318 17:48:05.235854 7553 scope.go:117] "RemoveContainer" containerID="79efc2b057b97c5f729a0b5a1fa1420cba96b5301fd6279190cf494ebd7bf5f8" Mar 18 17:48:05.236557 master-0 kubenswrapper[7553]: E0318 
17:48:05.236336 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 17:48:05.282458 master-0 kubenswrapper[7553]: I0318 17:48:05.282372 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:48:05.400017 master-0 kubenswrapper[7553]: I0318 17:48:05.399951 7553 generic.go:334] "Generic (PLEG): container finished" podID="253ec853-f637-4aa4-8e8e-eb655dfccccb" containerID="44bcebab84e3e626740692adfb152c2797db6837bc5427bf84f3ada1de226018" exitCode=0 Mar 18 17:48:05.400712 master-0 kubenswrapper[7553]: I0318 17:48:05.400050 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" event={"ID":"253ec853-f637-4aa4-8e8e-eb655dfccccb","Type":"ContainerDied","Data":"44bcebab84e3e626740692adfb152c2797db6837bc5427bf84f3ada1de226018"} Mar 18 17:48:05.401227 master-0 kubenswrapper[7553]: I0318 17:48:05.401199 7553 scope.go:117] "RemoveContainer" containerID="44bcebab84e3e626740692adfb152c2797db6837bc5427bf84f3ada1de226018" Mar 18 17:48:05.401746 master-0 kubenswrapper[7553]: I0318 17:48:05.401720 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-6qqz4_d26d4515-391e-41a5-8c82-1b2b8a375662/package-server-manager/0.log" Mar 18 17:48:05.402179 master-0 kubenswrapper[7553]: I0318 17:48:05.402134 7553 generic.go:334] "Generic (PLEG): container finished" podID="d26d4515-391e-41a5-8c82-1b2b8a375662" containerID="c08cd14fe1ce6dcf04e7916d9d5a8cb80981c4007a423a03755dfeee8e27eeb4" 
exitCode=1 Mar 18 17:48:05.402218 master-0 kubenswrapper[7553]: I0318 17:48:05.402182 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" event={"ID":"d26d4515-391e-41a5-8c82-1b2b8a375662","Type":"ContainerDied","Data":"c08cd14fe1ce6dcf04e7916d9d5a8cb80981c4007a423a03755dfeee8e27eeb4"} Mar 18 17:48:05.408584 master-0 kubenswrapper[7553]: I0318 17:48:05.408546 7553 scope.go:117] "RemoveContainer" containerID="c08cd14fe1ce6dcf04e7916d9d5a8cb80981c4007a423a03755dfeee8e27eeb4" Mar 18 17:48:05.414669 master-0 kubenswrapper[7553]: I0318 17:48:05.414626 7553 generic.go:334] "Generic (PLEG): container finished" podID="fdab27a1-1d7a-4dc5-b828-eba3f57592dd" containerID="5eda9ef28d74f5cd7a10971a5854c8a51a0c32becadb69afd3686ca34d1563e1" exitCode=0 Mar 18 17:48:05.414997 master-0 kubenswrapper[7553]: I0318 17:48:05.414952 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm" event={"ID":"fdab27a1-1d7a-4dc5-b828-eba3f57592dd","Type":"ContainerDied","Data":"5eda9ef28d74f5cd7a10971a5854c8a51a0c32becadb69afd3686ca34d1563e1"} Mar 18 17:48:05.415489 master-0 kubenswrapper[7553]: I0318 17:48:05.415067 7553 scope.go:117] "RemoveContainer" containerID="79efc2b057b97c5f729a0b5a1fa1420cba96b5301fd6279190cf494ebd7bf5f8" Mar 18 17:48:05.415489 master-0 kubenswrapper[7553]: E0318 17:48:05.415323 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 17:48:05.415489 master-0 kubenswrapper[7553]: I0318 17:48:05.415332 7553 scope.go:117] "RemoveContainer" 
containerID="5eda9ef28d74f5cd7a10971a5854c8a51a0c32becadb69afd3686ca34d1563e1" Mar 18 17:48:06.341010 master-0 kubenswrapper[7553]: I0318 17:48:06.340933 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:48:06.341010 master-0 kubenswrapper[7553]: I0318 17:48:06.340995 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 17:48:06.422848 master-0 kubenswrapper[7553]: I0318 17:48:06.422769 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" event={"ID":"253ec853-f637-4aa4-8e8e-eb655dfccccb","Type":"ContainerStarted","Data":"2db7d7384c8a35f95ff52d00ab25bbedc70ce8cf90c5c6ca6aff91f2c9272cd8"} Mar 18 17:48:06.423618 master-0 kubenswrapper[7553]: I0318 17:48:06.423133 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:48:06.425127 master-0 kubenswrapper[7553]: I0318 17:48:06.425097 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-6qqz4_d26d4515-391e-41a5-8c82-1b2b8a375662/package-server-manager/0.log" Mar 18 17:48:06.425650 master-0 kubenswrapper[7553]: I0318 17:48:06.425617 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" event={"ID":"d26d4515-391e-41a5-8c82-1b2b8a375662","Type":"ContainerStarted","Data":"2bf18e51a1823185cc3f2ac648f42885a8d2aea94913a831a7d4285f0b01a344"} Mar 18 17:48:06.425896 master-0 kubenswrapper[7553]: I0318 17:48:06.425871 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:48:06.427514 master-0 kubenswrapper[7553]: 
I0318 17:48:06.427480 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm" event={"ID":"fdab27a1-1d7a-4dc5-b828-eba3f57592dd","Type":"ContainerStarted","Data":"b533f593b28cafb60fbcf6432d0aa3477e72d3d1f721e9b883b828b9059da814"} Mar 18 17:48:06.430884 master-0 kubenswrapper[7553]: I0318 17:48:06.430811 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:48:07.190694 master-0 kubenswrapper[7553]: I0318 17:48:07.190591 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 17:48:07.384695 master-0 kubenswrapper[7553]: I0318 17:48:07.384597 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bgdql" podUID="4460d3d3-c55f-4f1c-a623-e3feccf937bb" containerName="registry-server" probeResult="failure" output=< Mar 18 17:48:07.384695 master-0 kubenswrapper[7553]: timeout: failed to connect service ":50051" within 1s Mar 18 17:48:07.384695 master-0 kubenswrapper[7553]: > Mar 18 17:48:11.053563 master-0 kubenswrapper[7553]: I0318 17:48:11.053490 7553 scope.go:117] "RemoveContainer" containerID="ebe23adafc49efd64f86fbe53ef0b2cf71f92ee87bec64f94def1d1fde4df324" Mar 18 17:48:12.053307 master-0 kubenswrapper[7553]: I0318 17:48:12.053201 7553 scope.go:117] "RemoveContainer" containerID="6469fb4bb68705329572e917ffd53c8d7d98a360f3801392b01dd10ca152c1c0" Mar 18 17:48:12.478015 master-0 kubenswrapper[7553]: I0318 17:48:12.477956 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-b865698dc-5zj8r_c355c750-ae2f-49fa-9a16-8fb4f688853e/service-ca-operator/2.log" Mar 18 17:48:12.478735 master-0 kubenswrapper[7553]: I0318 17:48:12.478088 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" event={"ID":"c355c750-ae2f-49fa-9a16-8fb4f688853e","Type":"ContainerStarted","Data":"82b3c41b778f6b2cb0358e27e4513c9d6911408756eafe9881b278fd4128f2db"} Mar 18 17:48:12.480498 master-0 kubenswrapper[7553]: I0318 17:48:12.480458 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-p72m2_26575d68-0488-4dfa-a5d0-5016e481dba6/kube-apiserver-operator/2.log" Mar 18 17:48:12.480832 master-0 kubenswrapper[7553]: I0318 17:48:12.480552 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" event={"ID":"26575d68-0488-4dfa-a5d0-5016e481dba6","Type":"ContainerStarted","Data":"2206a7113dacde21996d9057f09cbc9465ab1858bcc433f5c546151c4ea00afa"} Mar 18 17:48:14.267758 master-0 kubenswrapper[7553]: I0318 17:48:14.267691 7553 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 17:48:14.269779 master-0 kubenswrapper[7553]: I0318 17:48:14.268975 7553 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 17:48:14.270201 master-0 kubenswrapper[7553]: I0318 17:48:14.269370 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" containerID="cri-o://21f65b83dcd474e201c2e5f73d8624edd7acb25dd6db2218299da95d8111811c" gracePeriod=30 Mar 18 17:48:14.270641 master-0 kubenswrapper[7553]: E0318 17:48:14.270614 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41191498-89c5-44dc-b648-dbea889c72f5" containerName="installer" Mar 18 17:48:14.271473 master-0 kubenswrapper[7553]: I0318 17:48:14.271447 7553 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="41191498-89c5-44dc-b648-dbea889c72f5" containerName="installer" Mar 18 17:48:14.271615 master-0 kubenswrapper[7553]: E0318 17:48:14.271595 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22e8652f-ee18-4cff-bccb-ef413456685f" containerName="installer" Mar 18 17:48:14.271731 master-0 kubenswrapper[7553]: I0318 17:48:14.271711 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="22e8652f-ee18-4cff-bccb-ef413456685f" containerName="installer" Mar 18 17:48:14.271844 master-0 kubenswrapper[7553]: E0318 17:48:14.271825 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 17:48:14.271987 master-0 kubenswrapper[7553]: I0318 17:48:14.271935 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 17:48:14.272118 master-0 kubenswrapper[7553]: E0318 17:48:14.272098 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35595774-da4b-499c-bd6e-1ae5af144833" containerName="extract-utilities" Mar 18 17:48:14.272267 master-0 kubenswrapper[7553]: I0318 17:48:14.272247 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="35595774-da4b-499c-bd6e-1ae5af144833" containerName="extract-utilities" Mar 18 17:48:14.272418 master-0 kubenswrapper[7553]: E0318 17:48:14.272398 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35595774-da4b-499c-bd6e-1ae5af144833" containerName="extract-content" Mar 18 17:48:14.272529 master-0 kubenswrapper[7553]: I0318 17:48:14.272511 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="35595774-da4b-499c-bd6e-1ae5af144833" containerName="extract-content" Mar 18 17:48:14.272648 master-0 kubenswrapper[7553]: E0318 17:48:14.272629 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 
17:48:14.272758 master-0 kubenswrapper[7553]: I0318 17:48:14.272740 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 17:48:14.272892 master-0 kubenswrapper[7553]: E0318 17:48:14.272872 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9075c3-bb4f-4559-8454-5e097f334957" containerName="extract-utilities" Mar 18 17:48:14.273001 master-0 kubenswrapper[7553]: I0318 17:48:14.272982 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9075c3-bb4f-4559-8454-5e097f334957" containerName="extract-utilities" Mar 18 17:48:14.273112 master-0 kubenswrapper[7553]: E0318 17:48:14.273094 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 17:48:14.273222 master-0 kubenswrapper[7553]: I0318 17:48:14.273202 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 17:48:14.273365 master-0 kubenswrapper[7553]: E0318 17:48:14.273345 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 17:48:14.273562 master-0 kubenswrapper[7553]: I0318 17:48:14.273465 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 17:48:14.273562 master-0 kubenswrapper[7553]: E0318 17:48:14.273560 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08451d5b-cf84-45a1-a16d-7ce10a83a6e7" containerName="installer" Mar 18 17:48:14.273695 master-0 kubenswrapper[7553]: I0318 17:48:14.273573 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="08451d5b-cf84-45a1-a16d-7ce10a83a6e7" containerName="installer" Mar 18 17:48:14.273695 master-0 kubenswrapper[7553]: E0318 17:48:14.273594 7553 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f7203a5f-0f67-48ca-a12b-be3b0ce7cbac" containerName="extract-content" Mar 18 17:48:14.273695 master-0 kubenswrapper[7553]: I0318 17:48:14.273603 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7203a5f-0f67-48ca-a12b-be3b0ce7cbac" containerName="extract-content" Mar 18 17:48:14.273695 master-0 kubenswrapper[7553]: E0318 17:48:14.273623 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a6e8f4-26e0-454c-bfbb-f97e72636bf6" containerName="extract-utilities" Mar 18 17:48:14.273695 master-0 kubenswrapper[7553]: I0318 17:48:14.273630 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a6e8f4-26e0-454c-bfbb-f97e72636bf6" containerName="extract-utilities" Mar 18 17:48:14.273695 master-0 kubenswrapper[7553]: E0318 17:48:14.273648 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a9075c3-bb4f-4559-8454-5e097f334957" containerName="extract-content" Mar 18 17:48:14.273695 master-0 kubenswrapper[7553]: I0318 17:48:14.273655 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a9075c3-bb4f-4559-8454-5e097f334957" containerName="extract-content" Mar 18 17:48:14.273695 master-0 kubenswrapper[7553]: E0318 17:48:14.273670 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f688df1-3bfc-412e-b311-f9f761a0b00a" containerName="installer" Mar 18 17:48:14.273695 master-0 kubenswrapper[7553]: I0318 17:48:14.273677 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f688df1-3bfc-412e-b311-f9f761a0b00a" containerName="installer" Mar 18 17:48:14.273695 master-0 kubenswrapper[7553]: E0318 17:48:14.273689 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7203a5f-0f67-48ca-a12b-be3b0ce7cbac" containerName="extract-utilities" Mar 18 17:48:14.273695 master-0 kubenswrapper[7553]: I0318 17:48:14.273696 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7203a5f-0f67-48ca-a12b-be3b0ce7cbac" 
containerName="extract-utilities" Mar 18 17:48:14.273695 master-0 kubenswrapper[7553]: E0318 17:48:14.273707 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a709ef9-91c0-4193-acb4-0594d02f554c" containerName="installer" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.273740 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a709ef9-91c0-4193-acb4-0594d02f554c" containerName="installer" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: E0318 17:48:14.273755 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a6e8f4-26e0-454c-bfbb-f97e72636bf6" containerName="extract-content" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.273763 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a6e8f4-26e0-454c-bfbb-f97e72636bf6" containerName="extract-content" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274014 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="08451d5b-cf84-45a1-a16d-7ce10a83a6e7" containerName="installer" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274025 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274037 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="35595774-da4b-499c-bd6e-1ae5af144833" containerName="extract-content" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274048 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a6e8f4-26e0-454c-bfbb-f97e72636bf6" containerName="extract-content" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274059 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="41191498-89c5-44dc-b648-dbea889c72f5" containerName="installer" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 
17:48:14.274067 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274077 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274087 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274095 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a709ef9-91c0-4193-acb4-0594d02f554c" containerName="installer" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274105 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a9075c3-bb4f-4559-8454-5e097f334957" containerName="extract-content" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274112 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274119 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="22e8652f-ee18-4cff-bccb-ef413456685f" containerName="installer" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274128 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7203a5f-0f67-48ca-a12b-be3b0ce7cbac" containerName="extract-content" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274138 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f688df1-3bfc-412e-b311-f9f761a0b00a" containerName="installer" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274148 7553 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: E0318 17:48:14.274247 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274255 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: E0318 17:48:14.274266 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274286 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: E0318 17:48:14.274296 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274303 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 17:48:14.274432 master-0 kubenswrapper[7553]: I0318 17:48:14.274396 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 17:48:14.276304 master-0 kubenswrapper[7553]: I0318 17:48:14.275315 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:48:14.396615 master-0 kubenswrapper[7553]: I0318 17:48:14.394174 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3b3363934623637fdc1d37ff8b16880a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:48:14.396615 master-0 kubenswrapper[7553]: I0318 17:48:14.394266 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3b3363934623637fdc1d37ff8b16880a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:48:14.417769 master-0 kubenswrapper[7553]: I0318 17:48:14.417692 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 17:48:14.441809 master-0 kubenswrapper[7553]: I0318 17:48:14.441215 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:48:14.489294 master-0 kubenswrapper[7553]: I0318 17:48:14.489208 7553 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="e21a741e-3aa8-4988-b113-9c8b4005a80d" Mar 18 17:48:14.495068 master-0 kubenswrapper[7553]: I0318 17:48:14.495027 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3b3363934623637fdc1d37ff8b16880a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:48:14.495184 master-0 kubenswrapper[7553]: I0318 17:48:14.495082 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3b3363934623637fdc1d37ff8b16880a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:48:14.495184 master-0 kubenswrapper[7553]: I0318 17:48:14.495172 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3b3363934623637fdc1d37ff8b16880a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:48:14.495245 master-0 kubenswrapper[7553]: I0318 17:48:14.495208 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3b3363934623637fdc1d37ff8b16880a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
Mar 18 17:48:14.500010 master-0 kubenswrapper[7553]: I0318 17:48:14.499946 7553 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="21f65b83dcd474e201c2e5f73d8624edd7acb25dd6db2218299da95d8111811c" exitCode=0 Mar 18 17:48:14.500101 master-0 kubenswrapper[7553]: I0318 17:48:14.500072 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c05ce2500dc59522a6a15e9d7a181f449cb0590dccc6ae6225c9f2d2a528378" Mar 18 17:48:14.500101 master-0 kubenswrapper[7553]: I0318 17:48:14.500095 7553 scope.go:117] "RemoveContainer" containerID="6a3212eaacddf8a633d9171d89d86f056fc2eaf17af107aa2bced9e6262d3611" Mar 18 17:48:14.500293 master-0 kubenswrapper[7553]: I0318 17:48:14.500248 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 17:48:14.503003 master-0 kubenswrapper[7553]: I0318 17:48:14.502968 7553 generic.go:334] "Generic (PLEG): container finished" podID="37bbec19-22b8-411c-901b-d89c92b0bd4d" containerID="96795dabdb6bc76b373e901a5376a2ae90d0d629bb5240323bbf35ecdc487386" exitCode=0 Mar 18 17:48:14.503090 master-0 kubenswrapper[7553]: I0318 17:48:14.503022 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"37bbec19-22b8-411c-901b-d89c92b0bd4d","Type":"ContainerDied","Data":"96795dabdb6bc76b373e901a5376a2ae90d0d629bb5240323bbf35ecdc487386"} Mar 18 17:48:14.596357 master-0 kubenswrapper[7553]: I0318 17:48:14.596290 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 17:48:14.596357 master-0 kubenswrapper[7553]: I0318 17:48:14.596362 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 17:48:14.596576 master-0 kubenswrapper[7553]: I0318 17:48:14.596403 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 17:48:14.596576 master-0 kubenswrapper[7553]: I0318 17:48:14.596471 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs" (OuterVolumeSpecName: "logs") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:48:14.596576 master-0 kubenswrapper[7553]: I0318 17:48:14.596511 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 17:48:14.596576 master-0 kubenswrapper[7553]: I0318 17:48:14.596556 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config" (OuterVolumeSpecName: "config") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:48:14.596818 master-0 kubenswrapper[7553]: I0318 17:48:14.596591 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets" (OuterVolumeSpecName: "secrets") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:48:14.596818 master-0 kubenswrapper[7553]: I0318 17:48:14.596607 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:48:14.596818 master-0 kubenswrapper[7553]: I0318 17:48:14.596659 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 17:48:14.597339 master-0 kubenswrapper[7553]: I0318 17:48:14.597224 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "etc-kubernetes-cloud". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:48:14.597514 master-0 kubenswrapper[7553]: I0318 17:48:14.597448 7553 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") on node \"master-0\" DevicePath \"\"" Mar 18 17:48:14.597678 master-0 kubenswrapper[7553]: I0318 17:48:14.597656 7553 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 17:48:14.597804 master-0 kubenswrapper[7553]: I0318 17:48:14.597783 7553 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") on node \"master-0\" DevicePath \"\"" Mar 18 17:48:14.597928 master-0 kubenswrapper[7553]: I0318 17:48:14.597904 7553 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 18 17:48:14.699411 master-0 kubenswrapper[7553]: I0318 17:48:14.699368 7553 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 18 17:48:14.713501 master-0 kubenswrapper[7553]: I0318 17:48:14.713442 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:48:14.733779 master-0 kubenswrapper[7553]: W0318 17:48:14.733563 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b3363934623637fdc1d37ff8b16880a.slice/crio-989b2fdc7ef152f9acfe916ef0d60955c7235426f6fe1dd2fc891166e785e105 WatchSource:0}: Error finding container 989b2fdc7ef152f9acfe916ef0d60955c7235426f6fe1dd2fc891166e785e105: Status 404 returned error can't find the container with id 989b2fdc7ef152f9acfe916ef0d60955c7235426f6fe1dd2fc891166e785e105 Mar 18 17:48:14.861995 master-0 kubenswrapper[7553]: I0318 17:48:14.861903 7553 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="e21a741e-3aa8-4988-b113-9c8b4005a80d" Mar 18 17:48:15.517357 master-0 kubenswrapper[7553]: I0318 17:48:15.517248 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"c8289571034ebc6739ae21b3260df385ebf8dcd2b89305874e7d44766e4b4396"} Mar 18 17:48:15.517357 master-0 kubenswrapper[7553]: I0318 17:48:15.517352 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"6007004024fecf1344918d5eba36f91c4644591c32375ce8f9e07fc9beb46c69"} Mar 18 17:48:15.519951 master-0 kubenswrapper[7553]: I0318 17:48:15.517379 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"989b2fdc7ef152f9acfe916ef0d60955c7235426f6fe1dd2fc891166e785e105"} Mar 18 17:48:15.842226 master-0 
kubenswrapper[7553]: I0318 17:48:15.842180 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 18 17:48:16.019622 master-0 kubenswrapper[7553]: I0318 17:48:16.019460 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37bbec19-22b8-411c-901b-d89c92b0bd4d-kube-api-access\") pod \"37bbec19-22b8-411c-901b-d89c92b0bd4d\" (UID: \"37bbec19-22b8-411c-901b-d89c92b0bd4d\") " Mar 18 17:48:16.019622 master-0 kubenswrapper[7553]: I0318 17:48:16.019579 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37bbec19-22b8-411c-901b-d89c92b0bd4d-kubelet-dir\") pod \"37bbec19-22b8-411c-901b-d89c92b0bd4d\" (UID: \"37bbec19-22b8-411c-901b-d89c92b0bd4d\") " Mar 18 17:48:16.019856 master-0 kubenswrapper[7553]: I0318 17:48:16.019652 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/37bbec19-22b8-411c-901b-d89c92b0bd4d-var-lock\") pod \"37bbec19-22b8-411c-901b-d89c92b0bd4d\" (UID: \"37bbec19-22b8-411c-901b-d89c92b0bd4d\") " Mar 18 17:48:16.019856 master-0 kubenswrapper[7553]: I0318 17:48:16.019674 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37bbec19-22b8-411c-901b-d89c92b0bd4d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "37bbec19-22b8-411c-901b-d89c92b0bd4d" (UID: "37bbec19-22b8-411c-901b-d89c92b0bd4d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:48:16.019856 master-0 kubenswrapper[7553]: I0318 17:48:16.019704 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37bbec19-22b8-411c-901b-d89c92b0bd4d-var-lock" (OuterVolumeSpecName: "var-lock") pod "37bbec19-22b8-411c-901b-d89c92b0bd4d" (UID: "37bbec19-22b8-411c-901b-d89c92b0bd4d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:48:16.020041 master-0 kubenswrapper[7553]: I0318 17:48:16.020016 7553 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37bbec19-22b8-411c-901b-d89c92b0bd4d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:48:16.020041 master-0 kubenswrapper[7553]: I0318 17:48:16.020041 7553 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/37bbec19-22b8-411c-901b-d89c92b0bd4d-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 17:48:16.023658 master-0 kubenswrapper[7553]: I0318 17:48:16.023618 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37bbec19-22b8-411c-901b-d89c92b0bd4d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "37bbec19-22b8-411c-901b-d89c92b0bd4d" (UID: "37bbec19-22b8-411c-901b-d89c92b0bd4d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 17:48:16.063022 master-0 kubenswrapper[7553]: I0318 17:48:16.062950 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f265536aba6292ead501bc9b49f327" path="/var/lib/kubelet/pods/46f265536aba6292ead501bc9b49f327/volumes"
Mar 18 17:48:16.063436 master-0 kubenswrapper[7553]: I0318 17:48:16.063352 7553 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Mar 18 17:48:16.077683 master-0 kubenswrapper[7553]: I0318 17:48:16.077623 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 18 17:48:16.077683 master-0 kubenswrapper[7553]: I0318 17:48:16.077680 7553 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="e21a741e-3aa8-4988-b113-9c8b4005a80d"
Mar 18 17:48:16.079694 master-0 kubenswrapper[7553]: I0318 17:48:16.079644 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 18 17:48:16.079754 master-0 kubenswrapper[7553]: I0318 17:48:16.079689 7553 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="e21a741e-3aa8-4988-b113-9c8b4005a80d"
Mar 18 17:48:16.123375 master-0 kubenswrapper[7553]: I0318 17:48:16.121569 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37bbec19-22b8-411c-901b-d89c92b0bd4d-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 17:48:16.402757 master-0 kubenswrapper[7553]: I0318 17:48:16.402678 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bgdql"
Mar 18 17:48:16.452943 master-0 kubenswrapper[7553]: I0318 17:48:16.452869 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bgdql"
Mar 18 17:48:16.527265 master-0 kubenswrapper[7553]: I0318 17:48:16.526236 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"37bbec19-22b8-411c-901b-d89c92b0bd4d","Type":"ContainerDied","Data":"f95a076923e4629406022fc1044a23f8f3e37ea1e3db68f6f34125f8c501b177"}
Mar 18 17:48:16.527265 master-0 kubenswrapper[7553]: I0318 17:48:16.526311 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f95a076923e4629406022fc1044a23f8f3e37ea1e3db68f6f34125f8c501b177"
Mar 18 17:48:16.527265 master-0 kubenswrapper[7553]: I0318 17:48:16.526333 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 17:48:16.531057 master-0 kubenswrapper[7553]: I0318 17:48:16.530922 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"b6f2e9aac67fef6d9cd60fe1d8d223b7762a7baf5bd08f250b7e213146055132"}
Mar 18 17:48:16.531057 master-0 kubenswrapper[7553]: I0318 17:48:16.531030 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"243a7398c383ba8c402d23dcf0f7c5b93b0d9dae2f29d0c0170f8b972de06495"}
Mar 18 17:48:16.586126 master-0 kubenswrapper[7553]: I0318 17:48:16.586049 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.586025685 podStartE2EDuration="2.586025685s" podCreationTimestamp="2026-03-18 17:48:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:48:16.584760507 +0000 UTC m=+386.730595180" watchObservedRunningTime="2026-03-18 17:48:16.586025685 +0000 UTC m=+386.731860358"
Mar 18 17:48:17.052881 master-0 kubenswrapper[7553]: I0318 17:48:17.052801 7553 scope.go:117] "RemoveContainer" containerID="1fd744dbcfad29e0a4211253fc988f9ef696171ed5032f9e61793918d136f6fa"
Mar 18 17:48:17.540220 master-0 kubenswrapper[7553]: I0318 17:48:17.540175 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-qk279_9b424d6c-7440-4c98-ac19-2d0642c696fd/kube-controller-manager-operator/2.log"
Mar 18 17:48:17.540981 master-0 kubenswrapper[7553]: I0318 17:48:17.540296 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" event={"ID":"9b424d6c-7440-4c98-ac19-2d0642c696fd","Type":"ContainerStarted","Data":"733c4831624297f5112d8028d0486f0fad40d94494178f2290df8fe70a7c80e2"}
Mar 18 17:48:24.714502 master-0 kubenswrapper[7553]: I0318 17:48:24.714411 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 17:48:24.715320 master-0 kubenswrapper[7553]: I0318 17:48:24.715295 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 17:48:24.715461 master-0 kubenswrapper[7553]: I0318 17:48:24.715440 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 17:48:24.715861 master-0 kubenswrapper[7553]: I0318 17:48:24.715840 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 17:48:24.723227 master-0 kubenswrapper[7553]: I0318 17:48:24.723142 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 17:48:24.723723 master-0 kubenswrapper[7553]: I0318 17:48:24.723685 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 17:48:25.602601 master-0 kubenswrapper[7553]: I0318 17:48:25.602546 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 17:48:25.603090 master-0 kubenswrapper[7553]: I0318 17:48:25.603056 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 17:48:32.520066 master-0 kubenswrapper[7553]: I0318 17:48:32.518816 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"]
Mar 18 17:48:32.520066 master-0 kubenswrapper[7553]: E0318 17:48:32.519053 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37bbec19-22b8-411c-901b-d89c92b0bd4d" containerName="installer"
Mar 18 17:48:32.520066 master-0 kubenswrapper[7553]: I0318 17:48:32.519068 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="37bbec19-22b8-411c-901b-d89c92b0bd4d" containerName="installer"
Mar 18 17:48:32.520066 master-0 kubenswrapper[7553]: I0318 17:48:32.519155 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="37bbec19-22b8-411c-901b-d89c92b0bd4d" containerName="installer"
Mar 18 17:48:32.524297 master-0 kubenswrapper[7553]: I0318 17:48:32.521469 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:48:32.524896 master-0 kubenswrapper[7553]: I0318 17:48:32.524865 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-gxxlp"
Mar 18 17:48:32.525203 master-0 kubenswrapper[7553]: I0318 17:48:32.525190 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 18 17:48:32.525457 master-0 kubenswrapper[7553]: I0318 17:48:32.525445 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 18 17:48:32.525688 master-0 kubenswrapper[7553]: I0318 17:48:32.525676 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 18 17:48:32.538703 master-0 kubenswrapper[7553]: I0318 17:48:32.538658 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 18 17:48:32.539008 master-0 kubenswrapper[7553]: I0318 17:48:32.538982 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 18 17:48:32.543045 master-0 kubenswrapper[7553]: I0318 17:48:32.543017 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"]
Mar 18 17:48:32.544039 master-0 kubenswrapper[7553]: I0318 17:48:32.544017 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"
Mar 18 17:48:32.545083 master-0 kubenswrapper[7553]: I0318 17:48:32.545062 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"]
Mar 18 17:48:32.545780 master-0 kubenswrapper[7553]: I0318 17:48:32.545758 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:32.555930 master-0 kubenswrapper[7553]: I0318 17:48:32.555755 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 18 17:48:32.556241 master-0 kubenswrapper[7553]: I0318 17:48:32.556176 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 18 17:48:32.556553 master-0 kubenswrapper[7553]: I0318 17:48:32.556524 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-4fc8r"
Mar 18 17:48:32.556731 master-0 kubenswrapper[7553]: I0318 17:48:32.556209 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 18 17:48:32.559641 master-0 kubenswrapper[7553]: I0318 17:48:32.559610 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc"]
Mar 18 17:48:32.561380 master-0 kubenswrapper[7553]: I0318 17:48:32.561353 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.571822 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqgm8\" (UniqueName: \"kubernetes.io/projected/656ac493-a769-4c15-9356-2050c4b9c8d8-kube-api-access-pqgm8\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.571886 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/656ac493-a769-4c15-9356-2050c4b9c8d8-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.571928 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.571967 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c38c5f03-a753-49f4-ab06-33e75a03bd45-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-d4bmc\" (UID: \"c38c5f03-a753-49f4-ab06-33e75a03bd45\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.572034 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-767c7\" (UniqueName: \"kubernetes.io/projected/e0e04440-c08b-452d-9be6-9f70a4027c92-kube-api-access-767c7\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.572079 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb496\" (UniqueName: \"kubernetes.io/projected/92153864-7959-4482-bf24-c8db36435fb5-kube-api-access-sb496\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.572123 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/656ac493-a769-4c15-9356-2050c4b9c8d8-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.572160 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-config\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.572216 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.572286 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8d74\" (UniqueName: \"kubernetes.io/projected/c38c5f03-a753-49f4-ab06-33e75a03bd45-kube-api-access-d8d74\") pod \"cluster-storage-operator-7d87854d6-d4bmc\" (UID: \"c38c5f03-a753-49f4-ab06-33e75a03bd45\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.572316 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-auth-proxy-config\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.572350 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/656ac493-a769-4c15-9356-2050c4b9c8d8-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.572406 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/656ac493-a769-4c15-9356-2050c4b9c8d8-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.576999 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.577231 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.577331 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"]
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.580945 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.581027 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.586888 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.586924 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-2mk4r"
Mar 18 17:48:32.587372 master-0 kubenswrapper[7553]: I0318 17:48:32.586926 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-clcfd"
Mar 18 17:48:32.593912 master-0 kubenswrapper[7553]: I0318 17:48:32.593865 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 18 17:48:32.594327 master-0 kubenswrapper[7553]: I0318 17:48:32.594310 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"
Mar 18 17:48:32.605317 master-0 kubenswrapper[7553]: I0318 17:48:32.602055 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 18 17:48:32.605317 master-0 kubenswrapper[7553]: I0318 17:48:32.602571 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 18 17:48:32.605317 master-0 kubenswrapper[7553]: I0318 17:48:32.602733 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-4fdq4"
Mar 18 17:48:32.605317 master-0 kubenswrapper[7553]: I0318 17:48:32.602872 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 18 17:48:32.605317 master-0 kubenswrapper[7553]: I0318 17:48:32.603016 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 18 17:48:32.606001 master-0 kubenswrapper[7553]: I0318 17:48:32.605963 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"]
Mar 18 17:48:32.618456 master-0 kubenswrapper[7553]: I0318 17:48:32.618388 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc"]
Mar 18 17:48:32.622303 master-0 kubenswrapper[7553]: I0318 17:48:32.621004 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"]
Mar 18 17:48:32.662487 master-0 kubenswrapper[7553]: I0318 17:48:32.662434 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"]
Mar 18 17:48:32.663754 master-0 kubenswrapper[7553]: I0318 17:48:32.663738 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"
Mar 18 17:48:32.667421 master-0 kubenswrapper[7553]: I0318 17:48:32.667362 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"]
Mar 18 17:48:32.668837 master-0 kubenswrapper[7553]: I0318 17:48:32.668697 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"
Mar 18 17:48:32.671461 master-0 kubenswrapper[7553]: I0318 17:48:32.670807 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-hm777"]
Mar 18 17:48:32.671662 master-0 kubenswrapper[7553]: I0318 17:48:32.671628 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 18 17:48:32.672649 master-0 kubenswrapper[7553]: I0318 17:48:32.672624 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-rqcfx"
Mar 18 17:48:32.672959 master-0 kubenswrapper[7553]: I0318 17:48:32.672945 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 18 17:48:32.673175 master-0 kubenswrapper[7553]: I0318 17:48:32.673097 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb496\" (UniqueName: \"kubernetes.io/projected/92153864-7959-4482-bf24-c8db36435fb5-kube-api-access-sb496\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:48:32.673175 master-0 kubenswrapper[7553]: I0318 17:48:32.673160 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04cef0bd-f365-4bf6-864a-1895995015d6-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"
Mar 18 17:48:32.673262 master-0 kubenswrapper[7553]: I0318 17:48:32.673180 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a94f7bff-ad61-4c53-a8eb-000a13f26971-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"
Mar 18 17:48:32.673262 master-0 kubenswrapper[7553]: I0318 17:48:32.673202 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlhls\" (UniqueName: \"kubernetes.io/projected/04cef0bd-f365-4bf6-864a-1895995015d6-kube-api-access-qlhls\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"
Mar 18 17:48:32.673262 master-0 kubenswrapper[7553]: I0318 17:48:32.673222 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/656ac493-a769-4c15-9356-2050c4b9c8d8-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:32.673262 master-0 kubenswrapper[7553]: I0318 17:48:32.673243 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-config\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:48:32.673262 master-0 kubenswrapper[7553]: I0318 17:48:32.673260 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:48:32.673463 master-0 kubenswrapper[7553]: I0318 17:48:32.673293 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"
Mar 18 17:48:32.673463 master-0 kubenswrapper[7553]: I0318 17:48:32.673323 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8d74\" (UniqueName: \"kubernetes.io/projected/c38c5f03-a753-49f4-ab06-33e75a03bd45-kube-api-access-d8d74\") pod \"cluster-storage-operator-7d87854d6-d4bmc\" (UID: \"c38c5f03-a753-49f4-ab06-33e75a03bd45\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc"
Mar 18 17:48:32.673463 master-0 kubenswrapper[7553]: I0318 17:48:32.673340 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-auth-proxy-config\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:48:32.673463 master-0 kubenswrapper[7553]: I0318 17:48:32.673363 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/656ac493-a769-4c15-9356-2050c4b9c8d8-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:32.673463 master-0 kubenswrapper[7553]: I0318 17:48:32.673398 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/656ac493-a769-4c15-9356-2050c4b9c8d8-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:32.673463 master-0 kubenswrapper[7553]: I0318 17:48:32.673427 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xvzx\" (UniqueName: \"kubernetes.io/projected/a94f7bff-ad61-4c53-a8eb-000a13f26971-kube-api-access-5xvzx\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"
Mar 18 17:48:32.673463 master-0 kubenswrapper[7553]: I0318 17:48:32.673448 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqgm8\" (UniqueName: \"kubernetes.io/projected/656ac493-a769-4c15-9356-2050c4b9c8d8-kube-api-access-pqgm8\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:32.673463 master-0 kubenswrapper[7553]: I0318 17:48:32.673469 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/656ac493-a769-4c15-9356-2050c4b9c8d8-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:32.673697 master-0 kubenswrapper[7553]: I0318 17:48:32.673491 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"
Mar 18 17:48:32.673697 master-0 kubenswrapper[7553]: I0318 17:48:32.673514 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"
Mar 18 17:48:32.673697 master-0 kubenswrapper[7553]: I0318 17:48:32.673535 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c38c5f03-a753-49f4-ab06-33e75a03bd45-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-d4bmc\" (UID: \"c38c5f03-a753-49f4-ab06-33e75a03bd45\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc"
Mar 18 17:48:32.673697 master-0 kubenswrapper[7553]: I0318 17:48:32.673555 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777"
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: I0318 17:48:32.674400 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p"]
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: I0318 17:48:32.675134 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-config\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: E0318 17:48:32.675205 7553 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: E0318 17:48:32.675246 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:33.175233188 +0000 UTC m=+403.321067861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : secret "machine-approver-tls" not found
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: I0318 17:48:32.675532 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-rgwwd"
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: E0318 17:48:32.675887 7553 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: E0318 17:48:32.675945 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls podName:e0e04440-c08b-452d-9be6-9f70a4027c92 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:33.175923278 +0000 UTC m=+403.321757951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-xnx8x" (UID: "e0e04440-c08b-452d-9be6-9f70a4027c92") : secret "samples-operator-tls" not found
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: I0318 17:48:32.675998 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/656ac493-a769-4c15-9356-2050c4b9c8d8-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: I0318 17:48:32.676126 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-auth-proxy-config\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: I0318 17:48:32.674762 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/656ac493-a769-4c15-9356-2050c4b9c8d8-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: I0318 17:48:32.673563 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-767c7\" (UniqueName: \"kubernetes.io/projected/e0e04440-c08b-452d-9be6-9f70a4027c92-kube-api-access-767c7\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: I0318 17:48:32.676415 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: I0318 17:48:32.676451 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf476\" (UniqueName: \"kubernetes.io/projected/de189d27-4c60-49f1-9119-d1fde5c37b1e-kube-api-access-tf476\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: I0318 17:48:32.676701 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: I0318 17:48:32.677134 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p"
Mar 18 17:48:32.678082 master-0 kubenswrapper[7553]: I0318 17:48:32.677219 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/656ac493-a769-4c15-9356-2050c4b9c8d8-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:32.694362 master-0 kubenswrapper[7553]: I0318 17:48:32.692912 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/656ac493-a769-4c15-9356-2050c4b9c8d8-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:32.698306 master-0 kubenswrapper[7553]: I0318 17:48:32.697911 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 18 17:48:32.698306 master-0 kubenswrapper[7553]: I0318 17:48:32.698074 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 18 17:48:32.708380 master-0 kubenswrapper[7553]: I0318 17:48:32.706799 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"]
Mar 18 17:48:32.708380 master-0 kubenswrapper[7553]: I0318 17:48:32.707823 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 18 17:48:32.708380 master-0 kubenswrapper[7553]: I0318 17:48:32.707848 7553 reflector.go:368] Caches populated for
*v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 17:48:32.708907 master-0 kubenswrapper[7553]: I0318 17:48:32.708837 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 17:48:32.708987 master-0 kubenswrapper[7553]: I0318 17:48:32.708933 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 18 17:48:32.709332 master-0 kubenswrapper[7553]: I0318 17:48:32.709244 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 18 17:48:32.710694 master-0 kubenswrapper[7553]: I0318 17:48:32.709708 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-bnhc4" Mar 18 17:48:32.710694 master-0 kubenswrapper[7553]: I0318 17:48:32.709992 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 17:48:32.710694 master-0 kubenswrapper[7553]: I0318 17:48:32.710309 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-rl6dv" Mar 18 17:48:32.714430 master-0 kubenswrapper[7553]: I0318 17:48:32.711292 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c38c5f03-a753-49f4-ab06-33e75a03bd45-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-d4bmc\" (UID: \"c38c5f03-a753-49f4-ab06-33e75a03bd45\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc" Mar 18 17:48:32.731357 master-0 kubenswrapper[7553]: I0318 17:48:32.729069 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-hm777"] Mar 18 17:48:32.742433 master-0 kubenswrapper[7553]: I0318 17:48:32.733227 7553 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"] Mar 18 17:48:32.742433 master-0 kubenswrapper[7553]: I0318 17:48:32.734384 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p"] Mar 18 17:48:32.772447 master-0 kubenswrapper[7553]: I0318 17:48:32.768368 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb496\" (UniqueName: \"kubernetes.io/projected/92153864-7959-4482-bf24-c8db36435fb5-kube-api-access-sb496\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 17:48:32.773748 master-0 kubenswrapper[7553]: I0318 17:48:32.773717 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps"] Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.777976 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8d74\" (UniqueName: \"kubernetes.io/projected/c38c5f03-a753-49f4-ab06-33e75a03bd45-kube-api-access-d8d74\") pod \"cluster-storage-operator-7d87854d6-d4bmc\" (UID: \"c38c5f03-a753-49f4-ab06-33e75a03bd45\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.778808 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.778869 7553 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.778886 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf476\" (UniqueName: \"kubernetes.io/projected/de189d27-4c60-49f1-9119-d1fde5c37b1e-kube-api-access-tf476\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.778920 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04cef0bd-f365-4bf6-864a-1895995015d6-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.778939 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a94f7bff-ad61-4c53-a8eb-000a13f26971-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.778962 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlhls\" (UniqueName: 
\"kubernetes.io/projected/04cef0bd-f365-4bf6-864a-1895995015d6-kube-api-access-qlhls\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.778993 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.779024 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.779044 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz8rf\" (UniqueName: \"kubernetes.io/projected/d4c75bee-d0d2-4261-8f89-8c3375dbd868-kube-api-access-bz8rf\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.779072 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/d4c75bee-d0d2-4261-8f89-8c3375dbd868-snapshots\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " 
pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.779091 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-images\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.779116 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c75bee-d0d2-4261-8f89-8c3375dbd868-serving-cert\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.779136 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc27m\" (UniqueName: \"kubernetes.io/projected/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-kube-api-access-fc27m\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.779163 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-config\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.779180 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.779202 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xvzx\" (UniqueName: \"kubernetes.io/projected/a94f7bff-ad61-4c53-a8eb-000a13f26971-kube-api-access-5xvzx\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.779224 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: E0318 17:48:32.779390 7553 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: E0318 17:48:32.779439 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert podName:04cef0bd-f365-4bf6-864a-1895995015d6 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:33.279423841 +0000 UTC m=+403.425258514 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-djgn7" (UID: "04cef0bd-f365-4bf6-864a-1895995015d6") : secret "cloud-credential-operator-serving-cert" not found Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: E0318 17:48:32.779689 7553 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: E0318 17:48:32.779715 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls podName:de189d27-4c60-49f1-9119-d1fde5c37b1e nodeName:}" failed. No retries permitted until 2026-03-18 17:48:33.279706699 +0000 UTC m=+403.425541372 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-zdqtc" (UID: "de189d27-4c60-49f1-9119-d1fde5c37b1e") : secret "control-plane-machine-set-operator-tls" not found Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.780529 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04cef0bd-f365-4bf6-864a-1895995015d6-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:48:32.780628 master-0 kubenswrapper[7553]: I0318 17:48:32.780618 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-767c7\" (UniqueName: 
\"kubernetes.io/projected/e0e04440-c08b-452d-9be6-9f70a4027c92-kube-api-access-767c7\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 17:48:32.781483 master-0 kubenswrapper[7553]: E0318 17:48:32.781024 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Mar 18 17:48:32.781483 master-0 kubenswrapper[7553]: E0318 17:48:32.781066 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert podName:a94f7bff-ad61-4c53-a8eb-000a13f26971 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:33.281050126 +0000 UTC m=+403.426884799 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert") pod "cluster-autoscaler-operator-866dc4744-l6hpt" (UID: "a94f7bff-ad61-4c53-a8eb-000a13f26971") : secret "cluster-autoscaler-operator-cert" not found Mar 18 17:48:32.781483 master-0 kubenswrapper[7553]: I0318 17:48:32.781219 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a94f7bff-ad61-4c53-a8eb-000a13f26971-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:48:32.788257 master-0 kubenswrapper[7553]: I0318 17:48:32.788223 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps"] Mar 18 17:48:32.790870 master-0 kubenswrapper[7553]: I0318 17:48:32.788512 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 17:48:32.813049 master-0 kubenswrapper[7553]: I0318 17:48:32.812794 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 17:48:32.813302 master-0 kubenswrapper[7553]: I0318 17:48:32.813237 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-ksrlj" Mar 18 17:48:32.814620 master-0 kubenswrapper[7553]: I0318 17:48:32.814434 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 17:48:32.815998 master-0 kubenswrapper[7553]: I0318 17:48:32.814718 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 17:48:32.815998 master-0 kubenswrapper[7553]: I0318 17:48:32.815009 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 17:48:32.815998 master-0 kubenswrapper[7553]: I0318 17:48:32.815824 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 17:48:32.822647 master-0 kubenswrapper[7553]: I0318 17:48:32.821730 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf476\" (UniqueName: \"kubernetes.io/projected/de189d27-4c60-49f1-9119-d1fde5c37b1e-kube-api-access-tf476\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 17:48:32.822647 master-0 kubenswrapper[7553]: I0318 17:48:32.822561 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqgm8\" (UniqueName: 
\"kubernetes.io/projected/656ac493-a769-4c15-9356-2050c4b9c8d8-kube-api-access-pqgm8\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" Mar 18 17:48:32.831304 master-0 kubenswrapper[7553]: I0318 17:48:32.830356 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlhls\" (UniqueName: \"kubernetes.io/projected/04cef0bd-f365-4bf6-864a-1895995015d6-kube-api-access-qlhls\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:48:32.831304 master-0 kubenswrapper[7553]: I0318 17:48:32.830840 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xvzx\" (UniqueName: \"kubernetes.io/projected/a94f7bff-ad61-4c53-a8eb-000a13f26971-kube-api-access-5xvzx\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:48:32.880304 master-0 kubenswrapper[7553]: I0318 17:48:32.880051 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/d4c75bee-d0d2-4261-8f89-8c3375dbd868-snapshots\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:32.880304 master-0 kubenswrapper[7553]: I0318 17:48:32.880103 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-images\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " 
pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:48:32.880304 master-0 kubenswrapper[7553]: I0318 17:48:32.880119 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c75bee-d0d2-4261-8f89-8c3375dbd868-serving-cert\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:32.880304 master-0 kubenswrapper[7553]: I0318 17:48:32.880139 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc27m\" (UniqueName: \"kubernetes.io/projected/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-kube-api-access-fc27m\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:48:32.880304 master-0 kubenswrapper[7553]: I0318 17:48:32.880165 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 17:48:32.880304 master-0 kubenswrapper[7553]: I0318 17:48:32.880206 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-config\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:48:32.880304 master-0 kubenswrapper[7553]: I0318 17:48:32.880225 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:32.880304 master-0 kubenswrapper[7553]: I0318 17:48:32.880252 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:48:32.880304 master-0 kubenswrapper[7553]: I0318 17:48:32.880328 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c3267271-e0c5-45d6-980c-d78e4f9eef35-proxy-tls\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 17:48:32.881082 master-0 kubenswrapper[7553]: I0318 17:48:32.880375 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7xqg\" (UniqueName: \"kubernetes.io/projected/c3267271-e0c5-45d6-980c-d78e4f9eef35-kube-api-access-z7xqg\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 17:48:32.881082 master-0 kubenswrapper[7553]: I0318 17:48:32.880414 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-images\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: 
\"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 17:48:32.881082 master-0 kubenswrapper[7553]: I0318 17:48:32.880475 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:32.881082 master-0 kubenswrapper[7553]: I0318 17:48:32.880496 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz8rf\" (UniqueName: \"kubernetes.io/projected/d4c75bee-d0d2-4261-8f89-8c3375dbd868-kube-api-access-bz8rf\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:32.881236 master-0 kubenswrapper[7553]: I0318 17:48:32.881213 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:32.881408 master-0 kubenswrapper[7553]: I0318 17:48:32.881381 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-config\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:48:32.884356 master-0 kubenswrapper[7553]: E0318 17:48:32.881902 7553 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret 
"machine-api-operator-tls" not found Mar 18 17:48:32.884356 master-0 kubenswrapper[7553]: E0318 17:48:32.881954 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. No retries permitted until 2026-03-18 17:48:33.381939105 +0000 UTC m=+403.527773778 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : secret "machine-api-operator-tls" not found Mar 18 17:48:32.884356 master-0 kubenswrapper[7553]: I0318 17:48:32.882180 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:32.884356 master-0 kubenswrapper[7553]: I0318 17:48:32.882614 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/d4c75bee-d0d2-4261-8f89-8c3375dbd868-snapshots\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:32.884356 master-0 kubenswrapper[7553]: I0318 17:48:32.883258 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-images\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:48:32.885033 
master-0 kubenswrapper[7553]: I0318 17:48:32.885012 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c75bee-d0d2-4261-8f89-8c3375dbd868-serving-cert\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:32.907247 master-0 kubenswrapper[7553]: I0318 17:48:32.907190 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc27m\" (UniqueName: \"kubernetes.io/projected/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-kube-api-access-fc27m\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:48:32.917538 master-0 kubenswrapper[7553]: I0318 17:48:32.916742 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz8rf\" (UniqueName: \"kubernetes.io/projected/d4c75bee-d0d2-4261-8f89-8c3375dbd868-kube-api-access-bz8rf\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:32.923519 master-0 kubenswrapper[7553]: I0318 17:48:32.923459 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" Mar 18 17:48:32.949948 master-0 kubenswrapper[7553]: I0318 17:48:32.949913 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc" Mar 18 17:48:32.981491 master-0 kubenswrapper[7553]: I0318 17:48:32.981444 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-images\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 17:48:32.981608 master-0 kubenswrapper[7553]: I0318 17:48:32.981560 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 17:48:32.981659 master-0 kubenswrapper[7553]: I0318 17:48:32.981607 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c3267271-e0c5-45d6-980c-d78e4f9eef35-proxy-tls\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 17:48:32.981659 master-0 kubenswrapper[7553]: I0318 17:48:32.981644 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7xqg\" (UniqueName: \"kubernetes.io/projected/c3267271-e0c5-45d6-980c-d78e4f9eef35-kube-api-access-z7xqg\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 17:48:32.984997 master-0 kubenswrapper[7553]: I0318 17:48:32.982373 7553 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-images\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 17:48:32.984997 master-0 kubenswrapper[7553]: I0318 17:48:32.983179 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 17:48:32.984997 master-0 kubenswrapper[7553]: I0318 17:48:32.984588 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c3267271-e0c5-45d6-980c-d78e4f9eef35-proxy-tls\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 17:48:33.002671 master-0 kubenswrapper[7553]: I0318 17:48:33.002598 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7xqg\" (UniqueName: \"kubernetes.io/projected/c3267271-e0c5-45d6-980c-d78e4f9eef35-kube-api-access-z7xqg\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 17:48:33.111373 master-0 kubenswrapper[7553]: I0318 17:48:33.111313 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 17:48:33.148084 master-0 kubenswrapper[7553]: I0318 17:48:33.148020 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 17:48:33.188284 master-0 kubenswrapper[7553]: I0318 17:48:33.188207 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 17:48:33.188858 master-0 kubenswrapper[7553]: E0318 17:48:33.188695 7553 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Mar 18 17:48:33.191386 master-0 kubenswrapper[7553]: E0318 17:48:33.189416 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls podName:e0e04440-c08b-452d-9be6-9f70a4027c92 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:34.188801291 +0000 UTC m=+404.334635964 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-xnx8x" (UID: "e0e04440-c08b-452d-9be6-9f70a4027c92") : secret "samples-operator-tls" not found Mar 18 17:48:33.191386 master-0 kubenswrapper[7553]: I0318 17:48:33.189466 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 17:48:33.191386 master-0 kubenswrapper[7553]: E0318 17:48:33.189749 7553 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Mar 18 17:48:33.191386 master-0 kubenswrapper[7553]: E0318 17:48:33.189901 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:34.189854151 +0000 UTC m=+404.335688824 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : secret "machine-approver-tls" not found Mar 18 17:48:33.291625 master-0 kubenswrapper[7553]: I0318 17:48:33.291493 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 17:48:33.291974 master-0 kubenswrapper[7553]: I0318 17:48:33.291932 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:48:33.292844 master-0 kubenswrapper[7553]: E0318 17:48:33.292183 7553 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Mar 18 17:48:33.292844 master-0 kubenswrapper[7553]: E0318 17:48:33.292252 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Mar 18 17:48:33.292844 master-0 kubenswrapper[7553]: E0318 17:48:33.292313 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls podName:de189d27-4c60-49f1-9119-d1fde5c37b1e nodeName:}" failed. 
No retries permitted until 2026-03-18 17:48:34.292257093 +0000 UTC m=+404.438091766 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-zdqtc" (UID: "de189d27-4c60-49f1-9119-d1fde5c37b1e") : secret "control-plane-machine-set-operator-tls" not found Mar 18 17:48:33.292844 master-0 kubenswrapper[7553]: E0318 17:48:33.292329 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert podName:a94f7bff-ad61-4c53-a8eb-000a13f26971 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:34.292323305 +0000 UTC m=+404.438157978 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert") pod "cluster-autoscaler-operator-866dc4744-l6hpt" (UID: "a94f7bff-ad61-4c53-a8eb-000a13f26971") : secret "cluster-autoscaler-operator-cert" not found Mar 18 17:48:33.293248 master-0 kubenswrapper[7553]: I0318 17:48:33.293166 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:48:33.294127 master-0 kubenswrapper[7553]: E0318 17:48:33.294059 7553 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 18 17:48:33.295316 master-0 kubenswrapper[7553]: E0318 17:48:33.295233 7553 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert podName:04cef0bd-f365-4bf6-864a-1895995015d6 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:34.294106204 +0000 UTC m=+404.439940877 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-djgn7" (UID: "04cef0bd-f365-4bf6-864a-1895995015d6") : secret "cloud-credential-operator-serving-cert" not found Mar 18 17:48:33.385837 master-0 kubenswrapper[7553]: I0318 17:48:33.385782 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc"] Mar 18 17:48:33.390325 master-0 kubenswrapper[7553]: W0318 17:48:33.390125 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc38c5f03_a753_49f4_ab06_33e75a03bd45.slice/crio-230edebdcb314d25cf4af81ff75a06a2701ace4abbe260261cb0347a76dc2bd1 WatchSource:0}: Error finding container 230edebdcb314d25cf4af81ff75a06a2701ace4abbe260261cb0347a76dc2bd1: Status 404 returned error can't find the container with id 230edebdcb314d25cf4af81ff75a06a2701ace4abbe260261cb0347a76dc2bd1 Mar 18 17:48:33.396506 master-0 kubenswrapper[7553]: I0318 17:48:33.396009 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:48:33.396506 master-0 kubenswrapper[7553]: E0318 17:48:33.396170 7553 secret.go:189] Couldn't get secret 
openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Mar 18 17:48:33.396506 master-0 kubenswrapper[7553]: E0318 17:48:33.396252 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. No retries permitted until 2026-03-18 17:48:34.396231319 +0000 UTC m=+404.542065992 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : secret "machine-api-operator-tls" not found Mar 18 17:48:33.523255 master-0 kubenswrapper[7553]: I0318 17:48:33.522839 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-hm777"] Mar 18 17:48:33.604482 master-0 kubenswrapper[7553]: I0318 17:48:33.604401 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps"] Mar 18 17:48:33.609929 master-0 kubenswrapper[7553]: W0318 17:48:33.609836 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3267271_e0c5_45d6_980c_d78e4f9eef35.slice/crio-594d4a59acf0a0da5be4aa4bcad6deb49fd2749cf6065ab7e5a5a39d60f17265 WatchSource:0}: Error finding container 594d4a59acf0a0da5be4aa4bcad6deb49fd2749cf6065ab7e5a5a39d60f17265: Status 404 returned error can't find the container with id 594d4a59acf0a0da5be4aa4bcad6deb49fd2749cf6065ab7e5a5a39d60f17265 Mar 18 17:48:33.661086 master-0 kubenswrapper[7553]: I0318 17:48:33.660502 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" 
event={"ID":"656ac493-a769-4c15-9356-2050c4b9c8d8","Type":"ContainerStarted","Data":"3207043a8dbcd1d67e3d3199c155f8c1aa1ba06f12de9e1d173f2f7d7639c727"} Mar 18 17:48:33.661955 master-0 kubenswrapper[7553]: I0318 17:48:33.661818 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc" event={"ID":"c38c5f03-a753-49f4-ab06-33e75a03bd45","Type":"ContainerStarted","Data":"230edebdcb314d25cf4af81ff75a06a2701ace4abbe260261cb0347a76dc2bd1"} Mar 18 17:48:33.665323 master-0 kubenswrapper[7553]: I0318 17:48:33.663810 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" event={"ID":"c3267271-e0c5-45d6-980c-d78e4f9eef35","Type":"ContainerStarted","Data":"594d4a59acf0a0da5be4aa4bcad6deb49fd2749cf6065ab7e5a5a39d60f17265"} Mar 18 17:48:33.668555 master-0 kubenswrapper[7553]: I0318 17:48:33.668458 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" event={"ID":"d4c75bee-d0d2-4261-8f89-8c3375dbd868","Type":"ContainerStarted","Data":"f7dc5373fa76e1da12d58e0de7c6eb4b3bc82471bd7a410a252fcb24df6cb1d6"} Mar 18 17:48:34.210171 master-0 kubenswrapper[7553]: I0318 17:48:34.209339 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 17:48:34.210171 master-0 kubenswrapper[7553]: E0318 17:48:34.209536 7553 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Mar 18 17:48:34.210171 master-0 kubenswrapper[7553]: E0318 17:48:34.209706 7553 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:36.209604019 +0000 UTC m=+406.355438692 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : secret "machine-approver-tls" not found Mar 18 17:48:34.210171 master-0 kubenswrapper[7553]: I0318 17:48:34.210058 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 17:48:34.211315 master-0 kubenswrapper[7553]: E0318 17:48:34.210242 7553 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Mar 18 17:48:34.211315 master-0 kubenswrapper[7553]: E0318 17:48:34.210300 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls podName:e0e04440-c08b-452d-9be6-9f70a4027c92 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:36.210262817 +0000 UTC m=+406.356097490 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-xnx8x" (UID: "e0e04440-c08b-452d-9be6-9f70a4027c92") : secret "samples-operator-tls" not found Mar 18 17:48:34.317493 master-0 kubenswrapper[7553]: I0318 17:48:34.311501 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:48:34.317493 master-0 kubenswrapper[7553]: E0318 17:48:34.311760 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Mar 18 17:48:34.317493 master-0 kubenswrapper[7553]: E0318 17:48:34.311848 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert podName:a94f7bff-ad61-4c53-a8eb-000a13f26971 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:36.311827396 +0000 UTC m=+406.457662069 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert") pod "cluster-autoscaler-operator-866dc4744-l6hpt" (UID: "a94f7bff-ad61-4c53-a8eb-000a13f26971") : secret "cluster-autoscaler-operator-cert" not found Mar 18 17:48:34.317493 master-0 kubenswrapper[7553]: I0318 17:48:34.312357 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:48:34.317493 master-0 kubenswrapper[7553]: I0318 17:48:34.312429 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 17:48:34.317493 master-0 kubenswrapper[7553]: E0318 17:48:34.312591 7553 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Mar 18 17:48:34.317493 master-0 kubenswrapper[7553]: E0318 17:48:34.312633 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls podName:de189d27-4c60-49f1-9119-d1fde5c37b1e nodeName:}" failed. No retries permitted until 2026-03-18 17:48:36.312614088 +0000 UTC m=+406.458448761 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-zdqtc" (UID: "de189d27-4c60-49f1-9119-d1fde5c37b1e") : secret "control-plane-machine-set-operator-tls" not found Mar 18 17:48:34.317493 master-0 kubenswrapper[7553]: E0318 17:48:34.312695 7553 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 18 17:48:34.317493 master-0 kubenswrapper[7553]: E0318 17:48:34.312715 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert podName:04cef0bd-f365-4bf6-864a-1895995015d6 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:36.31270925 +0000 UTC m=+406.458543923 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-djgn7" (UID: "04cef0bd-f365-4bf6-864a-1895995015d6") : secret "cloud-credential-operator-serving-cert" not found Mar 18 17:48:34.414595 master-0 kubenswrapper[7553]: I0318 17:48:34.414471 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:48:34.414806 master-0 kubenswrapper[7553]: E0318 17:48:34.414698 7553 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Mar 18 17:48:34.414806 master-0 kubenswrapper[7553]: E0318 17:48:34.414781 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. No retries permitted until 2026-03-18 17:48:36.414760552 +0000 UTC m=+406.560595225 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : secret "machine-api-operator-tls" not found Mar 18 17:48:34.679230 master-0 kubenswrapper[7553]: I0318 17:48:34.679115 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" event={"ID":"c3267271-e0c5-45d6-980c-d78e4f9eef35","Type":"ContainerStarted","Data":"10bb621dbbd80d4491c870657a31409bb00b55a30d167ca87c001b76fada6014"} Mar 18 17:48:34.679230 master-0 kubenswrapper[7553]: I0318 17:48:34.679186 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" event={"ID":"c3267271-e0c5-45d6-980c-d78e4f9eef35","Type":"ContainerStarted","Data":"4af4292c294ed18f4d7a20d7c6af6118981afc3f4dccaa087fc72c0bbc4f6572"} Mar 18 17:48:34.707406 master-0 kubenswrapper[7553]: I0318 17:48:34.707006 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" podStartSLOduration=2.706967877 podStartE2EDuration="2.706967877s" podCreationTimestamp="2026-03-18 17:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:48:34.703940962 +0000 UTC m=+404.849775655" watchObservedRunningTime="2026-03-18 17:48:34.706967877 +0000 UTC m=+404.852802540" Mar 18 17:48:36.240805 master-0 kubenswrapper[7553]: I0318 17:48:36.240730 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: 
\"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 17:48:36.241369 master-0 kubenswrapper[7553]: E0318 17:48:36.240920 7553 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Mar 18 17:48:36.241369 master-0 kubenswrapper[7553]: I0318 17:48:36.240964 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 17:48:36.241369 master-0 kubenswrapper[7553]: E0318 17:48:36.241079 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls podName:e0e04440-c08b-452d-9be6-9f70a4027c92 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:40.241051718 +0000 UTC m=+410.386886401 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-xnx8x" (UID: "e0e04440-c08b-452d-9be6-9f70a4027c92") : secret "samples-operator-tls" not found Mar 18 17:48:36.241369 master-0 kubenswrapper[7553]: E0318 17:48:36.241079 7553 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Mar 18 17:48:36.241369 master-0 kubenswrapper[7553]: E0318 17:48:36.241148 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. 
No retries permitted until 2026-03-18 17:48:40.241136901 +0000 UTC m=+410.386971594 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : secret "machine-approver-tls" not found Mar 18 17:48:36.341713 master-0 kubenswrapper[7553]: I0318 17:48:36.341643 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 17:48:36.341988 master-0 kubenswrapper[7553]: I0318 17:48:36.341806 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:48:36.341988 master-0 kubenswrapper[7553]: I0318 17:48:36.341953 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:48:36.342153 master-0 kubenswrapper[7553]: E0318 17:48:36.342124 7553 secret.go:189] Couldn't get secret 
openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 18 17:48:36.342231 master-0 kubenswrapper[7553]: E0318 17:48:36.342124 7553 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Mar 18 17:48:36.342377 master-0 kubenswrapper[7553]: E0318 17:48:36.342198 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Mar 18 17:48:36.342432 master-0 kubenswrapper[7553]: E0318 17:48:36.342222 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert podName:04cef0bd-f365-4bf6-864a-1895995015d6 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:40.342194775 +0000 UTC m=+410.488029468 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-djgn7" (UID: "04cef0bd-f365-4bf6-864a-1895995015d6") : secret "cloud-credential-operator-serving-cert" not found Mar 18 17:48:36.342503 master-0 kubenswrapper[7553]: E0318 17:48:36.342477 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls podName:de189d27-4c60-49f1-9119-d1fde5c37b1e nodeName:}" failed. No retries permitted until 2026-03-18 17:48:40.342434372 +0000 UTC m=+410.488269075 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-zdqtc" (UID: "de189d27-4c60-49f1-9119-d1fde5c37b1e") : secret "control-plane-machine-set-operator-tls" not found Mar 18 17:48:36.342542 master-0 kubenswrapper[7553]: E0318 17:48:36.342520 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert podName:a94f7bff-ad61-4c53-a8eb-000a13f26971 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:40.342503404 +0000 UTC m=+410.488338117 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert") pod "cluster-autoscaler-operator-866dc4744-l6hpt" (UID: "a94f7bff-ad61-4c53-a8eb-000a13f26971") : secret "cluster-autoscaler-operator-cert" not found Mar 18 17:48:36.444342 master-0 kubenswrapper[7553]: I0318 17:48:36.444181 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:48:36.444604 master-0 kubenswrapper[7553]: E0318 17:48:36.444565 7553 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Mar 18 17:48:36.444645 master-0 kubenswrapper[7553]: E0318 17:48:36.444637 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. 
No retries permitted until 2026-03-18 17:48:40.444616558 +0000 UTC m=+410.590451231 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : secret "machine-api-operator-tls" not found Mar 18 17:48:37.253361 master-0 kubenswrapper[7553]: I0318 17:48:37.253313 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5l8hh"] Mar 18 17:48:37.254842 master-0 kubenswrapper[7553]: I0318 17:48:37.254229 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 17:48:37.255616 master-0 kubenswrapper[7553]: I0318 17:48:37.255563 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fcf459dc-bd30-4143-b5c4-60fd01b46548-rootfs\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 17:48:37.255704 master-0 kubenswrapper[7553]: I0318 17:48:37.255643 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcf459dc-bd30-4143-b5c4-60fd01b46548-proxy-tls\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 17:48:37.255814 master-0 kubenswrapper[7553]: I0318 17:48:37.255756 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzp78\" (UniqueName: \"kubernetes.io/projected/fcf459dc-bd30-4143-b5c4-60fd01b46548-kube-api-access-xzp78\") pod 
\"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 17:48:37.255877 master-0 kubenswrapper[7553]: I0318 17:48:37.255847 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcf459dc-bd30-4143-b5c4-60fd01b46548-mcd-auth-proxy-config\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 17:48:37.256412 master-0 kubenswrapper[7553]: I0318 17:48:37.256384 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-cqcns" Mar 18 17:48:37.257023 master-0 kubenswrapper[7553]: I0318 17:48:37.256998 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 17:48:37.357740 master-0 kubenswrapper[7553]: I0318 17:48:37.357643 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fcf459dc-bd30-4143-b5c4-60fd01b46548-rootfs\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 17:48:37.357740 master-0 kubenswrapper[7553]: I0318 17:48:37.357723 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcf459dc-bd30-4143-b5c4-60fd01b46548-proxy-tls\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 17:48:37.357974 master-0 kubenswrapper[7553]: I0318 17:48:37.357770 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xzp78\" (UniqueName: \"kubernetes.io/projected/fcf459dc-bd30-4143-b5c4-60fd01b46548-kube-api-access-xzp78\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 17:48:37.357974 master-0 kubenswrapper[7553]: I0318 17:48:37.357817 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcf459dc-bd30-4143-b5c4-60fd01b46548-mcd-auth-proxy-config\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 17:48:37.357974 master-0 kubenswrapper[7553]: I0318 17:48:37.357767 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fcf459dc-bd30-4143-b5c4-60fd01b46548-rootfs\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 17:48:37.358829 master-0 kubenswrapper[7553]: I0318 17:48:37.358803 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcf459dc-bd30-4143-b5c4-60fd01b46548-mcd-auth-proxy-config\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 17:48:37.361564 master-0 kubenswrapper[7553]: I0318 17:48:37.361519 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcf459dc-bd30-4143-b5c4-60fd01b46548-proxy-tls\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 17:48:37.378635 master-0 
kubenswrapper[7553]: I0318 17:48:37.378579 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzp78\" (UniqueName: \"kubernetes.io/projected/fcf459dc-bd30-4143-b5c4-60fd01b46548-kube-api-access-xzp78\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 17:48:37.460088 master-0 kubenswrapper[7553]: I0318 17:48:37.460059 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 17:48:37.489671 master-0 kubenswrapper[7553]: W0318 17:48:37.489612 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcf459dc_bd30_4143_b5c4_60fd01b46548.slice/crio-98a2fd574e075391d5a514f212989330aab4c8ffe303103d815d81e2f13e5d87 WatchSource:0}: Error finding container 98a2fd574e075391d5a514f212989330aab4c8ffe303103d815d81e2f13e5d87: Status 404 returned error can't find the container with id 98a2fd574e075391d5a514f212989330aab4c8ffe303103d815d81e2f13e5d87 Mar 18 17:48:37.706207 master-0 kubenswrapper[7553]: I0318 17:48:37.706058 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" event={"ID":"d4c75bee-d0d2-4261-8f89-8c3375dbd868","Type":"ContainerStarted","Data":"350645ba3bc2c5d9132063ea0cd6e79ddd087baff486b5e73a7bad9c73b8c8c7"} Mar 18 17:48:37.711446 master-0 kubenswrapper[7553]: I0318 17:48:37.711388 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" event={"ID":"fcf459dc-bd30-4143-b5c4-60fd01b46548","Type":"ContainerStarted","Data":"53fbe6be32fddfe0cb9a4a480023f6542a6316d21b3dba1c04ecf7bb1fb6a6df"} Mar 18 17:48:37.713870 master-0 kubenswrapper[7553]: I0318 17:48:37.711510 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" event={"ID":"fcf459dc-bd30-4143-b5c4-60fd01b46548","Type":"ContainerStarted","Data":"98a2fd574e075391d5a514f212989330aab4c8ffe303103d815d81e2f13e5d87"} Mar 18 17:48:37.714872 master-0 kubenswrapper[7553]: I0318 17:48:37.714819 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" event={"ID":"656ac493-a769-4c15-9356-2050c4b9c8d8","Type":"ContainerStarted","Data":"fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290"} Mar 18 17:48:37.717330 master-0 kubenswrapper[7553]: I0318 17:48:37.716981 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc" event={"ID":"c38c5f03-a753-49f4-ab06-33e75a03bd45","Type":"ContainerStarted","Data":"a3a77ef6f8f671fb5f80e7a57420cd1c8a6c6e49b81d12a2df38ba7e576274fc"} Mar 18 17:48:37.735666 master-0 kubenswrapper[7553]: I0318 17:48:37.735567 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" podStartSLOduration=2.026740131 podStartE2EDuration="5.735540391s" podCreationTimestamp="2026-03-18 17:48:32 +0000 UTC" firstStartedPulling="2026-03-18 17:48:33.527441488 +0000 UTC m=+403.673276201" lastFinishedPulling="2026-03-18 17:48:37.236241788 +0000 UTC m=+407.382076461" observedRunningTime="2026-03-18 17:48:37.733819542 +0000 UTC m=+407.879654215" watchObservedRunningTime="2026-03-18 17:48:37.735540391 +0000 UTC m=+407.881375084" Mar 18 17:48:37.806307 master-0 kubenswrapper[7553]: I0318 17:48:37.803622 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc" podStartSLOduration=1.98568289 podStartE2EDuration="5.803590829s" podCreationTimestamp="2026-03-18 17:48:32 +0000 UTC" 
firstStartedPulling="2026-03-18 17:48:33.396156247 +0000 UTC m=+403.541990920" lastFinishedPulling="2026-03-18 17:48:37.214064176 +0000 UTC m=+407.359898859" observedRunningTime="2026-03-18 17:48:37.798351272 +0000 UTC m=+407.944185955" watchObservedRunningTime="2026-03-18 17:48:37.803590829 +0000 UTC m=+407.949425502" Mar 18 17:48:38.725953 master-0 kubenswrapper[7553]: I0318 17:48:38.725834 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" event={"ID":"fcf459dc-bd30-4143-b5c4-60fd01b46548","Type":"ContainerStarted","Data":"3d21f3f1a23a350e575a4283d6eb844273ebe993d42962e1c7233b8a8739cd45"} Mar 18 17:48:38.727988 master-0 kubenswrapper[7553]: I0318 17:48:38.727953 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv_656ac493-a769-4c15-9356-2050c4b9c8d8/kube-rbac-proxy/0.log" Mar 18 17:48:38.728862 master-0 kubenswrapper[7553]: I0318 17:48:38.728817 7553 generic.go:334] "Generic (PLEG): container finished" podID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerID="47f5e2187a6f113ac1287a382a487e278c4ddac0178ebb0b16165e9baddf0e85" exitCode=1 Mar 18 17:48:38.728968 master-0 kubenswrapper[7553]: I0318 17:48:38.728894 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" event={"ID":"656ac493-a769-4c15-9356-2050c4b9c8d8","Type":"ContainerDied","Data":"47f5e2187a6f113ac1287a382a487e278c4ddac0178ebb0b16165e9baddf0e85"} Mar 18 17:48:38.729021 master-0 kubenswrapper[7553]: I0318 17:48:38.728992 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" 
event={"ID":"656ac493-a769-4c15-9356-2050c4b9c8d8","Type":"ContainerStarted","Data":"d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751"} Mar 18 17:48:38.729437 master-0 kubenswrapper[7553]: I0318 17:48:38.729406 7553 scope.go:117] "RemoveContainer" containerID="47f5e2187a6f113ac1287a382a487e278c4ddac0178ebb0b16165e9baddf0e85" Mar 18 17:48:38.745320 master-0 kubenswrapper[7553]: I0318 17:48:38.745198 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" podStartSLOduration=1.7451745939999999 podStartE2EDuration="1.745174594s" podCreationTimestamp="2026-03-18 17:48:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:48:38.744166876 +0000 UTC m=+408.890001559" watchObservedRunningTime="2026-03-18 17:48:38.745174594 +0000 UTC m=+408.891009277" Mar 18 17:48:39.739129 master-0 kubenswrapper[7553]: I0318 17:48:39.739057 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv_656ac493-a769-4c15-9356-2050c4b9c8d8/kube-rbac-proxy/1.log" Mar 18 17:48:39.740206 master-0 kubenswrapper[7553]: I0318 17:48:39.739814 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv_656ac493-a769-4c15-9356-2050c4b9c8d8/kube-rbac-proxy/0.log" Mar 18 17:48:39.740781 master-0 kubenswrapper[7553]: I0318 17:48:39.740723 7553 generic.go:334] "Generic (PLEG): container finished" podID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerID="b7bb5390f984301665f3ca607ecedbc67713a42573a17188652c2b439e42a0e2" exitCode=1 Mar 18 17:48:39.740928 master-0 kubenswrapper[7553]: I0318 17:48:39.740847 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" event={"ID":"656ac493-a769-4c15-9356-2050c4b9c8d8","Type":"ContainerDied","Data":"b7bb5390f984301665f3ca607ecedbc67713a42573a17188652c2b439e42a0e2"} Mar 18 17:48:39.741027 master-0 kubenswrapper[7553]: I0318 17:48:39.740928 7553 scope.go:117] "RemoveContainer" containerID="47f5e2187a6f113ac1287a382a487e278c4ddac0178ebb0b16165e9baddf0e85" Mar 18 17:48:39.741552 master-0 kubenswrapper[7553]: I0318 17:48:39.741480 7553 scope.go:117] "RemoveContainer" containerID="b7bb5390f984301665f3ca607ecedbc67713a42573a17188652c2b439e42a0e2" Mar 18 17:48:39.742202 master-0 kubenswrapper[7553]: E0318 17:48:39.742126 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv_openshift-cloud-controller-manager-operator(656ac493-a769-4c15-9356-2050c4b9c8d8)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" Mar 18 17:48:40.342844 master-0 kubenswrapper[7553]: I0318 17:48:40.342739 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 17:48:40.343104 master-0 kubenswrapper[7553]: I0318 17:48:40.342861 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod 
\"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 17:48:40.343193 master-0 kubenswrapper[7553]: E0318 17:48:40.343106 7553 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Mar 18 17:48:40.343193 master-0 kubenswrapper[7553]: I0318 17:48:40.343147 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:48:40.343351 master-0 kubenswrapper[7553]: E0318 17:48:40.343238 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls podName:de189d27-4c60-49f1-9119-d1fde5c37b1e nodeName:}" failed. No retries permitted until 2026-03-18 17:48:48.34320322 +0000 UTC m=+418.489037933 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-zdqtc" (UID: "de189d27-4c60-49f1-9119-d1fde5c37b1e") : secret "control-plane-machine-set-operator-tls" not found Mar 18 17:48:40.343445 master-0 kubenswrapper[7553]: E0318 17:48:40.343382 7553 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Mar 18 17:48:40.343445 master-0 kubenswrapper[7553]: I0318 17:48:40.343422 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 17:48:40.343579 master-0 kubenswrapper[7553]: E0318 17:48:40.343478 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:48.343448367 +0000 UTC m=+418.489283080 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : secret "machine-approver-tls" not found Mar 18 17:48:40.343579 master-0 kubenswrapper[7553]: I0318 17:48:40.343515 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:48:40.343712 master-0 kubenswrapper[7553]: E0318 17:48:40.343603 7553 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Mar 18 17:48:40.343712 master-0 kubenswrapper[7553]: E0318 17:48:40.343619 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Mar 18 17:48:40.343712 master-0 kubenswrapper[7553]: E0318 17:48:40.343666 7553 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 18 17:48:40.343908 master-0 kubenswrapper[7553]: E0318 17:48:40.343676 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls podName:e0e04440-c08b-452d-9be6-9f70a4027c92 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:48.343653863 +0000 UTC m=+418.489488736 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-xnx8x" (UID: "e0e04440-c08b-452d-9be6-9f70a4027c92") : secret "samples-operator-tls" not found Mar 18 17:48:40.343908 master-0 kubenswrapper[7553]: E0318 17:48:40.343797 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert podName:04cef0bd-f365-4bf6-864a-1895995015d6 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:48.343755426 +0000 UTC m=+418.489590169 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-djgn7" (UID: "04cef0bd-f365-4bf6-864a-1895995015d6") : secret "cloud-credential-operator-serving-cert" not found Mar 18 17:48:40.343908 master-0 kubenswrapper[7553]: E0318 17:48:40.343848 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert podName:a94f7bff-ad61-4c53-a8eb-000a13f26971 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:48.343828088 +0000 UTC m=+418.489662791 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert") pod "cluster-autoscaler-operator-866dc4744-l6hpt" (UID: "a94f7bff-ad61-4c53-a8eb-000a13f26971") : secret "cluster-autoscaler-operator-cert" not found Mar 18 17:48:40.446465 master-0 kubenswrapper[7553]: I0318 17:48:40.446363 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:48:40.446844 master-0 kubenswrapper[7553]: E0318 17:48:40.446662 7553 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Mar 18 17:48:40.446844 master-0 kubenswrapper[7553]: E0318 17:48:40.446785 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. No retries permitted until 2026-03-18 17:48:48.446751564 +0000 UTC m=+418.592586277 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : secret "machine-api-operator-tls" not found Mar 18 17:48:40.748950 master-0 kubenswrapper[7553]: I0318 17:48:40.748888 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv_656ac493-a769-4c15-9356-2050c4b9c8d8/kube-rbac-proxy/1.log" Mar 18 17:48:40.751483 master-0 kubenswrapper[7553]: I0318 17:48:40.751446 7553 scope.go:117] "RemoveContainer" containerID="b7bb5390f984301665f3ca607ecedbc67713a42573a17188652c2b439e42a0e2" Mar 18 17:48:40.751808 master-0 kubenswrapper[7553]: E0318 17:48:40.751767 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv_openshift-cloud-controller-manager-operator(656ac493-a769-4c15-9356-2050c4b9c8d8)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" Mar 18 17:48:41.477645 master-0 kubenswrapper[7553]: I0318 17:48:41.477575 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq"] Mar 18 17:48:41.479992 master-0 kubenswrapper[7553]: I0318 17:48:41.479979 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" Mar 18 17:48:41.483258 master-0 kubenswrapper[7553]: I0318 17:48:41.483220 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 18 17:48:41.483510 master-0 kubenswrapper[7553]: I0318 17:48:41.483220 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-npx6j" Mar 18 17:48:41.495551 master-0 kubenswrapper[7553]: I0318 17:48:41.495260 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq"] Mar 18 17:48:41.567667 master-0 kubenswrapper[7553]: I0318 17:48:41.567579 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88hkw\" (UniqueName: \"kubernetes.io/projected/89e6c3d6-7bd5-4df6-90db-3a349f644afb-kube-api-access-88hkw\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" Mar 18 17:48:41.567998 master-0 kubenswrapper[7553]: I0318 17:48:41.567844 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/89e6c3d6-7bd5-4df6-90db-3a349f644afb-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" Mar 18 17:48:41.568080 master-0 kubenswrapper[7553]: I0318 17:48:41.568020 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/89e6c3d6-7bd5-4df6-90db-3a349f644afb-proxy-tls\") pod 
\"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" Mar 18 17:48:41.669548 master-0 kubenswrapper[7553]: I0318 17:48:41.669468 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88hkw\" (UniqueName: \"kubernetes.io/projected/89e6c3d6-7bd5-4df6-90db-3a349f644afb-kube-api-access-88hkw\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" Mar 18 17:48:41.669966 master-0 kubenswrapper[7553]: I0318 17:48:41.669942 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/89e6c3d6-7bd5-4df6-90db-3a349f644afb-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" Mar 18 17:48:41.670108 master-0 kubenswrapper[7553]: I0318 17:48:41.670088 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/89e6c3d6-7bd5-4df6-90db-3a349f644afb-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" Mar 18 17:48:41.671967 master-0 kubenswrapper[7553]: I0318 17:48:41.671880 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/89e6c3d6-7bd5-4df6-90db-3a349f644afb-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " 
pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" Mar 18 17:48:41.677373 master-0 kubenswrapper[7553]: I0318 17:48:41.677328 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/89e6c3d6-7bd5-4df6-90db-3a349f644afb-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" Mar 18 17:48:41.700640 master-0 kubenswrapper[7553]: I0318 17:48:41.700431 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88hkw\" (UniqueName: \"kubernetes.io/projected/89e6c3d6-7bd5-4df6-90db-3a349f644afb-kube-api-access-88hkw\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" Mar 18 17:48:41.812494 master-0 kubenswrapper[7553]: I0318 17:48:41.812431 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" Mar 18 17:48:42.353130 master-0 kubenswrapper[7553]: I0318 17:48:42.352757 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq"] Mar 18 17:48:42.362372 master-0 kubenswrapper[7553]: W0318 17:48:42.362260 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89e6c3d6_7bd5_4df6_90db_3a349f644afb.slice/crio-c23a831f572d860a391d4d959c13e33c442846ac9ce5af54ffdc6e3a90052296 WatchSource:0}: Error finding container c23a831f572d860a391d4d959c13e33c442846ac9ce5af54ffdc6e3a90052296: Status 404 returned error can't find the container with id c23a831f572d860a391d4d959c13e33c442846ac9ce5af54ffdc6e3a90052296 Mar 18 17:48:42.727496 master-0 kubenswrapper[7553]: I0318 17:48:42.727453 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg"] Mar 18 17:48:42.728436 master-0 kubenswrapper[7553]: I0318 17:48:42.728420 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg" Mar 18 17:48:42.731506 master-0 kubenswrapper[7553]: I0318 17:48:42.731491 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 18 17:48:42.733419 master-0 kubenswrapper[7553]: I0318 17:48:42.733333 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7dcf5569b5-m5dh4"] Mar 18 17:48:42.734608 master-0 kubenswrapper[7553]: I0318 17:48:42.734403 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.737014 master-0 kubenswrapper[7553]: I0318 17:48:42.736971 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-nlqpp"] Mar 18 17:48:42.740250 master-0 kubenswrapper[7553]: I0318 17:48:42.738620 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nlqpp" Mar 18 17:48:42.740957 master-0 kubenswrapper[7553]: I0318 17:48:42.740920 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 17:48:42.741081 master-0 kubenswrapper[7553]: I0318 17:48:42.740917 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 18 17:48:42.741290 master-0 kubenswrapper[7553]: I0318 17:48:42.741249 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 18 17:48:42.741492 master-0 kubenswrapper[7553]: I0318 17:48:42.741469 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 17:48:42.742879 master-0 kubenswrapper[7553]: I0318 17:48:42.741760 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 18 17:48:42.742879 master-0 kubenswrapper[7553]: I0318 17:48:42.741918 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 17:48:42.756689 master-0 kubenswrapper[7553]: I0318 17:48:42.756630 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg"] Mar 18 17:48:42.774631 master-0 kubenswrapper[7553]: I0318 17:48:42.765430 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-nlqpp"] Mar 18 17:48:42.774631 master-0 kubenswrapper[7553]: I0318 17:48:42.772720 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" event={"ID":"89e6c3d6-7bd5-4df6-90db-3a349f644afb","Type":"ContainerStarted","Data":"688e6445f03f6d6d1fe8c28da63f4970638ae6cc63157d485e4456d88b827cd4"} Mar 18 17:48:42.774631 master-0 kubenswrapper[7553]: I0318 17:48:42.772759 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" event={"ID":"89e6c3d6-7bd5-4df6-90db-3a349f644afb","Type":"ContainerStarted","Data":"c82dc79407cc2ebdd830e24e81c06ba7f22e81e0353adc5d05a21365ba7f195f"} Mar 18 17:48:42.774631 master-0 kubenswrapper[7553]: I0318 17:48:42.772774 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" event={"ID":"89e6c3d6-7bd5-4df6-90db-3a349f644afb","Type":"ContainerStarted","Data":"c23a831f572d860a391d4d959c13e33c442846ac9ce5af54ffdc6e3a90052296"} Mar 18 17:48:42.799021 master-0 kubenswrapper[7553]: I0318 17:48:42.798463 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-default-certificate\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.799021 master-0 kubenswrapper[7553]: I0318 17:48:42.798585 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-stats-auth\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " 
pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.799021 master-0 kubenswrapper[7553]: I0318 17:48:42.798636 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljbl7\" (UniqueName: \"kubernetes.io/projected/7d72bb42-1ee6-4f61-9515-d1c5bafa896f-kube-api-access-ljbl7\") pod \"network-check-source-b4bf74f6-nlqpp\" (UID: \"7d72bb42-1ee6-4f61-9515-d1c5bafa896f\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nlqpp" Mar 18 17:48:42.799021 master-0 kubenswrapper[7553]: I0318 17:48:42.798714 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e2d0d0d-54ca-475b-be8a-4eb6d4434e74-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-7r9qg\" (UID: \"9e2d0d0d-54ca-475b-be8a-4eb6d4434e74\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg" Mar 18 17:48:42.799021 master-0 kubenswrapper[7553]: I0318 17:48:42.798745 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-metrics-certs\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.799021 master-0 kubenswrapper[7553]: I0318 17:48:42.798801 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6c68\" (UniqueName: \"kubernetes.io/projected/c57f282a-829b-41b2-827a-f4bc598245a2-kube-api-access-d6c68\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.799021 master-0 kubenswrapper[7553]: I0318 17:48:42.798908 7553 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c57f282a-829b-41b2-827a-f4bc598245a2-service-ca-bundle\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.837999 master-0 kubenswrapper[7553]: I0318 17:48:42.835313 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" podStartSLOduration=1.835265427 podStartE2EDuration="1.835265427s" podCreationTimestamp="2026-03-18 17:48:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:48:42.831925574 +0000 UTC m=+412.977760267" watchObservedRunningTime="2026-03-18 17:48:42.835265427 +0000 UTC m=+412.981100120" Mar 18 17:48:42.900604 master-0 kubenswrapper[7553]: I0318 17:48:42.900377 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-stats-auth\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.900604 master-0 kubenswrapper[7553]: I0318 17:48:42.900467 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljbl7\" (UniqueName: \"kubernetes.io/projected/7d72bb42-1ee6-4f61-9515-d1c5bafa896f-kube-api-access-ljbl7\") pod \"network-check-source-b4bf74f6-nlqpp\" (UID: \"7d72bb42-1ee6-4f61-9515-d1c5bafa896f\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nlqpp" Mar 18 17:48:42.900893 master-0 kubenswrapper[7553]: I0318 17:48:42.900695 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: 
\"kubernetes.io/secret/9e2d0d0d-54ca-475b-be8a-4eb6d4434e74-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-7r9qg\" (UID: \"9e2d0d0d-54ca-475b-be8a-4eb6d4434e74\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg" Mar 18 17:48:42.900893 master-0 kubenswrapper[7553]: I0318 17:48:42.900751 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-metrics-certs\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.900893 master-0 kubenswrapper[7553]: I0318 17:48:42.900796 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6c68\" (UniqueName: \"kubernetes.io/projected/c57f282a-829b-41b2-827a-f4bc598245a2-kube-api-access-d6c68\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.900986 master-0 kubenswrapper[7553]: I0318 17:48:42.900900 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c57f282a-829b-41b2-827a-f4bc598245a2-service-ca-bundle\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.901021 master-0 kubenswrapper[7553]: I0318 17:48:42.900981 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-default-certificate\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.902137 master-0 
kubenswrapper[7553]: I0318 17:48:42.902075 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c57f282a-829b-41b2-827a-f4bc598245a2-service-ca-bundle\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.903819 master-0 kubenswrapper[7553]: I0318 17:48:42.903778 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e2d0d0d-54ca-475b-be8a-4eb6d4434e74-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-7r9qg\" (UID: \"9e2d0d0d-54ca-475b-be8a-4eb6d4434e74\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg" Mar 18 17:48:42.904392 master-0 kubenswrapper[7553]: I0318 17:48:42.904204 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-stats-auth\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.906072 master-0 kubenswrapper[7553]: I0318 17:48:42.906039 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-default-certificate\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.906182 master-0 kubenswrapper[7553]: I0318 17:48:42.906044 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-metrics-certs\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " 
pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.923191 master-0 kubenswrapper[7553]: I0318 17:48:42.923128 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6c68\" (UniqueName: \"kubernetes.io/projected/c57f282a-829b-41b2-827a-f4bc598245a2-kube-api-access-d6c68\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:42.927289 master-0 kubenswrapper[7553]: I0318 17:48:42.927218 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljbl7\" (UniqueName: \"kubernetes.io/projected/7d72bb42-1ee6-4f61-9515-d1c5bafa896f-kube-api-access-ljbl7\") pod \"network-check-source-b4bf74f6-nlqpp\" (UID: \"7d72bb42-1ee6-4f61-9515-d1c5bafa896f\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nlqpp" Mar 18 17:48:43.058834 master-0 kubenswrapper[7553]: I0318 17:48:43.058751 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg" Mar 18 17:48:43.099129 master-0 kubenswrapper[7553]: I0318 17:48:43.099051 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:43.125673 master-0 kubenswrapper[7553]: I0318 17:48:43.117244 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nlqpp" Mar 18 17:48:43.125673 master-0 kubenswrapper[7553]: I0318 17:48:43.118441 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:48:43.158872 master-0 kubenswrapper[7553]: W0318 17:48:43.158715 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc57f282a_829b_41b2_827a_f4bc598245a2.slice/crio-d32f636809075be6cf635b9dbbf658143a67ef27c719c0247cb93d87c34ccc46 WatchSource:0}: Error finding container d32f636809075be6cf635b9dbbf658143a67ef27c719c0247cb93d87c34ccc46: Status 404 returned error can't find the container with id d32f636809075be6cf635b9dbbf658143a67ef27c719c0247cb93d87c34ccc46 Mar 18 17:48:43.784221 master-0 kubenswrapper[7553]: I0318 17:48:43.784130 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" event={"ID":"c57f282a-829b-41b2-827a-f4bc598245a2","Type":"ContainerStarted","Data":"d32f636809075be6cf635b9dbbf658143a67ef27c719c0247cb93d87c34ccc46"} Mar 18 17:48:44.414559 master-0 kubenswrapper[7553]: I0318 17:48:44.414468 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg"] Mar 18 17:48:44.439399 master-0 kubenswrapper[7553]: W0318 17:48:44.439328 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e2d0d0d_54ca_475b_be8a_4eb6d4434e74.slice/crio-819894978d4b63b70f3c5ba05beeaf66b4fdd7279c891272a2e358b0b8143717 WatchSource:0}: Error finding container 819894978d4b63b70f3c5ba05beeaf66b4fdd7279c891272a2e358b0b8143717: Status 404 returned error can't find the container with id 819894978d4b63b70f3c5ba05beeaf66b4fdd7279c891272a2e358b0b8143717 Mar 18 
17:48:44.810077 master-0 kubenswrapper[7553]: I0318 17:48:44.809984 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg" event={"ID":"9e2d0d0d-54ca-475b-be8a-4eb6d4434e74","Type":"ContainerStarted","Data":"819894978d4b63b70f3c5ba05beeaf66b4fdd7279c891272a2e358b0b8143717"} Mar 18 17:48:45.095147 master-0 kubenswrapper[7553]: I0318 17:48:45.094894 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-nlqpp"] Mar 18 17:48:45.112775 master-0 kubenswrapper[7553]: W0318 17:48:45.112706 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d72bb42_1ee6_4f61_9515_d1c5bafa896f.slice/crio-90cc2b02445555cd2d532e865fff8c504dc1d3510b60d980449ac43b37071918 WatchSource:0}: Error finding container 90cc2b02445555cd2d532e865fff8c504dc1d3510b60d980449ac43b37071918: Status 404 returned error can't find the container with id 90cc2b02445555cd2d532e865fff8c504dc1d3510b60d980449ac43b37071918 Mar 18 17:48:45.164563 master-0 kubenswrapper[7553]: I0318 17:48:45.164513 7553 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 17:48:45.740452 master-0 kubenswrapper[7553]: I0318 17:48:45.740414 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-8sxdf_c087ce06-a16b-41f4-ba93-8fccdee09003/authentication-operator/2.log" Mar 18 17:48:45.818526 master-0 kubenswrapper[7553]: I0318 17:48:45.818472 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nlqpp" event={"ID":"7d72bb42-1ee6-4f61-9515-d1c5bafa896f","Type":"ContainerStarted","Data":"f24e001f37478d75ca8c0aebbe9de5bdd57b1290712b15777c1c59d17efb6a0f"} Mar 18 17:48:45.818526 master-0 
kubenswrapper[7553]: I0318 17:48:45.818537 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nlqpp" event={"ID":"7d72bb42-1ee6-4f61-9515-d1c5bafa896f","Type":"ContainerStarted","Data":"90cc2b02445555cd2d532e865fff8c504dc1d3510b60d980449ac43b37071918"} Mar 18 17:48:45.850642 master-0 kubenswrapper[7553]: I0318 17:48:45.847686 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nlqpp" podStartSLOduration=469.847660489 podStartE2EDuration="7m49.847660489s" podCreationTimestamp="2026-03-18 17:40:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:48:45.837646238 +0000 UTC m=+415.983480911" watchObservedRunningTime="2026-03-18 17:48:45.847660489 +0000 UTC m=+415.993495162" Mar 18 17:48:45.938077 master-0 kubenswrapper[7553]: I0318 17:48:45.938021 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-8sxdf_c087ce06-a16b-41f4-ba93-8fccdee09003/authentication-operator/3.log" Mar 18 17:48:46.336017 master-0 kubenswrapper[7553]: I0318 17:48:46.335960 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-688fbbb854-6n26v_43fab0f2-5cfd-4b5e-a632-728fd5b960fd/fix-audit-permissions/0.log" Mar 18 17:48:46.540493 master-0 kubenswrapper[7553]: I0318 17:48:46.540440 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-688fbbb854-6n26v_43fab0f2-5cfd-4b5e-a632-728fd5b960fd/oauth-apiserver/0.log" Mar 18 17:48:46.741586 master-0 kubenswrapper[7553]: I0318 17:48:46.741468 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-rws9x_0100a259-1358-45e8-8191-4e1f9a14ec89/etcd-operator/2.log" Mar 18 
17:48:46.938865 master-0 kubenswrapper[7553]: I0318 17:48:46.938775 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-rws9x_0100a259-1358-45e8-8191-4e1f9a14ec89/etcd-operator/3.log" Mar 18 17:48:47.139920 master-0 kubenswrapper[7553]: I0318 17:48:47.139875 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/setup/0.log" Mar 18 17:48:47.344363 master-0 kubenswrapper[7553]: I0318 17:48:47.344324 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-ensure-env-vars/0.log" Mar 18 17:48:47.537975 master-0 kubenswrapper[7553]: I0318 17:48:47.537920 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-resources-copy/0.log" Mar 18 17:48:47.578527 master-0 kubenswrapper[7553]: I0318 17:48:47.578453 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-mpmxb"] Mar 18 17:48:47.579790 master-0 kubenswrapper[7553]: I0318 17:48:47.579754 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mpmxb" Mar 18 17:48:47.582705 master-0 kubenswrapper[7553]: I0318 17:48:47.582634 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-bwq44" Mar 18 17:48:47.582851 master-0 kubenswrapper[7553]: I0318 17:48:47.582834 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 18 17:48:47.585825 master-0 kubenswrapper[7553]: I0318 17:48:47.585800 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 18 17:48:47.695034 master-0 kubenswrapper[7553]: I0318 17:48:47.694884 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-node-bootstrap-token\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " pod="openshift-machine-config-operator/machine-config-server-mpmxb" Mar 18 17:48:47.695435 master-0 kubenswrapper[7553]: I0318 17:48:47.695404 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd9sc\" (UniqueName: \"kubernetes.io/projected/b3385316-45f0-46c5-ac82-683168db5878-kube-api-access-wd9sc\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " pod="openshift-machine-config-operator/machine-config-server-mpmxb" Mar 18 17:48:47.695657 master-0 kubenswrapper[7553]: I0318 17:48:47.695628 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-certs\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " 
pod="openshift-machine-config-operator/machine-config-server-mpmxb" Mar 18 17:48:47.744067 master-0 kubenswrapper[7553]: I0318 17:48:47.744006 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 18 17:48:47.797369 master-0 kubenswrapper[7553]: I0318 17:48:47.797291 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-node-bootstrap-token\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " pod="openshift-machine-config-operator/machine-config-server-mpmxb" Mar 18 17:48:47.797673 master-0 kubenswrapper[7553]: I0318 17:48:47.797434 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd9sc\" (UniqueName: \"kubernetes.io/projected/b3385316-45f0-46c5-ac82-683168db5878-kube-api-access-wd9sc\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " pod="openshift-machine-config-operator/machine-config-server-mpmxb" Mar 18 17:48:47.797673 master-0 kubenswrapper[7553]: I0318 17:48:47.797483 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-certs\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " pod="openshift-machine-config-operator/machine-config-server-mpmxb" Mar 18 17:48:47.806268 master-0 kubenswrapper[7553]: I0318 17:48:47.806211 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-node-bootstrap-token\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " pod="openshift-machine-config-operator/machine-config-server-mpmxb" Mar 18 
17:48:47.806487 master-0 kubenswrapper[7553]: I0318 17:48:47.806245 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-certs\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " pod="openshift-machine-config-operator/machine-config-server-mpmxb" Mar 18 17:48:47.836191 master-0 kubenswrapper[7553]: I0318 17:48:47.836127 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd9sc\" (UniqueName: \"kubernetes.io/projected/b3385316-45f0-46c5-ac82-683168db5878-kube-api-access-wd9sc\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " pod="openshift-machine-config-operator/machine-config-server-mpmxb" Mar 18 17:48:47.839436 master-0 kubenswrapper[7553]: I0318 17:48:47.839382 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg" event={"ID":"9e2d0d0d-54ca-475b-be8a-4eb6d4434e74","Type":"ContainerStarted","Data":"279161cbc60f89544c68db2a4cd13b9d564d287fe42ea6664d2a3a946a1e0c00"} Mar 18 17:48:47.839955 master-0 kubenswrapper[7553]: I0318 17:48:47.839914 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg" Mar 18 17:48:47.842862 master-0 kubenswrapper[7553]: I0318 17:48:47.842815 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" event={"ID":"c57f282a-829b-41b2-827a-f4bc598245a2","Type":"ContainerStarted","Data":"3d1a4c794f84645b132cca3ce7dc17d228df153769dd3f1d6b34979465df7e8d"} Mar 18 17:48:47.850393 master-0 kubenswrapper[7553]: I0318 17:48:47.850198 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg" Mar 18 17:48:47.870577 
master-0 kubenswrapper[7553]: I0318 17:48:47.870401 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg" podStartSLOduration=367.139334803 podStartE2EDuration="6m9.870378704s" podCreationTimestamp="2026-03-18 17:42:38 +0000 UTC" firstStartedPulling="2026-03-18 17:48:44.443721125 +0000 UTC m=+414.589555838" lastFinishedPulling="2026-03-18 17:48:47.174765066 +0000 UTC m=+417.320599739" observedRunningTime="2026-03-18 17:48:47.867784082 +0000 UTC m=+418.013618765" watchObservedRunningTime="2026-03-18 17:48:47.870378704 +0000 UTC m=+418.016213387" Mar 18 17:48:47.911676 master-0 kubenswrapper[7553]: I0318 17:48:47.911610 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-mpmxb" Mar 18 17:48:47.944410 master-0 kubenswrapper[7553]: I0318 17:48:47.940403 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podStartSLOduration=390.869275986 podStartE2EDuration="6m34.940378048s" podCreationTimestamp="2026-03-18 17:42:13 +0000 UTC" firstStartedPulling="2026-03-18 17:48:43.16521213 +0000 UTC m=+413.311046843" lastFinishedPulling="2026-03-18 17:48:47.236314232 +0000 UTC m=+417.382148905" observedRunningTime="2026-03-18 17:48:47.902716671 +0000 UTC m=+418.048551344" watchObservedRunningTime="2026-03-18 17:48:47.940378048 +0000 UTC m=+418.086212711" Mar 18 17:48:47.949073 master-0 kubenswrapper[7553]: I0318 17:48:47.948870 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 18 17:48:48.100219 master-0 kubenswrapper[7553]: I0318 17:48:48.100162 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:48:48.103148 master-0 kubenswrapper[7553]: I0318 
17:48:48.103091 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:48:48.103148 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:48:48.103148 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:48:48.103148 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:48:48.103487 master-0 kubenswrapper[7553]: I0318 17:48:48.103452 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:48:48.138754 master-0 kubenswrapper[7553]: I0318 17:48:48.138704 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 17:48:48.338089 master-0 kubenswrapper[7553]: I0318 17:48:48.338039 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-readyz/0.log" Mar 18 17:48:48.411064 master-0 kubenswrapper[7553]: I0318 17:48:48.411020 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 17:48:48.411369 master-0 kubenswrapper[7553]: E0318 17:48:48.411319 7553 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Mar 18 17:48:48.411467 master-0 kubenswrapper[7553]: 
E0318 17:48:48.411438 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls podName:e0e04440-c08b-452d-9be6-9f70a4027c92 nodeName:}" failed. No retries permitted until 2026-03-18 17:49:04.411405427 +0000 UTC m=+434.557240300 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-xnx8x" (UID: "e0e04440-c08b-452d-9be6-9f70a4027c92") : secret "samples-operator-tls" not found
Mar 18 17:48:48.411467 master-0 kubenswrapper[7553]: I0318 17:48:48.411440 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"
Mar 18 17:48:48.411702 master-0 kubenswrapper[7553]: E0318 17:48:48.411684 7553 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found
Mar 18 17:48:48.411808 master-0 kubenswrapper[7553]: E0318 17:48:48.411796 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert podName:04cef0bd-f365-4bf6-864a-1895995015d6 nodeName:}" failed. No retries permitted until 2026-03-18 17:49:04.411780687 +0000 UTC m=+434.557615360 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-djgn7" (UID: "04cef0bd-f365-4bf6-864a-1895995015d6") : secret "cloud-credential-operator-serving-cert" not found
Mar 18 17:48:48.411927 master-0 kubenswrapper[7553]: I0318 17:48:48.411811 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"
Mar 18 17:48:48.412062 master-0 kubenswrapper[7553]: E0318 17:48:48.412015 7553 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found
Mar 18 17:48:48.412327 master-0 kubenswrapper[7553]: I0318 17:48:48.412036 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:48:48.412327 master-0 kubenswrapper[7553]: E0318 17:48:48.412131 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls podName:de189d27-4c60-49f1-9119-d1fde5c37b1e nodeName:}" failed. No retries permitted until 2026-03-18 17:49:04.412101636 +0000 UTC m=+434.557936419 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-zdqtc" (UID: "de189d27-4c60-49f1-9119-d1fde5c37b1e") : secret "control-plane-machine-set-operator-tls" not found
Mar 18 17:48:48.412327 master-0 kubenswrapper[7553]: I0318 17:48:48.412169 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"
Mar 18 17:48:48.412327 master-0 kubenswrapper[7553]: E0318 17:48:48.412255 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Mar 18 17:48:48.412327 master-0 kubenswrapper[7553]: E0318 17:48:48.412307 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert podName:a94f7bff-ad61-4c53-a8eb-000a13f26971 nodeName:}" failed. No retries permitted until 2026-03-18 17:49:04.412298692 +0000 UTC m=+434.558133365 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert") pod "cluster-autoscaler-operator-866dc4744-l6hpt" (UID: "a94f7bff-ad61-4c53-a8eb-000a13f26971") : secret "cluster-autoscaler-operator-cert" not found
Mar 18 17:48:48.412554 master-0 kubenswrapper[7553]: E0318 17:48:48.412540 7553 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found
Mar 18 17:48:48.412640 master-0 kubenswrapper[7553]: E0318 17:48:48.412630 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. No retries permitted until 2026-03-18 17:49:04.412620441 +0000 UTC m=+434.558455114 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : secret "machine-approver-tls" not found
Mar 18 17:48:48.513012 master-0 kubenswrapper[7553]: I0318 17:48:48.512945 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p"
Mar 18 17:48:48.513360 master-0 kubenswrapper[7553]: E0318 17:48:48.513253 7553 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found
Mar 18 17:48:48.513486 master-0 kubenswrapper[7553]: E0318 17:48:48.513451 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. No retries permitted until 2026-03-18 17:49:04.513412907 +0000 UTC m=+434.659247610 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : secret "machine-api-operator-tls" not found
Mar 18 17:48:48.536136 master-0 kubenswrapper[7553]: I0318 17:48:48.536071 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log"
Mar 18 17:48:48.741096 master-0 kubenswrapper[7553]: I0318 17:48:48.740741 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_08451d5b-cf84-45a1-a16d-7ce10a83a6e7/installer/0.log"
Mar 18 17:48:48.853700 master-0 kubenswrapper[7553]: I0318 17:48:48.853606 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mpmxb" event={"ID":"b3385316-45f0-46c5-ac82-683168db5878","Type":"ContainerStarted","Data":"6d4f1b131150a4aed3f2741c7d0708a0570b1762a135b18bb86cd045e410b968"}
Mar 18 17:48:48.853700 master-0 kubenswrapper[7553]: I0318 17:48:48.853707 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-mpmxb" event={"ID":"b3385316-45f0-46c5-ac82-683168db5878","Type":"ContainerStarted","Data":"01d8f1f738d166015accb45a5a875b9da0577b0908a968320b9793f9dbe962a2"}
Mar 18 17:48:48.944213 master-0 kubenswrapper[7553]: I0318 17:48:48.944142 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-p72m2_26575d68-0488-4dfa-a5d0-5016e481dba6/kube-apiserver-operator/2.log"
Mar 18 17:48:48.998516 master-0 kubenswrapper[7553]: I0318 17:48:48.995646 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-mpmxb" podStartSLOduration=1.99561627 podStartE2EDuration="1.99561627s" podCreationTimestamp="2026-03-18 17:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:48:48.89538706 +0000 UTC m=+419.041221793" watchObservedRunningTime="2026-03-18 17:48:48.99561627 +0000 UTC m=+419.141450963"
Mar 18 17:48:48.998516 master-0 kubenswrapper[7553]: I0318 17:48:48.998465 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"]
Mar 18 17:48:49.001079 master-0 kubenswrapper[7553]: I0318 17:48:49.001047 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:48:49.003893 master-0 kubenswrapper[7553]: I0318 17:48:49.003845 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-kcjlz"
Mar 18 17:48:49.003893 master-0 kubenswrapper[7553]: I0318 17:48:49.003861 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 18 17:48:49.003893 master-0 kubenswrapper[7553]: I0318 17:48:49.003877 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 18 17:48:49.004092 master-0 kubenswrapper[7553]: I0318 17:48:49.003855 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 18 17:48:49.018120 master-0 kubenswrapper[7553]: I0318 17:48:49.017838 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"]
Mar 18 17:48:49.102472 master-0 kubenswrapper[7553]: I0318 17:48:49.102337 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:48:49.102472 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:48:49.102472 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:48:49.102472 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:48:49.102774 master-0 kubenswrapper[7553]: I0318 17:48:49.102529 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:48:49.122080 master-0 kubenswrapper[7553]: I0318 17:48:49.122014 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:48:49.122265 master-0 kubenswrapper[7553]: I0318 17:48:49.122117 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:48:49.122265 master-0 kubenswrapper[7553]: I0318 17:48:49.122156 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:48:49.122265 master-0 kubenswrapper[7553]: I0318 17:48:49.122180 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbdth\" (UniqueName: \"kubernetes.io/projected/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-kube-api-access-qbdth\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:48:49.137166 master-0 kubenswrapper[7553]: I0318 17:48:49.137062 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-p72m2_26575d68-0488-4dfa-a5d0-5016e481dba6/kube-apiserver-operator/3.log"
Mar 18 17:48:49.223226 master-0 kubenswrapper[7553]: I0318 17:48:49.223125 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:48:49.223645 master-0 kubenswrapper[7553]: I0318 17:48:49.223252 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:48:49.223645 master-0 kubenswrapper[7553]: E0318 17:48:49.223311 7553 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 18 17:48:49.223645 master-0 kubenswrapper[7553]: I0318 17:48:49.223333 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:48:49.223645 master-0 kubenswrapper[7553]: I0318 17:48:49.223362 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbdth\" (UniqueName: \"kubernetes.io/projected/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-kube-api-access-qbdth\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:48:49.223645 master-0 kubenswrapper[7553]: E0318 17:48:49.223378 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:49.723358998 +0000 UTC m=+419.869193671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : secret "prometheus-operator-tls" not found
Mar 18 17:48:49.225114 master-0 kubenswrapper[7553]: I0318 17:48:49.225077 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:48:49.228747 master-0 kubenswrapper[7553]: I0318 17:48:49.228712 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:48:49.253138 master-0 kubenswrapper[7553]: I0318 17:48:49.252976 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbdth\" (UniqueName: \"kubernetes.io/projected/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-kube-api-access-qbdth\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:48:49.336833 master-0 kubenswrapper[7553]: I0318 17:48:49.336766 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/setup/0.log"
Mar 18 17:48:49.546224 master-0 kubenswrapper[7553]: I0318 17:48:49.546064 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver/0.log"
Mar 18 17:48:49.731439 master-0 kubenswrapper[7553]: I0318 17:48:49.731331 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:48:49.731661 master-0 kubenswrapper[7553]: E0318 17:48:49.731587 7553 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 18 17:48:49.731747 master-0 kubenswrapper[7553]: E0318 17:48:49.731700 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:50.731677142 +0000 UTC m=+420.877511815 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : secret "prometheus-operator-tls" not found
Mar 18 17:48:49.742211 master-0 kubenswrapper[7553]: I0318 17:48:49.742098 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver-insecure-readyz/0.log"
Mar 18 17:48:50.103248 master-0 kubenswrapper[7553]: I0318 17:48:50.103200 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:48:50.103248 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:48:50.103248 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:48:50.103248 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:48:50.104438 master-0 kubenswrapper[7553]: I0318 17:48:50.103305 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:48:50.220412 master-0 kubenswrapper[7553]: I0318 17:48:50.220331 7553 scope.go:117] "RemoveContainer" containerID="e0ce789b272d7ec4bd7aac94ac37ecdd2765bd0434e740bbb25752a48e70911e"
Mar 18 17:48:50.277265 master-0 kubenswrapper[7553]: I0318 17:48:50.277210 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_41191498-89c5-44dc-b648-dbea889c72f5/installer/0.log"
Mar 18 17:48:50.321350 master-0 kubenswrapper[7553]: I0318 17:48:50.319825 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_37bbec19-22b8-411c-901b-d89c92b0bd4d/installer/0.log"
Mar 18 17:48:50.346330 master-0 kubenswrapper[7553]: I0318 17:48:50.346145 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log"
Mar 18 17:48:50.548689 master-0 kubenswrapper[7553]: I0318 17:48:50.548487 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/0.log"
Mar 18 17:48:50.740247 master-0 kubenswrapper[7553]: I0318 17:48:50.740075 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager-cert-syncer/0.log"
Mar 18 17:48:50.756234 master-0 kubenswrapper[7553]: I0318 17:48:50.756138 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:48:50.756629 master-0 kubenswrapper[7553]: E0318 17:48:50.756394 7553 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 18 17:48:50.756629 master-0 kubenswrapper[7553]: E0318 17:48:50.756521 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:52.756484363 +0000 UTC m=+422.902319206 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : secret "prometheus-operator-tls" not found
Mar 18 17:48:50.939205 master-0 kubenswrapper[7553]: I0318 17:48:50.939145 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager-recovery-controller/0.log"
Mar 18 17:48:51.102833 master-0 kubenswrapper[7553]: I0318 17:48:51.102778 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:48:51.102833 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:48:51.102833 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:48:51.102833 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:48:51.103549 master-0 kubenswrapper[7553]: I0318 17:48:51.103500 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:48:51.137650 master-0 kubenswrapper[7553]: I0318 17:48:51.137599 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-qk279_9b424d6c-7440-4c98-ac19-2d0642c696fd/kube-controller-manager-operator/2.log"
Mar 18 17:48:51.340769 master-0 kubenswrapper[7553]: I0318 17:48:51.340713 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-qk279_9b424d6c-7440-4c98-ac19-2d0642c696fd/kube-controller-manager-operator/3.log"
Mar 18 17:48:51.551202 master-0 kubenswrapper[7553]: I0318 17:48:51.551024 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_c83737980b9ee109184b1d78e942cf36/kube-scheduler/0.log"
Mar 18 17:48:51.745905 master-0 kubenswrapper[7553]: I0318 17:48:51.745814 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_c83737980b9ee109184b1d78e942cf36/kube-scheduler/1.log"
Mar 18 17:48:51.941265 master-0 kubenswrapper[7553]: I0318 17:48:51.940941 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_1a709ef9-91c0-4193-acb4-0594d02f554c/installer/0.log"
Mar 18 17:48:52.103084 master-0 kubenswrapper[7553]: I0318 17:48:52.102998 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:48:52.103084 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:48:52.103084 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:48:52.103084 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:48:52.104046 master-0 kubenswrapper[7553]: I0318 17:48:52.103122 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:48:52.410641 master-0 kubenswrapper[7553]: I0318 17:48:52.410518 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-wlfj4_3a3a6c2c-78e7-41f3-acff-20173cbc012a/kube-scheduler-operator-container/1.log"
Mar 18 17:48:52.502752 master-0 kubenswrapper[7553]: I0318 17:48:52.502084 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-wlfj4_3a3a6c2c-78e7-41f3-acff-20173cbc012a/kube-scheduler-operator-container/2.log"
Mar 18 17:48:52.541453 master-0 kubenswrapper[7553]: I0318 17:48:52.541241 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-t266j_0b9ff55a-73fb-473f-b406-1f8b6cffdb89/openshift-apiserver-operator/1.log"
Mar 18 17:48:52.763781 master-0 kubenswrapper[7553]: I0318 17:48:52.763354 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-t266j_0b9ff55a-73fb-473f-b406-1f8b6cffdb89/openshift-apiserver-operator/2.log"
Mar 18 17:48:52.789045 master-0 kubenswrapper[7553]: I0318 17:48:52.788925 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:48:52.792338 master-0 kubenswrapper[7553]: E0318 17:48:52.789906 7553 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 18 17:48:52.792338 master-0 kubenswrapper[7553]: E0318 17:48:52.791568 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 17:48:56.791517633 +0000 UTC m=+426.937352346 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : secret "prometheus-operator-tls" not found
Mar 18 17:48:52.938303 master-0 kubenswrapper[7553]: I0318 17:48:52.938214 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-897b458c6-vsss9_30d77a7c-222e-41c7-8a98-219854aa3da2/fix-audit-permissions/0.log"
Mar 18 17:48:53.053665 master-0 kubenswrapper[7553]: I0318 17:48:53.053604 7553 scope.go:117] "RemoveContainer" containerID="b7bb5390f984301665f3ca607ecedbc67713a42573a17188652c2b439e42a0e2"
Mar 18 17:48:53.100075 master-0 kubenswrapper[7553]: I0318 17:48:53.099477 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4"
Mar 18 17:48:53.103779 master-0 kubenswrapper[7553]: I0318 17:48:53.103739 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:48:53.103779 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:48:53.103779 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:48:53.103779 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:48:53.104246 master-0 kubenswrapper[7553]: I0318 17:48:53.103802 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:48:53.139451 master-0 kubenswrapper[7553]: I0318 17:48:53.139403 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-897b458c6-vsss9_30d77a7c-222e-41c7-8a98-219854aa3da2/openshift-apiserver/0.log"
Mar 18 17:48:53.342166 master-0 kubenswrapper[7553]: I0318 17:48:53.342085 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-897b458c6-vsss9_30d77a7c-222e-41c7-8a98-219854aa3da2/openshift-apiserver-check-endpoints/0.log"
Mar 18 17:48:53.538232 master-0 kubenswrapper[7553]: I0318 17:48:53.538154 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-rws9x_0100a259-1358-45e8-8191-4e1f9a14ec89/etcd-operator/2.log"
Mar 18 17:48:53.740104 master-0 kubenswrapper[7553]: I0318 17:48:53.739851 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-rws9x_0100a259-1358-45e8-8191-4e1f9a14ec89/etcd-operator/3.log"
Mar 18 17:48:53.896891 master-0 kubenswrapper[7553]: I0318 17:48:53.896835 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv_656ac493-a769-4c15-9356-2050c4b9c8d8/kube-rbac-proxy/2.log"
Mar 18 17:48:53.897913 master-0 kubenswrapper[7553]: I0318 17:48:53.897854 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv_656ac493-a769-4c15-9356-2050c4b9c8d8/kube-rbac-proxy/1.log"
Mar 18 17:48:53.898936 master-0 kubenswrapper[7553]: I0318 17:48:53.898873 7553 generic.go:334] "Generic (PLEG): container finished" podID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerID="703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f" exitCode=1
Mar 18 17:48:53.899060 master-0 kubenswrapper[7553]: I0318 17:48:53.898940 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" event={"ID":"656ac493-a769-4c15-9356-2050c4b9c8d8","Type":"ContainerDied","Data":"703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f"}
Mar 18 17:48:53.899060 master-0 kubenswrapper[7553]: I0318 17:48:53.899002 7553 scope.go:117] "RemoveContainer" containerID="b7bb5390f984301665f3ca607ecedbc67713a42573a17188652c2b439e42a0e2"
Mar 18 17:48:53.900425 master-0 kubenswrapper[7553]: I0318 17:48:53.900370 7553 scope.go:117] "RemoveContainer" containerID="703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f"
Mar 18 17:48:53.900839 master-0 kubenswrapper[7553]: E0318 17:48:53.900763 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv_openshift-cloud-controller-manager-operator(656ac493-a769-4c15-9356-2050c4b9c8d8)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8"
Mar 18 17:48:53.944705 master-0 kubenswrapper[7553]: I0318 17:48:53.944627 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-qpgfz_e9e04572-1425-440e-9869-6deef05e13e3/catalog-operator/0.log"
Mar 18 17:48:54.102256 master-0 kubenswrapper[7553]: I0318 17:48:54.102151 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:48:54.102256 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:48:54.102256 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:48:54.102256 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:48:54.102735 master-0 kubenswrapper[7553]: I0318 17:48:54.102298 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:48:54.147849 master-0 kubenswrapper[7553]: I0318 17:48:54.147762 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-5c9796789-6hngr_e73f2834-c56c-4cef-ac3c-2317e9a4324c/olm-operator/0.log"
Mar 18 17:48:54.544630 master-0 kubenswrapper[7553]: I0318 17:48:54.544269 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-6qqz4_d26d4515-391e-41a5-8c82-1b2b8a375662/package-server-manager/0.log"
Mar 18 17:48:54.736673 master-0 kubenswrapper[7553]: I0318 17:48:54.736603 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-6qqz4_d26d4515-391e-41a5-8c82-1b2b8a375662/kube-rbac-proxy/0.log"
Mar 18 17:48:54.909306 master-0 kubenswrapper[7553]: I0318 17:48:54.909053 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv_656ac493-a769-4c15-9356-2050c4b9c8d8/kube-rbac-proxy/2.log"
Mar 18 17:48:54.942184 master-0 kubenswrapper[7553]: I0318 17:48:54.941598 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-6qqz4_d26d4515-391e-41a5-8c82-1b2b8a375662/package-server-manager/1.log"
Mar 18 17:48:55.102768 master-0 kubenswrapper[7553]: I0318 17:48:55.102708 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:48:55.102768 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:48:55.102768 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:48:55.102768 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:48:55.103356 master-0 kubenswrapper[7553]: I0318 17:48:55.103257 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:48:55.147696 master-0 kubenswrapper[7553]: I0318 17:48:55.146964 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-b8b994c95-kglwt_8db04037-c7cc-4246-92c3-6e7985384b14/packageserver/0.log"
Mar 18 17:48:55.220814 master-0 kubenswrapper[7553]: I0318 17:48:55.219266 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"]
Mar 18 17:48:55.220814 master-0 kubenswrapper[7553]: I0318 17:48:55.219670 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerName="cluster-cloud-controller-manager" containerID="cri-o://fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290" gracePeriod=30
Mar 18 17:48:55.220814 master-0 kubenswrapper[7553]: I0318 17:48:55.219766 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerName="config-sync-controllers" containerID="cri-o://d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751" gracePeriod=30
Mar 18 17:48:55.393164 master-0 kubenswrapper[7553]: I0318 17:48:55.393114 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv_656ac493-a769-4c15-9356-2050c4b9c8d8/kube-rbac-proxy/2.log"
Mar 18 17:48:55.394168 master-0 kubenswrapper[7553]: I0318 17:48:55.394138 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"
Mar 18 17:48:55.443492 master-0 kubenswrapper[7553]: I0318 17:48:55.443427 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/656ac493-a769-4c15-9356-2050c4b9c8d8-cloud-controller-manager-operator-tls\") pod \"656ac493-a769-4c15-9356-2050c4b9c8d8\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") "
Mar 18 17:48:55.443492 master-0 kubenswrapper[7553]: I0318 17:48:55.443501 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/656ac493-a769-4c15-9356-2050c4b9c8d8-host-etc-kube\") pod \"656ac493-a769-4c15-9356-2050c4b9c8d8\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") "
Mar 18 17:48:55.443810 master-0 kubenswrapper[7553]: I0318 17:48:55.443540 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/656ac493-a769-4c15-9356-2050c4b9c8d8-images\") pod \"656ac493-a769-4c15-9356-2050c4b9c8d8\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") "
Mar 18 17:48:55.443810 master-0 kubenswrapper[7553]: I0318 17:48:55.443566 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqgm8\" (UniqueName:
\"kubernetes.io/projected/656ac493-a769-4c15-9356-2050c4b9c8d8-kube-api-access-pqgm8\") pod \"656ac493-a769-4c15-9356-2050c4b9c8d8\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " Mar 18 17:48:55.443810 master-0 kubenswrapper[7553]: I0318 17:48:55.443690 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/656ac493-a769-4c15-9356-2050c4b9c8d8-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "656ac493-a769-4c15-9356-2050c4b9c8d8" (UID: "656ac493-a769-4c15-9356-2050c4b9c8d8"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:48:55.444713 master-0 kubenswrapper[7553]: I0318 17:48:55.444047 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/656ac493-a769-4c15-9356-2050c4b9c8d8-images" (OuterVolumeSpecName: "images") pod "656ac493-a769-4c15-9356-2050c4b9c8d8" (UID: "656ac493-a769-4c15-9356-2050c4b9c8d8"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:48:55.444713 master-0 kubenswrapper[7553]: I0318 17:48:55.444181 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/656ac493-a769-4c15-9356-2050c4b9c8d8-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "656ac493-a769-4c15-9356-2050c4b9c8d8" (UID: "656ac493-a769-4c15-9356-2050c4b9c8d8"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:48:55.444713 master-0 kubenswrapper[7553]: I0318 17:48:55.443843 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/656ac493-a769-4c15-9356-2050c4b9c8d8-auth-proxy-config\") pod \"656ac493-a769-4c15-9356-2050c4b9c8d8\" (UID: \"656ac493-a769-4c15-9356-2050c4b9c8d8\") " Mar 18 17:48:55.445242 master-0 kubenswrapper[7553]: I0318 17:48:55.445219 7553 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/656ac493-a769-4c15-9356-2050c4b9c8d8-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 18 17:48:55.445316 master-0 kubenswrapper[7553]: I0318 17:48:55.445244 7553 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/656ac493-a769-4c15-9356-2050c4b9c8d8-images\") on node \"master-0\" DevicePath \"\"" Mar 18 17:48:55.445316 master-0 kubenswrapper[7553]: I0318 17:48:55.445256 7553 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/656ac493-a769-4c15-9356-2050c4b9c8d8-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 18 17:48:55.450321 master-0 kubenswrapper[7553]: I0318 17:48:55.450225 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/656ac493-a769-4c15-9356-2050c4b9c8d8-kube-api-access-pqgm8" (OuterVolumeSpecName: "kube-api-access-pqgm8") pod "656ac493-a769-4c15-9356-2050c4b9c8d8" (UID: "656ac493-a769-4c15-9356-2050c4b9c8d8"). InnerVolumeSpecName "kube-api-access-pqgm8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:48:55.451673 master-0 kubenswrapper[7553]: I0318 17:48:55.451610 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/656ac493-a769-4c15-9356-2050c4b9c8d8-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "656ac493-a769-4c15-9356-2050c4b9c8d8" (UID: "656ac493-a769-4c15-9356-2050c4b9c8d8"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 17:48:55.546501 master-0 kubenswrapper[7553]: I0318 17:48:55.546358 7553 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/656ac493-a769-4c15-9356-2050c4b9c8d8-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 17:48:55.546501 master-0 kubenswrapper[7553]: I0318 17:48:55.546395 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqgm8\" (UniqueName: \"kubernetes.io/projected/656ac493-a769-4c15-9356-2050c4b9c8d8-kube-api-access-pqgm8\") on node \"master-0\" DevicePath \"\"" Mar 18 17:48:55.925733 master-0 kubenswrapper[7553]: I0318 17:48:55.925634 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv_656ac493-a769-4c15-9356-2050c4b9c8d8/kube-rbac-proxy/2.log" Mar 18 17:48:55.927279 master-0 kubenswrapper[7553]: I0318 17:48:55.927213 7553 generic.go:334] "Generic (PLEG): container finished" podID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerID="d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751" exitCode=0 Mar 18 17:48:55.927341 master-0 kubenswrapper[7553]: I0318 17:48:55.927261 7553 generic.go:334] "Generic (PLEG): container finished" podID="656ac493-a769-4c15-9356-2050c4b9c8d8" 
containerID="fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290" exitCode=0 Mar 18 17:48:55.927377 master-0 kubenswrapper[7553]: I0318 17:48:55.927331 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" event={"ID":"656ac493-a769-4c15-9356-2050c4b9c8d8","Type":"ContainerDied","Data":"d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751"} Mar 18 17:48:55.927412 master-0 kubenswrapper[7553]: I0318 17:48:55.927395 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" event={"ID":"656ac493-a769-4c15-9356-2050c4b9c8d8","Type":"ContainerDied","Data":"fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290"} Mar 18 17:48:55.927450 master-0 kubenswrapper[7553]: I0318 17:48:55.927406 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" Mar 18 17:48:55.927450 master-0 kubenswrapper[7553]: I0318 17:48:55.927439 7553 scope.go:117] "RemoveContainer" containerID="703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f" Mar 18 17:48:55.927643 master-0 kubenswrapper[7553]: I0318 17:48:55.927419 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv" event={"ID":"656ac493-a769-4c15-9356-2050c4b9c8d8","Type":"ContainerDied","Data":"3207043a8dbcd1d67e3d3199c155f8c1aa1ba06f12de9e1d173f2f7d7639c727"} Mar 18 17:48:55.953398 master-0 kubenswrapper[7553]: I0318 17:48:55.953336 7553 scope.go:117] "RemoveContainer" containerID="d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751" Mar 18 17:48:55.973221 master-0 kubenswrapper[7553]: I0318 17:48:55.970864 7553 scope.go:117] 
"RemoveContainer" containerID="fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290" Mar 18 17:48:55.989860 master-0 kubenswrapper[7553]: I0318 17:48:55.989760 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"] Mar 18 17:48:55.994614 master-0 kubenswrapper[7553]: I0318 17:48:55.994192 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-9hlvv"] Mar 18 17:48:56.001259 master-0 kubenswrapper[7553]: I0318 17:48:56.001185 7553 scope.go:117] "RemoveContainer" containerID="703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f" Mar 18 17:48:56.002064 master-0 kubenswrapper[7553]: E0318 17:48:56.001962 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f\": container with ID starting with 703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f not found: ID does not exist" containerID="703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f" Mar 18 17:48:56.002227 master-0 kubenswrapper[7553]: I0318 17:48:56.002126 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f"} err="failed to get container status \"703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f\": rpc error: code = NotFound desc = could not find container \"703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f\": container with ID starting with 703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f not found: ID does not exist" Mar 18 17:48:56.002343 master-0 kubenswrapper[7553]: I0318 17:48:56.002232 7553 scope.go:117] "RemoveContainer" 
containerID="d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751" Mar 18 17:48:56.003151 master-0 kubenswrapper[7553]: E0318 17:48:56.003098 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751\": container with ID starting with d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751 not found: ID does not exist" containerID="d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751" Mar 18 17:48:56.003212 master-0 kubenswrapper[7553]: I0318 17:48:56.003149 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751"} err="failed to get container status \"d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751\": rpc error: code = NotFound desc = could not find container \"d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751\": container with ID starting with d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751 not found: ID does not exist" Mar 18 17:48:56.003212 master-0 kubenswrapper[7553]: I0318 17:48:56.003179 7553 scope.go:117] "RemoveContainer" containerID="fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290" Mar 18 17:48:56.005946 master-0 kubenswrapper[7553]: E0318 17:48:56.003541 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290\": container with ID starting with fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290 not found: ID does not exist" containerID="fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290" Mar 18 17:48:56.005946 master-0 kubenswrapper[7553]: I0318 17:48:56.003581 7553 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290"} err="failed to get container status \"fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290\": rpc error: code = NotFound desc = could not find container \"fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290\": container with ID starting with fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290 not found: ID does not exist" Mar 18 17:48:56.005946 master-0 kubenswrapper[7553]: I0318 17:48:56.003608 7553 scope.go:117] "RemoveContainer" containerID="703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f" Mar 18 17:48:56.005946 master-0 kubenswrapper[7553]: I0318 17:48:56.003876 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f"} err="failed to get container status \"703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f\": rpc error: code = NotFound desc = could not find container \"703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f\": container with ID starting with 703ec054323901d37cccb1bc7c38b8e2c66c02264969127e70398a5065bff28f not found: ID does not exist" Mar 18 17:48:56.005946 master-0 kubenswrapper[7553]: I0318 17:48:56.003903 7553 scope.go:117] "RemoveContainer" containerID="d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751" Mar 18 17:48:56.005946 master-0 kubenswrapper[7553]: I0318 17:48:56.004261 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751"} err="failed to get container status \"d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751\": rpc error: code = NotFound desc = could not find container \"d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751\": container with ID starting with 
d175c66095a977d859bc6ffd65e06bd2765140fd5419d2e653bb2a5514e62751 not found: ID does not exist" Mar 18 17:48:56.005946 master-0 kubenswrapper[7553]: I0318 17:48:56.004344 7553 scope.go:117] "RemoveContainer" containerID="fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290" Mar 18 17:48:56.005946 master-0 kubenswrapper[7553]: I0318 17:48:56.004751 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290"} err="failed to get container status \"fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290\": rpc error: code = NotFound desc = could not find container \"fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290\": container with ID starting with fa5bb0ca85d70bebc829285cc630fe546dd52e60d8679fbe25c4975938d92290 not found: ID does not exist" Mar 18 17:48:56.032207 master-0 kubenswrapper[7553]: I0318 17:48:56.032126 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl"] Mar 18 17:48:56.033052 master-0 kubenswrapper[7553]: E0318 17:48:56.032996 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerName="kube-rbac-proxy" Mar 18 17:48:56.033128 master-0 kubenswrapper[7553]: I0318 17:48:56.033077 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerName="kube-rbac-proxy" Mar 18 17:48:56.033197 master-0 kubenswrapper[7553]: E0318 17:48:56.033162 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerName="cluster-cloud-controller-manager" Mar 18 17:48:56.033197 master-0 kubenswrapper[7553]: I0318 17:48:56.033179 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" 
containerName="cluster-cloud-controller-manager" Mar 18 17:48:56.033319 master-0 kubenswrapper[7553]: E0318 17:48:56.033252 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerName="kube-rbac-proxy" Mar 18 17:48:56.033373 master-0 kubenswrapper[7553]: I0318 17:48:56.033267 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerName="kube-rbac-proxy" Mar 18 17:48:56.033373 master-0 kubenswrapper[7553]: E0318 17:48:56.033344 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerName="config-sync-controllers" Mar 18 17:48:56.033373 master-0 kubenswrapper[7553]: I0318 17:48:56.033359 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerName="config-sync-controllers" Mar 18 17:48:56.033817 master-0 kubenswrapper[7553]: I0318 17:48:56.033765 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerName="kube-rbac-proxy" Mar 18 17:48:56.033882 master-0 kubenswrapper[7553]: I0318 17:48:56.033845 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerName="config-sync-controllers" Mar 18 17:48:56.033882 master-0 kubenswrapper[7553]: I0318 17:48:56.033876 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerName="cluster-cloud-controller-manager" Mar 18 17:48:56.033970 master-0 kubenswrapper[7553]: I0318 17:48:56.033947 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerName="kube-rbac-proxy" Mar 18 17:48:56.034360 master-0 kubenswrapper[7553]: E0318 17:48:56.034316 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" 
containerName="kube-rbac-proxy" Mar 18 17:48:56.034360 master-0 kubenswrapper[7553]: I0318 17:48:56.034348 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerName="kube-rbac-proxy" Mar 18 17:48:56.034820 master-0 kubenswrapper[7553]: I0318 17:48:56.034730 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" containerName="kube-rbac-proxy" Mar 18 17:48:56.036718 master-0 kubenswrapper[7553]: I0318 17:48:56.036679 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.040717 master-0 kubenswrapper[7553]: I0318 17:48:56.040679 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 17:48:56.041108 master-0 kubenswrapper[7553]: I0318 17:48:56.041095 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 17:48:56.041221 master-0 kubenswrapper[7553]: I0318 17:48:56.041140 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 17:48:56.041675 master-0 kubenswrapper[7553]: I0318 17:48:56.041620 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-2mk4r" Mar 18 17:48:56.041753 master-0 kubenswrapper[7553]: I0318 17:48:56.041675 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 17:48:56.043361 master-0 kubenswrapper[7553]: I0318 17:48:56.042583 7553 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 17:48:56.055562 master-0 kubenswrapper[7553]: I0318 17:48:56.055512 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.055911 master-0 kubenswrapper[7553]: I0318 17:48:56.055886 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.056092 master-0 kubenswrapper[7553]: I0318 17:48:56.056071 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.056263 master-0 kubenswrapper[7553]: I0318 17:48:56.056244 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njx6n\" (UniqueName: \"kubernetes.io/projected/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-kube-api-access-njx6n\") pod 
\"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.056491 master-0 kubenswrapper[7553]: I0318 17:48:56.056465 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.077528 master-0 kubenswrapper[7553]: I0318 17:48:56.077472 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="656ac493-a769-4c15-9356-2050c4b9c8d8" path="/var/lib/kubelet/pods/656ac493-a769-4c15-9356-2050c4b9c8d8/volumes" Mar 18 17:48:56.102343 master-0 kubenswrapper[7553]: I0318 17:48:56.102235 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:48:56.102343 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:48:56.102343 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:48:56.102343 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:48:56.102637 master-0 kubenswrapper[7553]: I0318 17:48:56.102336 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:48:56.157576 master-0 kubenswrapper[7553]: I0318 17:48:56.157514 7553 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.157783 master-0 kubenswrapper[7553]: I0318 17:48:56.157741 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njx6n\" (UniqueName: \"kubernetes.io/projected/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-kube-api-access-njx6n\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.157874 master-0 kubenswrapper[7553]: I0318 17:48:56.157853 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.157928 master-0 kubenswrapper[7553]: I0318 17:48:56.157911 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.157969 master-0 kubenswrapper[7553]: I0318 
17:48:56.157939 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.158262 master-0 kubenswrapper[7553]: I0318 17:48:56.158202 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.158755 master-0 kubenswrapper[7553]: I0318 17:48:56.158721 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.158936 master-0 kubenswrapper[7553]: I0318 17:48:56.158896 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.161397 master-0 kubenswrapper[7553]: I0318 17:48:56.161375 7553 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.178508 master-0 kubenswrapper[7553]: I0318 17:48:56.178428 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njx6n\" (UniqueName: \"kubernetes.io/projected/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-kube-api-access-njx6n\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.368817 master-0 kubenswrapper[7553]: I0318 17:48:56.368728 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 17:48:56.868197 master-0 kubenswrapper[7553]: I0318 17:48:56.868137 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 17:48:56.868441 master-0 kubenswrapper[7553]: E0318 17:48:56.868388 7553 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Mar 18 17:48:56.868524 master-0 kubenswrapper[7553]: E0318 17:48:56.868447 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 17:49:04.868429767 +0000 UTC m=+435.014264440 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : secret "prometheus-operator-tls" not found Mar 18 17:48:56.938707 master-0 kubenswrapper[7553]: I0318 17:48:56.938562 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerStarted","Data":"a81203ae354d597c88c3b98386e062196ad2d6278f0f6ad5fc4ad9c4b04a9ff2"} Mar 18 17:48:56.938707 master-0 kubenswrapper[7553]: I0318 17:48:56.938629 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerStarted","Data":"19f22c241321c089522b514fbfd3f5b1ec6df250184c4997e1e9c0766f09796c"} Mar 18 17:48:56.938707 master-0 kubenswrapper[7553]: I0318 17:48:56.938645 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerStarted","Data":"a9a9d675b5bc654d44d972fe5be99d008e180b13cd245216bdc5bd95af4fe020"} Mar 18 17:48:57.110147 master-0 kubenswrapper[7553]: I0318 17:48:57.110054 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:48:57.110147 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:48:57.110147 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:48:57.110147 master-0 
kubenswrapper[7553]: healthz check failed Mar 18 17:48:57.110714 master-0 kubenswrapper[7553]: I0318 17:48:57.110163 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:48:57.962488 master-0 kubenswrapper[7553]: I0318 17:48:57.962384 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/0.log" Mar 18 17:48:57.963940 master-0 kubenswrapper[7553]: I0318 17:48:57.963858 7553 generic.go:334] "Generic (PLEG): container finished" podID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" containerID="636b2985bc6c06ce415a4fd566ad1e6159f703b4ad9fce51afedd39ec30b7e04" exitCode=1 Mar 18 17:48:57.964096 master-0 kubenswrapper[7553]: I0318 17:48:57.963941 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerDied","Data":"636b2985bc6c06ce415a4fd566ad1e6159f703b4ad9fce51afedd39ec30b7e04"} Mar 18 17:48:57.964941 master-0 kubenswrapper[7553]: I0318 17:48:57.964872 7553 scope.go:117] "RemoveContainer" containerID="636b2985bc6c06ce415a4fd566ad1e6159f703b4ad9fce51afedd39ec30b7e04" Mar 18 17:48:58.103480 master-0 kubenswrapper[7553]: I0318 17:48:58.103393 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:48:58.103480 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:48:58.103480 master-0 kubenswrapper[7553]: 
[+]process-running ok Mar 18 17:48:58.103480 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:48:58.105681 master-0 kubenswrapper[7553]: I0318 17:48:58.103503 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:48:58.975566 master-0 kubenswrapper[7553]: I0318 17:48:58.975496 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/1.log" Mar 18 17:48:58.976486 master-0 kubenswrapper[7553]: I0318 17:48:58.976361 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/0.log" Mar 18 17:48:58.977447 master-0 kubenswrapper[7553]: I0318 17:48:58.977391 7553 generic.go:334] "Generic (PLEG): container finished" podID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" containerID="e453b81b137f5796ce8487f47777884fb5e361174f46a988fd7c6cf246bf19f4" exitCode=1 Mar 18 17:48:58.977660 master-0 kubenswrapper[7553]: I0318 17:48:58.977526 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerDied","Data":"e453b81b137f5796ce8487f47777884fb5e361174f46a988fd7c6cf246bf19f4"} Mar 18 17:48:58.977885 master-0 kubenswrapper[7553]: I0318 17:48:58.977856 7553 scope.go:117] "RemoveContainer" containerID="636b2985bc6c06ce415a4fd566ad1e6159f703b4ad9fce51afedd39ec30b7e04" Mar 18 17:48:58.978484 master-0 kubenswrapper[7553]: I0318 17:48:58.978442 7553 scope.go:117] "RemoveContainer" 
containerID="e453b81b137f5796ce8487f47777884fb5e361174f46a988fd7c6cf246bf19f4" Mar 18 17:48:58.978934 master-0 kubenswrapper[7553]: E0318 17:48:58.978847 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:48:59.103163 master-0 kubenswrapper[7553]: I0318 17:48:59.103110 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:48:59.103163 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:48:59.103163 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:48:59.103163 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:48:59.103685 master-0 kubenswrapper[7553]: I0318 17:48:59.103648 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:48:59.992161 master-0 kubenswrapper[7553]: I0318 17:48:59.992067 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/1.log" Mar 18 17:48:59.994631 master-0 kubenswrapper[7553]: I0318 17:48:59.994578 7553 scope.go:117] "RemoveContainer" 
containerID="e453b81b137f5796ce8487f47777884fb5e361174f46a988fd7c6cf246bf19f4" Mar 18 17:48:59.994960 master-0 kubenswrapper[7553]: E0318 17:48:59.994903 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:49:00.102706 master-0 kubenswrapper[7553]: I0318 17:49:00.102612 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:00.102706 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:00.102706 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:00.102706 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:00.102706 master-0 kubenswrapper[7553]: I0318 17:49:00.102702 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:01.103197 master-0 kubenswrapper[7553]: I0318 17:49:01.101726 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:01.103197 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:01.103197 master-0 
kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:01.103197 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:01.103197 master-0 kubenswrapper[7553]: I0318 17:49:01.101868 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:02.103434 master-0 kubenswrapper[7553]: I0318 17:49:02.103353 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:02.103434 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:02.103434 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:02.103434 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:02.104676 master-0 kubenswrapper[7553]: I0318 17:49:02.103461 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:03.101758 master-0 kubenswrapper[7553]: I0318 17:49:03.101700 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:03.101758 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:03.101758 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:03.101758 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:03.102070 master-0 kubenswrapper[7553]: I0318 17:49:03.101771 7553 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:04.102498 master-0 kubenswrapper[7553]: I0318 17:49:04.102416 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:04.102498 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:04.102498 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:04.102498 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:04.103354 master-0 kubenswrapper[7553]: I0318 17:49:04.102507 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:04.508601 master-0 kubenswrapper[7553]: I0318 17:49:04.508430 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 17:49:04.508601 master-0 kubenswrapper[7553]: I0318 17:49:04.508530 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " 
pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:49:04.508601 master-0 kubenswrapper[7553]: I0318 17:49:04.508582 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 17:49:04.508910 master-0 kubenswrapper[7553]: I0318 17:49:04.508631 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 17:49:04.508910 master-0 kubenswrapper[7553]: E0318 17:49:04.508669 7553 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Mar 18 17:49:04.508910 master-0 kubenswrapper[7553]: E0318 17:49:04.508793 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls podName:e0e04440-c08b-452d-9be6-9f70a4027c92 nodeName:}" failed. No retries permitted until 2026-03-18 17:49:36.508760344 +0000 UTC m=+466.654595047 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-xnx8x" (UID: "e0e04440-c08b-452d-9be6-9f70a4027c92") : secret "samples-operator-tls" not found Mar 18 17:49:04.508910 master-0 kubenswrapper[7553]: E0318 17:49:04.508798 7553 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Mar 18 17:49:04.508910 master-0 kubenswrapper[7553]: E0318 17:49:04.508807 7553 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 18 17:49:04.509113 master-0 kubenswrapper[7553]: E0318 17:49:04.508861 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls podName:de189d27-4c60-49f1-9119-d1fde5c37b1e nodeName:}" failed. No retries permitted until 2026-03-18 17:49:36.508842936 +0000 UTC m=+466.654677639 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-zdqtc" (UID: "de189d27-4c60-49f1-9119-d1fde5c37b1e") : secret "control-plane-machine-set-operator-tls" not found Mar 18 17:49:04.509113 master-0 kubenswrapper[7553]: E0318 17:49:04.508984 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert podName:04cef0bd-f365-4bf6-864a-1895995015d6 nodeName:}" failed. No retries permitted until 2026-03-18 17:49:36.508952169 +0000 UTC m=+466.654786982 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-djgn7" (UID: "04cef0bd-f365-4bf6-864a-1895995015d6") : secret "cloud-credential-operator-serving-cert" not found Mar 18 17:49:04.509113 master-0 kubenswrapper[7553]: E0318 17:49:04.509013 7553 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Mar 18 17:49:04.509113 master-0 kubenswrapper[7553]: I0318 17:49:04.509041 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:49:04.509113 master-0 kubenswrapper[7553]: E0318 17:49:04.509071 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. No retries permitted until 2026-03-18 17:49:36.509055422 +0000 UTC m=+466.654890285 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : secret "machine-approver-tls" not found Mar 18 17:49:04.509388 master-0 kubenswrapper[7553]: E0318 17:49:04.509165 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Mar 18 17:49:04.509388 master-0 kubenswrapper[7553]: E0318 17:49:04.509247 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert podName:a94f7bff-ad61-4c53-a8eb-000a13f26971 nodeName:}" failed. No retries permitted until 2026-03-18 17:49:36.509217106 +0000 UTC m=+466.655051789 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert") pod "cluster-autoscaler-operator-866dc4744-l6hpt" (UID: "a94f7bff-ad61-4c53-a8eb-000a13f26971") : secret "cluster-autoscaler-operator-cert" not found Mar 18 17:49:04.610377 master-0 kubenswrapper[7553]: I0318 17:49:04.610239 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:49:04.610702 master-0 kubenswrapper[7553]: E0318 17:49:04.610598 7553 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Mar 18 17:49:04.610807 master-0 kubenswrapper[7553]: E0318 17:49:04.610761 7553 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. No retries permitted until 2026-03-18 17:49:36.610718172 +0000 UTC m=+466.756553025 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : secret "machine-api-operator-tls" not found Mar 18 17:49:04.914932 master-0 kubenswrapper[7553]: I0318 17:49:04.914828 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 17:49:04.915255 master-0 kubenswrapper[7553]: E0318 17:49:04.915193 7553 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Mar 18 17:49:04.915389 master-0 kubenswrapper[7553]: E0318 17:49:04.915303 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 17:49:20.915249003 +0000 UTC m=+451.061083716 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : secret "prometheus-operator-tls" not found Mar 18 17:49:05.102589 master-0 kubenswrapper[7553]: I0318 17:49:05.102507 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:05.102589 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:05.102589 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:05.102589 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:05.103527 master-0 kubenswrapper[7553]: I0318 17:49:05.102610 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:06.103152 master-0 kubenswrapper[7553]: I0318 17:49:06.103072 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:06.103152 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:06.103152 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:06.103152 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:06.104345 master-0 kubenswrapper[7553]: I0318 17:49:06.103174 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:07.101838 master-0 kubenswrapper[7553]: I0318 17:49:07.101741 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:07.101838 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:07.101838 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:07.101838 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:07.102701 master-0 kubenswrapper[7553]: I0318 17:49:07.101848 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:08.103144 master-0 kubenswrapper[7553]: I0318 17:49:08.103041 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:08.103144 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:08.103144 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:08.103144 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:08.104168 master-0 kubenswrapper[7553]: I0318 17:49:08.103157 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:09.102892 master-0 kubenswrapper[7553]: I0318 17:49:09.102809 7553 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:09.102892 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:09.102892 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:09.102892 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:09.103644 master-0 kubenswrapper[7553]: I0318 17:49:09.102905 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:10.102856 master-0 kubenswrapper[7553]: I0318 17:49:10.102760 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:10.102856 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:10.102856 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:10.102856 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:10.103741 master-0 kubenswrapper[7553]: I0318 17:49:10.102869 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:11.102699 master-0 kubenswrapper[7553]: I0318 17:49:11.102618 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
17:49:11.102699 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:11.102699 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:11.102699 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:11.103152 master-0 kubenswrapper[7553]: I0318 17:49:11.102723 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:12.101961 master-0 kubenswrapper[7553]: I0318 17:49:12.101867 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:12.101961 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:12.101961 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:12.101961 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:12.103094 master-0 kubenswrapper[7553]: I0318 17:49:12.102012 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:13.101772 master-0 kubenswrapper[7553]: I0318 17:49:13.101700 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:13.101772 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:13.101772 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:13.101772 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:13.102679 master-0 kubenswrapper[7553]: I0318 17:49:13.101790 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:14.102890 master-0 kubenswrapper[7553]: I0318 17:49:14.102813 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:14.102890 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:14.102890 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:14.102890 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:14.103563 master-0 kubenswrapper[7553]: I0318 17:49:14.102918 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:15.053506 master-0 kubenswrapper[7553]: I0318 17:49:15.053355 7553 scope.go:117] "RemoveContainer" containerID="e453b81b137f5796ce8487f47777884fb5e361174f46a988fd7c6cf246bf19f4"
Mar 18 17:49:15.102762 master-0 kubenswrapper[7553]: I0318 17:49:15.102672 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:15.102762 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:15.102762 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:15.102762 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:15.103720 master-0 kubenswrapper[7553]: I0318 17:49:15.102838 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:16.103428 master-0 kubenswrapper[7553]: I0318 17:49:16.103316 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:16.103428 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:16.103428 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:16.103428 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:16.104400 master-0 kubenswrapper[7553]: I0318 17:49:16.103453 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:16.127291 master-0 kubenswrapper[7553]: I0318 17:49:16.127207 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/2.log"
Mar 18 17:49:16.128013 master-0 kubenswrapper[7553]: I0318 17:49:16.127967 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/1.log"
Mar 18 17:49:16.128927 master-0 kubenswrapper[7553]: I0318 17:49:16.128880 7553 generic.go:334] "Generic (PLEG): container finished" podID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" containerID="beba171ab7d0a9472bc419ea9abba8f00900b048cb7091b8b12412389754787d" exitCode=1
Mar 18 17:49:16.128990 master-0 kubenswrapper[7553]: I0318 17:49:16.128928 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerDied","Data":"beba171ab7d0a9472bc419ea9abba8f00900b048cb7091b8b12412389754787d"}
Mar 18 17:49:16.128990 master-0 kubenswrapper[7553]: I0318 17:49:16.128979 7553 scope.go:117] "RemoveContainer" containerID="e453b81b137f5796ce8487f47777884fb5e361174f46a988fd7c6cf246bf19f4"
Mar 18 17:49:16.130093 master-0 kubenswrapper[7553]: I0318 17:49:16.130050 7553 scope.go:117] "RemoveContainer" containerID="beba171ab7d0a9472bc419ea9abba8f00900b048cb7091b8b12412389754787d"
Mar 18 17:49:16.130759 master-0 kubenswrapper[7553]: E0318 17:49:16.130487 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3"
Mar 18 17:49:17.102741 master-0 kubenswrapper[7553]: I0318 17:49:17.102627 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:17.102741 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:17.102741 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:17.102741 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:17.103244 master-0 kubenswrapper[7553]: I0318 17:49:17.102748 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:17.139127 master-0 kubenswrapper[7553]: I0318 17:49:17.139043 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/2.log"
Mar 18 17:49:18.102789 master-0 kubenswrapper[7553]: I0318 17:49:18.102699 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:18.102789 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:18.102789 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:18.102789 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:18.103085 master-0 kubenswrapper[7553]: I0318 17:49:18.102801 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:19.102941 master-0 kubenswrapper[7553]: I0318 17:49:19.102837 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:19.102941 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:19.102941 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:19.102941 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:19.102941 master-0 kubenswrapper[7553]: I0318 17:49:19.102938 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:20.104512 master-0 kubenswrapper[7553]: I0318 17:49:20.104405 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:20.104512 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:20.104512 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:20.104512 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:20.105630 master-0 kubenswrapper[7553]: I0318 17:49:20.104530 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:20.918629 master-0 kubenswrapper[7553]: I0318 17:49:20.918504 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 17:49:20.920759 master-0 kubenswrapper[7553]: E0318 17:49:20.920681 7553 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 18 17:49:20.920893 master-0 kubenswrapper[7553]: E0318 17:49:20.920873 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 17:49:52.920829484 +0000 UTC m=+483.066664317 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : secret "prometheus-operator-tls" not found
Mar 18 17:49:21.103607 master-0 kubenswrapper[7553]: I0318 17:49:21.103512 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:21.103607 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:21.103607 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:21.103607 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:21.104225 master-0 kubenswrapper[7553]: I0318 17:49:21.103661 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:22.102233 master-0 kubenswrapper[7553]: I0318 17:49:22.102168 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:22.102233 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:22.102233 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:22.102233 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:22.103138 master-0 kubenswrapper[7553]: I0318 17:49:22.102250 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:23.102075 master-0 kubenswrapper[7553]: I0318 17:49:23.101949 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:23.102075 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:23.102075 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:23.102075 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:23.102868 master-0 kubenswrapper[7553]: I0318 17:49:23.102082 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:24.102315 master-0 kubenswrapper[7553]: I0318 17:49:24.102204 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:24.102315 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:24.102315 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:24.102315 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:24.103374 master-0 kubenswrapper[7553]: I0318 17:49:24.102352 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:25.102580 master-0 kubenswrapper[7553]: I0318 17:49:25.102457 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:25.102580 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:25.102580 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:25.102580 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:25.102580 master-0 kubenswrapper[7553]: I0318 17:49:25.102560 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:26.103038 master-0 kubenswrapper[7553]: I0318 17:49:26.102907 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:26.103038 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:26.103038 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:26.103038 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:26.103038 master-0 kubenswrapper[7553]: I0318 17:49:26.103030 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:27.102766 master-0 kubenswrapper[7553]: I0318 17:49:27.102697 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:27.102766 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:27.102766 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:27.102766 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:27.103959 master-0 kubenswrapper[7553]: I0318 17:49:27.103471 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:28.102698 master-0 kubenswrapper[7553]: I0318 17:49:28.102584 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:28.102698 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:28.102698 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:28.102698 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:28.103188 master-0 kubenswrapper[7553]: I0318 17:49:28.102717 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:29.103346 master-0 kubenswrapper[7553]: I0318 17:49:29.103232 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:29.103346 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:29.103346 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:29.103346 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:29.104473 master-0 kubenswrapper[7553]: I0318 17:49:29.103353 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:30.102799 master-0 kubenswrapper[7553]: I0318 17:49:30.102706 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:30.102799 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:30.102799 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:30.102799 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:30.103304 master-0 kubenswrapper[7553]: I0318 17:49:30.102807 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:31.054486 master-0 kubenswrapper[7553]: I0318 17:49:31.054384 7553 scope.go:117] "RemoveContainer" containerID="beba171ab7d0a9472bc419ea9abba8f00900b048cb7091b8b12412389754787d"
Mar 18 17:49:31.055620 master-0 kubenswrapper[7553]: E0318 17:49:31.054760 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3"
Mar 18 17:49:31.102559 master-0 kubenswrapper[7553]: I0318 17:49:31.102489 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:31.102559 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:31.102559 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:31.102559 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:31.103081 master-0 kubenswrapper[7553]: I0318 17:49:31.102580 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:32.103603 master-0 kubenswrapper[7553]: I0318 17:49:32.103516 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:32.103603 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:32.103603 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:32.103603 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:32.104821 master-0 kubenswrapper[7553]: I0318 17:49:32.103625 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:33.102424 master-0 kubenswrapper[7553]: I0318 17:49:33.102350 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:33.102424 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:33.102424 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:33.102424 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:33.102424 master-0 kubenswrapper[7553]: I0318 17:49:33.102442 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:34.103454 master-0 kubenswrapper[7553]: I0318 17:49:34.103353 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:34.103454 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:34.103454 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:34.103454 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:34.104739 master-0 kubenswrapper[7553]: I0318 17:49:34.103482 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:35.103409 master-0 kubenswrapper[7553]: I0318 17:49:35.103248 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:35.103409 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:35.103409 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:35.103409 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:35.104514 master-0 kubenswrapper[7553]: I0318 17:49:35.103451 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:36.102467 master-0 kubenswrapper[7553]: I0318 17:49:36.102076 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:36.102467 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:36.102467 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:36.102467 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:36.102467 master-0 kubenswrapper[7553]: I0318 17:49:36.102185 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:36.513505 master-0 kubenswrapper[7553]: I0318 17:49:36.513238 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"
Mar 18 17:49:36.513505 master-0 kubenswrapper[7553]: I0318 17:49:36.513357 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"
Mar 18 17:49:36.513505 master-0 kubenswrapper[7553]: I0318 17:49:36.513419 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"
Mar 18 17:49:36.513505 master-0 kubenswrapper[7553]: I0318 17:49:36.513489 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:49:36.513505 master-0 kubenswrapper[7553]: E0318 17:49:36.513493 7553 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Mar 18 17:49:36.515173 master-0 kubenswrapper[7553]: I0318 17:49:36.513534 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"
Mar 18 17:49:36.515173 master-0 kubenswrapper[7553]: E0318 17:49:36.513603 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls podName:e0e04440-c08b-452d-9be6-9f70a4027c92 nodeName:}" failed. No retries permitted until 2026-03-18 17:50:40.513574694 +0000 UTC m=+530.659409397 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-xnx8x" (UID: "e0e04440-c08b-452d-9be6-9f70a4027c92") : secret "samples-operator-tls" not found
Mar 18 17:49:36.515173 master-0 kubenswrapper[7553]: E0318 17:49:36.513636 7553 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found
Mar 18 17:49:36.515173 master-0 kubenswrapper[7553]: E0318 17:49:36.513804 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert podName:04cef0bd-f365-4bf6-864a-1895995015d6 nodeName:}" failed. No retries permitted until 2026-03-18 17:50:40.51377868 +0000 UTC m=+530.659613393 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-djgn7" (UID: "04cef0bd-f365-4bf6-864a-1895995015d6") : secret "cloud-credential-operator-serving-cert" not found
Mar 18 17:49:36.515173 master-0 kubenswrapper[7553]: E0318 17:49:36.513890 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Mar 18 17:49:36.515173 master-0 kubenswrapper[7553]: E0318 17:49:36.513961 7553 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found
Mar 18 17:49:36.515173 master-0 kubenswrapper[7553]: E0318 17:49:36.514053 7553 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found
Mar 18 17:49:36.515173 master-0 kubenswrapper[7553]: E0318 17:49:36.513978 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert podName:a94f7bff-ad61-4c53-a8eb-000a13f26971 nodeName:}" failed. No retries permitted until 2026-03-18 17:50:40.513948484 +0000 UTC m=+530.659783197 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert") pod "cluster-autoscaler-operator-866dc4744-l6hpt" (UID: "a94f7bff-ad61-4c53-a8eb-000a13f26971") : secret "cluster-autoscaler-operator-cert" not found
Mar 18 17:49:36.515173 master-0 kubenswrapper[7553]: E0318 17:49:36.514229 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls podName:de189d27-4c60-49f1-9119-d1fde5c37b1e nodeName:}" failed. No retries permitted until 2026-03-18 17:50:40.51415875 +0000 UTC m=+530.659993463 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-zdqtc" (UID: "de189d27-4c60-49f1-9119-d1fde5c37b1e") : secret "control-plane-machine-set-operator-tls" not found
Mar 18 17:49:36.515173 master-0 kubenswrapper[7553]: E0318 17:49:36.514309 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. No retries permitted until 2026-03-18 17:50:40.514259113 +0000 UTC m=+530.660093826 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : secret "machine-approver-tls" not found
Mar 18 17:49:36.615749 master-0 kubenswrapper[7553]: I0318 17:49:36.615687 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p"
Mar 18 17:49:36.616264 master-0 kubenswrapper[7553]: E0318 17:49:36.615946 7553 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found
Mar 18 17:49:36.616411 master-0 kubenswrapper[7553]: E0318 17:49:36.616375 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. No retries permitted until 2026-03-18 17:50:40.61633668 +0000 UTC m=+530.762171543 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : secret "machine-api-operator-tls" not found
Mar 18 17:49:37.102522 master-0 kubenswrapper[7553]: I0318 17:49:37.102378 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:37.102522 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:37.102522 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:37.102522 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:37.102522 master-0 kubenswrapper[7553]: I0318 17:49:37.102510 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:38.102887 master-0 kubenswrapper[7553]: I0318 17:49:38.102773 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:38.102887 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:38.102887 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:38.102887 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:38.104039 master-0 kubenswrapper[7553]: I0318 17:49:38.102903 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:39.102133 master-0 kubenswrapper[7553]: I0318 17:49:39.102068 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:39.102133 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:39.102133 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:39.102133 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:39.102769 master-0 kubenswrapper[7553]: I0318 17:49:39.102727 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:40.103161 master-0 kubenswrapper[7553]: I0318 17:49:40.103087 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:40.103161 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:40.103161 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:40.103161 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:40.103161 master-0 kubenswrapper[7553]: I0318 17:49:40.103159 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:41.102258 master-0 kubenswrapper[7553]: I0318 17:49:41.102195 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:41.102258 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:41.102258 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:41.102258 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:41.102577 master-0 kubenswrapper[7553]: I0318 17:49:41.102294 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:49:42.054897 master-0 kubenswrapper[7553]: I0318 17:49:42.054825 7553 scope.go:117] "RemoveContainer" containerID="beba171ab7d0a9472bc419ea9abba8f00900b048cb7091b8b12412389754787d"
Mar 18 17:49:42.102872 master-0 kubenswrapper[7553]: I0318 17:49:42.102595 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:49:42.102872 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:49:42.102872 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:49:42.102872 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:49:42.102872 master-0 kubenswrapper[7553]: I0318 17:49:42.102679 7553 prober.go:107] "Probe failed" probeType="Startup"
pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:42.363645 master-0 kubenswrapper[7553]: I0318 17:49:42.363609 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/2.log" Mar 18 17:49:42.366493 master-0 kubenswrapper[7553]: I0318 17:49:42.364507 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerStarted","Data":"5ffcea9cb9096ab077b706090befbf6443a5d79ef2d60ff75759b7f2ad4c3c8b"} Mar 18 17:49:42.385811 master-0 kubenswrapper[7553]: I0318 17:49:42.385736 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podStartSLOduration=46.385715481 podStartE2EDuration="46.385715481s" podCreationTimestamp="2026-03-18 17:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:49:42.38498021 +0000 UTC m=+472.530814883" watchObservedRunningTime="2026-03-18 17:49:42.385715481 +0000 UTC m=+472.531550154" Mar 18 17:49:43.103434 master-0 kubenswrapper[7553]: I0318 17:49:43.103330 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:43.103434 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:43.103434 master-0 
kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:43.103434 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:43.104017 master-0 kubenswrapper[7553]: I0318 17:49:43.103475 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:43.373927 master-0 kubenswrapper[7553]: I0318 17:49:43.373735 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/3.log" Mar 18 17:49:43.374705 master-0 kubenswrapper[7553]: I0318 17:49:43.374652 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/2.log" Mar 18 17:49:43.376162 master-0 kubenswrapper[7553]: I0318 17:49:43.376090 7553 generic.go:334] "Generic (PLEG): container finished" podID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" containerID="5ffcea9cb9096ab077b706090befbf6443a5d79ef2d60ff75759b7f2ad4c3c8b" exitCode=1 Mar 18 17:49:43.376335 master-0 kubenswrapper[7553]: I0318 17:49:43.376160 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerDied","Data":"5ffcea9cb9096ab077b706090befbf6443a5d79ef2d60ff75759b7f2ad4c3c8b"} Mar 18 17:49:43.376335 master-0 kubenswrapper[7553]: I0318 17:49:43.376225 7553 scope.go:117] "RemoveContainer" containerID="beba171ab7d0a9472bc419ea9abba8f00900b048cb7091b8b12412389754787d" Mar 18 17:49:43.377010 master-0 kubenswrapper[7553]: I0318 17:49:43.376943 7553 scope.go:117] 
"RemoveContainer" containerID="5ffcea9cb9096ab077b706090befbf6443a5d79ef2d60ff75759b7f2ad4c3c8b" Mar 18 17:49:43.377209 master-0 kubenswrapper[7553]: E0318 17:49:43.377143 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:49:44.102748 master-0 kubenswrapper[7553]: I0318 17:49:44.102663 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:44.102748 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:44.102748 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:44.102748 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:44.103251 master-0 kubenswrapper[7553]: I0318 17:49:44.102763 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:44.385448 master-0 kubenswrapper[7553]: I0318 17:49:44.385227 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/3.log" Mar 18 17:49:45.102260 master-0 kubenswrapper[7553]: I0318 17:49:45.102186 7553 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:45.102260 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:45.102260 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:45.102260 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:45.102553 master-0 kubenswrapper[7553]: I0318 17:49:45.102266 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:46.102059 master-0 kubenswrapper[7553]: I0318 17:49:46.101976 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:46.102059 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:46.102059 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:46.102059 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:46.102782 master-0 kubenswrapper[7553]: I0318 17:49:46.102085 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:47.102592 master-0 kubenswrapper[7553]: I0318 17:49:47.102494 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
17:49:47.102592 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:47.102592 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:47.102592 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:47.102592 master-0 kubenswrapper[7553]: I0318 17:49:47.102578 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:48.102485 master-0 kubenswrapper[7553]: I0318 17:49:48.102396 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:48.102485 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:48.102485 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:48.102485 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:48.103456 master-0 kubenswrapper[7553]: I0318 17:49:48.102512 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:49.101833 master-0 kubenswrapper[7553]: I0318 17:49:49.101767 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:49.101833 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:49.101833 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:49.101833 master-0 kubenswrapper[7553]: healthz 
check failed Mar 18 17:49:49.102225 master-0 kubenswrapper[7553]: I0318 17:49:49.101837 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:50.102803 master-0 kubenswrapper[7553]: I0318 17:49:50.102710 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:50.102803 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:50.102803 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:50.102803 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:50.103942 master-0 kubenswrapper[7553]: I0318 17:49:50.102809 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:51.103997 master-0 kubenswrapper[7553]: I0318 17:49:51.103873 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:51.103997 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:51.103997 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:51.103997 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:51.104977 master-0 kubenswrapper[7553]: I0318 17:49:51.104111 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" 
podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:52.102034 master-0 kubenswrapper[7553]: I0318 17:49:52.101927 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:52.102034 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:52.102034 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:52.102034 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:52.102522 master-0 kubenswrapper[7553]: I0318 17:49:52.102038 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:53.017872 master-0 kubenswrapper[7553]: I0318 17:49:53.017791 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 17:49:53.018558 master-0 kubenswrapper[7553]: E0318 17:49:53.018072 7553 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Mar 18 17:49:53.018558 master-0 kubenswrapper[7553]: E0318 17:49:53.018146 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. 
No retries permitted until 2026-03-18 17:50:57.018120589 +0000 UTC m=+547.163955292 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : secret "prometheus-operator-tls" not found Mar 18 17:49:53.101993 master-0 kubenswrapper[7553]: I0318 17:49:53.101922 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:53.101993 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:53.101993 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:53.101993 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:53.102312 master-0 kubenswrapper[7553]: I0318 17:49:53.102011 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:54.054027 master-0 kubenswrapper[7553]: I0318 17:49:54.053951 7553 scope.go:117] "RemoveContainer" containerID="5ffcea9cb9096ab077b706090befbf6443a5d79ef2d60ff75759b7f2ad4c3c8b" Mar 18 17:49:54.054885 master-0 kubenswrapper[7553]: E0318 17:49:54.054305 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:49:54.103118 master-0 kubenswrapper[7553]: I0318 17:49:54.103003 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:54.103118 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:54.103118 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:54.103118 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:54.103628 master-0 kubenswrapper[7553]: I0318 17:49:54.103124 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:55.103333 master-0 kubenswrapper[7553]: I0318 17:49:55.103208 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:55.103333 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:55.103333 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:55.103333 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:55.104764 master-0 kubenswrapper[7553]: I0318 17:49:55.103378 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:56.102351 master-0 kubenswrapper[7553]: I0318 
17:49:56.102233 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:56.102351 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:56.102351 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:56.102351 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:56.102351 master-0 kubenswrapper[7553]: I0318 17:49:56.102356 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:57.102828 master-0 kubenswrapper[7553]: I0318 17:49:57.102708 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:57.102828 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:57.102828 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:57.102828 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:57.102828 master-0 kubenswrapper[7553]: I0318 17:49:57.102790 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:58.103743 master-0 kubenswrapper[7553]: I0318 17:49:58.103639 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:58.103743 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:58.103743 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:58.103743 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:58.104947 master-0 kubenswrapper[7553]: I0318 17:49:58.103774 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:49:59.103496 master-0 kubenswrapper[7553]: I0318 17:49:59.103425 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:49:59.103496 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:49:59.103496 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:49:59.103496 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:49:59.104599 master-0 kubenswrapper[7553]: I0318 17:49:59.103528 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:00.103601 master-0 kubenswrapper[7553]: I0318 17:50:00.103463 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:00.103601 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:00.103601 master-0 kubenswrapper[7553]: [+]process-running ok 
Mar 18 17:50:00.103601 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:00.103601 master-0 kubenswrapper[7553]: I0318 17:50:00.103576 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:01.103440 master-0 kubenswrapper[7553]: I0318 17:50:01.103335 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:01.103440 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:01.103440 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:01.103440 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:01.104755 master-0 kubenswrapper[7553]: I0318 17:50:01.103441 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:02.102792 master-0 kubenswrapper[7553]: I0318 17:50:02.102702 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:02.102792 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:02.102792 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:02.102792 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:02.103139 master-0 kubenswrapper[7553]: I0318 17:50:02.102814 7553 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:02.548414 master-0 kubenswrapper[7553]: I0318 17:50:02.548229 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/2.log" Mar 18 17:50:02.549526 master-0 kubenswrapper[7553]: I0318 17:50:02.549444 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/1.log" Mar 18 17:50:02.550738 master-0 kubenswrapper[7553]: I0318 17:50:02.550657 7553 generic.go:334] "Generic (PLEG): container finished" podID="7e64a377-f497-4416-8f22-d5c7f52e0b65" containerID="7ee6b0cddd340e9ac4b37b541379d515766ed427e5cb173553e9eea6ace8c5a9" exitCode=1 Mar 18 17:50:02.550738 master-0 kubenswrapper[7553]: I0318 17:50:02.550731 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" event={"ID":"7e64a377-f497-4416-8f22-d5c7f52e0b65","Type":"ContainerDied","Data":"7ee6b0cddd340e9ac4b37b541379d515766ed427e5cb173553e9eea6ace8c5a9"} Mar 18 17:50:02.550954 master-0 kubenswrapper[7553]: I0318 17:50:02.550794 7553 scope.go:117] "RemoveContainer" containerID="02b88785366f3ca67c38ae3fa046b86fa7c95b60c40b090f66977aa12f1b78cb" Mar 18 17:50:02.552088 master-0 kubenswrapper[7553]: I0318 17:50:02.552018 7553 scope.go:117] "RemoveContainer" containerID="7ee6b0cddd340e9ac4b37b541379d515766ed427e5cb173553e9eea6ace8c5a9" Mar 18 17:50:02.552616 master-0 kubenswrapper[7553]: E0318 17:50:02.552531 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator 
pod=ingress-operator-66b84d69b-qb7n6_openshift-ingress-operator(7e64a377-f497-4416-8f22-d5c7f52e0b65)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" podUID="7e64a377-f497-4416-8f22-d5c7f52e0b65" Mar 18 17:50:03.105671 master-0 kubenswrapper[7553]: I0318 17:50:03.105567 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:03.105671 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:03.105671 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:03.105671 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:03.105671 master-0 kubenswrapper[7553]: I0318 17:50:03.105657 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:03.560714 master-0 kubenswrapper[7553]: I0318 17:50:03.560604 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/2.log" Mar 18 17:50:04.103355 master-0 kubenswrapper[7553]: I0318 17:50:04.103249 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:04.103355 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:04.103355 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:04.103355 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:04.103773 master-0 kubenswrapper[7553]: I0318 
17:50:04.103384 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:05.106300 master-0 kubenswrapper[7553]: I0318 17:50:05.106184 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:05.106300 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:05.106300 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:05.106300 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:05.107073 master-0 kubenswrapper[7553]: I0318 17:50:05.106331 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:06.055911 master-0 kubenswrapper[7553]: I0318 17:50:06.055847 7553 scope.go:117] "RemoveContainer" containerID="5ffcea9cb9096ab077b706090befbf6443a5d79ef2d60ff75759b7f2ad4c3c8b" Mar 18 17:50:06.056969 master-0 kubenswrapper[7553]: E0318 17:50:06.056917 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:50:06.103315 master-0 kubenswrapper[7553]: I0318 17:50:06.103197 
7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:06.103315 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:06.103315 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:06.103315 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:06.104405 master-0 kubenswrapper[7553]: I0318 17:50:06.103331 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:07.104633 master-0 kubenswrapper[7553]: I0318 17:50:07.104555 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:07.104633 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:07.104633 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:07.104633 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:07.106056 master-0 kubenswrapper[7553]: I0318 17:50:07.105991 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:08.103034 master-0 kubenswrapper[7553]: I0318 17:50:08.102960 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:08.103034 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:08.103034 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:08.103034 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:08.103709 master-0 kubenswrapper[7553]: I0318 17:50:08.103650 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:09.104130 master-0 kubenswrapper[7553]: I0318 17:50:09.104014 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:09.104130 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:09.104130 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:09.104130 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:09.105207 master-0 kubenswrapper[7553]: I0318 17:50:09.104135 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:10.102231 master-0 kubenswrapper[7553]: I0318 17:50:10.102173 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:10.102231 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:10.102231 master-0 kubenswrapper[7553]: [+]process-running ok 
Mar 18 17:50:10.102231 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:10.102563 master-0 kubenswrapper[7553]: I0318 17:50:10.102251 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:11.102849 master-0 kubenswrapper[7553]: I0318 17:50:11.102774 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:11.102849 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:11.102849 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:11.102849 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:11.103946 master-0 kubenswrapper[7553]: I0318 17:50:11.102866 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:12.109972 master-0 kubenswrapper[7553]: I0318 17:50:12.109876 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:12.109972 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:12.109972 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:12.109972 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:12.110906 master-0 kubenswrapper[7553]: I0318 17:50:12.110001 7553 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:13.102539 master-0 kubenswrapper[7553]: I0318 17:50:13.102420 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:13.102539 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:13.102539 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:13.102539 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:13.102539 master-0 kubenswrapper[7553]: I0318 17:50:13.102529 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:14.103744 master-0 kubenswrapper[7553]: I0318 17:50:14.103610 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:14.103744 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:14.103744 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:14.103744 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:14.104975 master-0 kubenswrapper[7553]: I0318 17:50:14.103747 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:15.054386 
master-0 kubenswrapper[7553]: I0318 17:50:15.054246 7553 scope.go:117] "RemoveContainer" containerID="7ee6b0cddd340e9ac4b37b541379d515766ed427e5cb173553e9eea6ace8c5a9" Mar 18 17:50:15.054883 master-0 kubenswrapper[7553]: E0318 17:50:15.054817 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-qb7n6_openshift-ingress-operator(7e64a377-f497-4416-8f22-d5c7f52e0b65)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" podUID="7e64a377-f497-4416-8f22-d5c7f52e0b65" Mar 18 17:50:15.103122 master-0 kubenswrapper[7553]: I0318 17:50:15.102999 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:15.103122 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:15.103122 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:15.103122 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:15.103122 master-0 kubenswrapper[7553]: I0318 17:50:15.103102 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:16.102877 master-0 kubenswrapper[7553]: I0318 17:50:16.102779 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:16.102877 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:16.102877 master-0 
kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:16.102877 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:16.103866 master-0 kubenswrapper[7553]: I0318 17:50:16.102899 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:17.102608 master-0 kubenswrapper[7553]: I0318 17:50:17.102551 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:17.102608 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:17.102608 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:17.102608 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:17.102924 master-0 kubenswrapper[7553]: I0318 17:50:17.102624 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:18.103011 master-0 kubenswrapper[7553]: I0318 17:50:18.102888 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:18.103011 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:18.103011 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:18.103011 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:18.104255 master-0 kubenswrapper[7553]: I0318 17:50:18.103012 7553 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:19.103142 master-0 kubenswrapper[7553]: I0318 17:50:19.103031 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:19.103142 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:19.103142 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:19.103142 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:19.103142 master-0 kubenswrapper[7553]: I0318 17:50:19.103133 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:20.060204 master-0 kubenswrapper[7553]: I0318 17:50:20.060139 7553 scope.go:117] "RemoveContainer" containerID="5ffcea9cb9096ab077b706090befbf6443a5d79ef2d60ff75759b7f2ad4c3c8b" Mar 18 17:50:20.061029 master-0 kubenswrapper[7553]: E0318 17:50:20.060975 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:50:20.103133 master-0 kubenswrapper[7553]: I0318 17:50:20.103041 7553 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:20.103133 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:20.103133 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:20.103133 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:20.103133 master-0 kubenswrapper[7553]: I0318 17:50:20.103120 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:21.102678 master-0 kubenswrapper[7553]: I0318 17:50:21.102518 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:21.102678 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:21.102678 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:21.102678 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:21.102678 master-0 kubenswrapper[7553]: I0318 17:50:21.102658 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:22.102762 master-0 kubenswrapper[7553]: I0318 17:50:22.102661 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
17:50:22.102762 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:22.102762 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:22.102762 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:22.103807 master-0 kubenswrapper[7553]: I0318 17:50:22.102777 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:23.103661 master-0 kubenswrapper[7553]: I0318 17:50:23.103574 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:23.103661 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:23.103661 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:23.103661 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:23.104794 master-0 kubenswrapper[7553]: I0318 17:50:23.104740 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:24.104168 master-0 kubenswrapper[7553]: I0318 17:50:24.104062 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:24.104168 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:24.104168 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:24.104168 master-0 kubenswrapper[7553]: healthz 
check failed Mar 18 17:50:24.105139 master-0 kubenswrapper[7553]: I0318 17:50:24.104170 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:25.103418 master-0 kubenswrapper[7553]: I0318 17:50:25.103299 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:25.103418 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:25.103418 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:25.103418 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:25.103418 master-0 kubenswrapper[7553]: I0318 17:50:25.103397 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:26.104031 master-0 kubenswrapper[7553]: I0318 17:50:26.103890 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:26.104031 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:26.104031 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:26.104031 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:26.105206 master-0 kubenswrapper[7553]: I0318 17:50:26.104046 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" 
podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:27.054122 master-0 kubenswrapper[7553]: I0318 17:50:27.054033 7553 scope.go:117] "RemoveContainer" containerID="7ee6b0cddd340e9ac4b37b541379d515766ed427e5cb173553e9eea6ace8c5a9" Mar 18 17:50:27.104263 master-0 kubenswrapper[7553]: I0318 17:50:27.104196 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:27.104263 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:27.104263 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:27.104263 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:27.105111 master-0 kubenswrapper[7553]: I0318 17:50:27.104311 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:27.750410 master-0 kubenswrapper[7553]: I0318 17:50:27.750350 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/2.log" Mar 18 17:50:27.750882 master-0 kubenswrapper[7553]: I0318 17:50:27.750844 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" event={"ID":"7e64a377-f497-4416-8f22-d5c7f52e0b65","Type":"ContainerStarted","Data":"283b61599e310047ed75a28fad3754db0725837893f44d2709551e02ebb45040"} Mar 18 17:50:28.102683 master-0 kubenswrapper[7553]: I0318 17:50:28.102590 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:28.102683 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:28.102683 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:28.102683 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:28.103220 master-0 kubenswrapper[7553]: I0318 17:50:28.102709 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:29.103506 master-0 kubenswrapper[7553]: I0318 17:50:29.103419 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:29.103506 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:29.103506 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:29.103506 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:29.104307 master-0 kubenswrapper[7553]: I0318 17:50:29.103524 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:30.103120 master-0 kubenswrapper[7553]: I0318 17:50:30.103039 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:30.103120 master-0 kubenswrapper[7553]: 
[-]has-synced failed: reason withheld Mar 18 17:50:30.103120 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:30.103120 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:30.104194 master-0 kubenswrapper[7553]: I0318 17:50:30.103144 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:31.102531 master-0 kubenswrapper[7553]: I0318 17:50:31.102443 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:31.102531 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:31.102531 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:31.102531 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:31.103035 master-0 kubenswrapper[7553]: I0318 17:50:31.102557 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:32.053940 master-0 kubenswrapper[7553]: I0318 17:50:32.053863 7553 scope.go:117] "RemoveContainer" containerID="5ffcea9cb9096ab077b706090befbf6443a5d79ef2d60ff75759b7f2ad4c3c8b" Mar 18 17:50:32.103311 master-0 kubenswrapper[7553]: I0318 17:50:32.103221 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:32.103311 master-0 kubenswrapper[7553]: [-]has-synced failed: reason 
withheld Mar 18 17:50:32.103311 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:32.103311 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:32.103311 master-0 kubenswrapper[7553]: I0318 17:50:32.103308 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:32.790873 master-0 kubenswrapper[7553]: I0318 17:50:32.790653 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/4.log" Mar 18 17:50:32.791688 master-0 kubenswrapper[7553]: I0318 17:50:32.791631 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/3.log" Mar 18 17:50:32.792757 master-0 kubenswrapper[7553]: I0318 17:50:32.792682 7553 generic.go:334] "Generic (PLEG): container finished" podID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" containerID="5c16981b35905733515d1e248adaa6df57536596a8a44ed7adeee9fde518db63" exitCode=1 Mar 18 17:50:32.792887 master-0 kubenswrapper[7553]: I0318 17:50:32.792744 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerDied","Data":"5c16981b35905733515d1e248adaa6df57536596a8a44ed7adeee9fde518db63"} Mar 18 17:50:32.792887 master-0 kubenswrapper[7553]: I0318 17:50:32.792846 7553 scope.go:117] "RemoveContainer" containerID="5ffcea9cb9096ab077b706090befbf6443a5d79ef2d60ff75759b7f2ad4c3c8b" Mar 18 17:50:32.794197 master-0 kubenswrapper[7553]: I0318 
17:50:32.794156 7553 scope.go:117] "RemoveContainer" containerID="5c16981b35905733515d1e248adaa6df57536596a8a44ed7adeee9fde518db63" Mar 18 17:50:32.794628 master-0 kubenswrapper[7553]: E0318 17:50:32.794565 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:50:33.103299 master-0 kubenswrapper[7553]: I0318 17:50:33.103203 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:33.103299 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:33.103299 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:33.103299 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:33.104008 master-0 kubenswrapper[7553]: I0318 17:50:33.103348 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:33.804187 master-0 kubenswrapper[7553]: I0318 17:50:33.804100 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/4.log" Mar 18 17:50:34.102881 master-0 kubenswrapper[7553]: I0318 17:50:34.102725 7553 patch_prober.go:28] 
interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:34.102881 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:34.102881 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:34.102881 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:34.102881 master-0 kubenswrapper[7553]: I0318 17:50:34.102809 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:35.103330 master-0 kubenswrapper[7553]: I0318 17:50:35.103171 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:35.103330 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:35.103330 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:35.103330 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:35.103330 master-0 kubenswrapper[7553]: I0318 17:50:35.103313 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:35.565757 master-0 kubenswrapper[7553]: E0318 17:50:35.565643 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-approver-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" 
pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" podUID="92153864-7959-4482-bf24-c8db36435fb5" Mar 18 17:50:35.604049 master-0 kubenswrapper[7553]: E0318 17:50:35.603947 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[samples-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" podUID="e0e04440-c08b-452d-9be6-9f70a4027c92" Mar 18 17:50:35.694679 master-0 kubenswrapper[7553]: E0318 17:50:35.694573 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cloud-credential-operator-serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" podUID="04cef0bd-f365-4bf6-864a-1895995015d6" Mar 18 17:50:35.729749 master-0 kubenswrapper[7553]: E0318 17:50:35.729646 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[control-plane-machine-set-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" podUID="de189d27-4c60-49f1-9119-d1fde5c37b1e" Mar 18 17:50:35.762580 master-0 kubenswrapper[7553]: E0318 17:50:35.762187 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" podUID="a94f7bff-ad61-4c53-a8eb-000a13f26971" Mar 18 17:50:35.823520 master-0 kubenswrapper[7553]: I0318 17:50:35.823151 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 17:50:35.823520 master-0 kubenswrapper[7553]: I0318 17:50:35.823307 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:50:35.823520 master-0 kubenswrapper[7553]: I0318 17:50:35.823306 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:50:35.824146 master-0 kubenswrapper[7553]: I0318 17:50:35.823631 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 17:50:35.835244 master-0 kubenswrapper[7553]: E0318 17:50:35.835183 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-api-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" podUID="2d21e77e-8b61-4f03-8f17-941b7a1d8b1d" Mar 18 17:50:36.103895 master-0 kubenswrapper[7553]: I0318 17:50:36.103745 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:36.103895 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:36.103895 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:36.103895 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:36.103895 master-0 kubenswrapper[7553]: I0318 17:50:36.103851 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:36.830314 master-0 kubenswrapper[7553]: I0318 17:50:36.830204 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:50:37.102344 master-0 kubenswrapper[7553]: I0318 17:50:37.102191 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:37.102344 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:37.102344 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:37.102344 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:37.102344 master-0 kubenswrapper[7553]: I0318 17:50:37.102320 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:38.102995 master-0 kubenswrapper[7553]: I0318 17:50:38.102856 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:38.102995 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:38.102995 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:38.102995 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:38.104087 master-0 kubenswrapper[7553]: I0318 17:50:38.103002 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:39.102432 master-0 kubenswrapper[7553]: I0318 17:50:39.102362 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:39.102432 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:39.102432 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:39.102432 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:39.102823 master-0 kubenswrapper[7553]: I0318 17:50:39.102450 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:40.103404 master-0 kubenswrapper[7553]: I0318 17:50:40.103342 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:40.103404 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:40.103404 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:40.103404 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:40.104203 master-0 kubenswrapper[7553]: I0318 17:50:40.104166 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:40.560092 master-0 kubenswrapper[7553]: I0318 17:50:40.560014 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 17:50:40.560744 master-0 kubenswrapper[7553]: I0318 17:50:40.560695 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:50:40.561057 master-0 kubenswrapper[7553]: I0318 17:50:40.561010 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 17:50:40.561407 master-0 kubenswrapper[7553]: I0318 17:50:40.561363 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 17:50:40.561881 master-0 kubenswrapper[7553]: I0318 17:50:40.561838 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:50:40.562165 master-0 kubenswrapper[7553]: E0318 17:50:40.560257 7553 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Mar 18 17:50:40.562512 master-0 kubenswrapper[7553]: E0318 17:50:40.562473 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls podName:e0e04440-c08b-452d-9be6-9f70a4027c92 nodeName:}" failed. No retries permitted until 2026-03-18 17:52:42.56243214 +0000 UTC m=+652.708266853 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-xnx8x" (UID: "e0e04440-c08b-452d-9be6-9f70a4027c92") : secret "samples-operator-tls" not found Mar 18 17:50:40.562751 master-0 kubenswrapper[7553]: E0318 17:50:40.560877 7553 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 18 17:50:40.562947 master-0 kubenswrapper[7553]: E0318 17:50:40.561140 7553 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Mar 18 17:50:40.563120 master-0 kubenswrapper[7553]: E0318 17:50:40.561504 7553 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Mar 18 17:50:40.563332 master-0 kubenswrapper[7553]: E0318 17:50:40.562059 7553 secret.go:189] Couldn't get secret 
openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Mar 18 17:50:40.563533 master-0 kubenswrapper[7553]: E0318 17:50:40.563185 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert podName:04cef0bd-f365-4bf6-864a-1895995015d6 nodeName:}" failed. No retries permitted until 2026-03-18 17:52:42.563099148 +0000 UTC m=+652.708934031 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-djgn7" (UID: "04cef0bd-f365-4bf6-864a-1895995015d6") : secret "cloud-credential-operator-serving-cert" not found Mar 18 17:50:40.563778 master-0 kubenswrapper[7553]: E0318 17:50:40.563747 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls podName:de189d27-4c60-49f1-9119-d1fde5c37b1e nodeName:}" failed. No retries permitted until 2026-03-18 17:52:42.563707074 +0000 UTC m=+652.709541867 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-zdqtc" (UID: "de189d27-4c60-49f1-9119-d1fde5c37b1e") : secret "control-plane-machine-set-operator-tls" not found Mar 18 17:50:40.564024 master-0 kubenswrapper[7553]: E0318 17:50:40.563996 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. 
No retries permitted until 2026-03-18 17:52:42.56396437 +0000 UTC m=+652.709799273 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : secret "machine-approver-tls" not found Mar 18 17:50:40.564498 master-0 kubenswrapper[7553]: E0318 17:50:40.564468 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert podName:a94f7bff-ad61-4c53-a8eb-000a13f26971 nodeName:}" failed. No retries permitted until 2026-03-18 17:52:42.564435694 +0000 UTC m=+652.710270597 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert") pod "cluster-autoscaler-operator-866dc4744-l6hpt" (UID: "a94f7bff-ad61-4c53-a8eb-000a13f26971") : secret "cluster-autoscaler-operator-cert" not found Mar 18 17:50:40.663994 master-0 kubenswrapper[7553]: I0318 17:50:40.663901 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:50:40.664405 master-0 kubenswrapper[7553]: E0318 17:50:40.664130 7553 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Mar 18 17:50:40.664405 master-0 kubenswrapper[7553]: E0318 17:50:40.664193 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. 
No retries permitted until 2026-03-18 17:52:42.664174163 +0000 UTC m=+652.810008836 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : secret "machine-api-operator-tls" not found Mar 18 17:50:41.102430 master-0 kubenswrapper[7553]: I0318 17:50:41.102347 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:41.102430 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:41.102430 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:41.102430 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:41.102913 master-0 kubenswrapper[7553]: I0318 17:50:41.102478 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:42.102144 master-0 kubenswrapper[7553]: I0318 17:50:42.102064 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:42.102144 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:42.102144 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:42.102144 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:42.102931 master-0 kubenswrapper[7553]: I0318 17:50:42.102176 7553 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:43.102196 master-0 kubenswrapper[7553]: I0318 17:50:43.102105 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:43.102196 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:43.102196 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:43.102196 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:43.102196 master-0 kubenswrapper[7553]: I0318 17:50:43.102189 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:44.102554 master-0 kubenswrapper[7553]: I0318 17:50:44.102470 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:44.102554 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:44.102554 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:44.102554 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:44.103116 master-0 kubenswrapper[7553]: I0318 17:50:44.102617 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
18 17:50:45.103774 master-0 kubenswrapper[7553]: I0318 17:50:45.103674 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:45.103774 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:45.103774 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:45.103774 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:45.104583 master-0 kubenswrapper[7553]: I0318 17:50:45.103798 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:46.102587 master-0 kubenswrapper[7553]: I0318 17:50:46.102513 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:46.102587 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:46.102587 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:46.102587 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:46.103024 master-0 kubenswrapper[7553]: I0318 17:50:46.102626 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:47.053507 master-0 kubenswrapper[7553]: I0318 17:50:47.053449 7553 scope.go:117] "RemoveContainer" containerID="5c16981b35905733515d1e248adaa6df57536596a8a44ed7adeee9fde518db63" Mar 18 17:50:47.054115 master-0 
kubenswrapper[7553]: E0318 17:50:47.053713 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:50:47.101680 master-0 kubenswrapper[7553]: I0318 17:50:47.101606 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:50:47.101680 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:50:47.101680 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:50:47.101680 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:50:47.102178 master-0 kubenswrapper[7553]: I0318 17:50:47.101725 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:50:47.102178 master-0 kubenswrapper[7553]: I0318 17:50:47.101807 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:50:47.102712 master-0 kubenswrapper[7553]: I0318 17:50:47.102672 7553 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"3d1a4c794f84645b132cca3ce7dc17d228df153769dd3f1d6b34979465df7e8d"} pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" containerMessage="Container 
router failed startup probe, will be restarted" Mar 18 17:50:47.102789 master-0 kubenswrapper[7553]: I0318 17:50:47.102734 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" containerID="cri-o://3d1a4c794f84645b132cca3ce7dc17d228df153769dd3f1d6b34979465df7e8d" gracePeriod=3600 Mar 18 17:50:51.052554 master-0 kubenswrapper[7553]: I0318 17:50:51.052467 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 17:50:52.024190 master-0 kubenswrapper[7553]: E0318 17:50:52.024078 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" podUID="9c0dbd44-7669-41d6-bf1b-d8c1343c9d98" Mar 18 17:50:52.959680 master-0 kubenswrapper[7553]: I0318 17:50:52.959572 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 17:50:57.091601 master-0 kubenswrapper[7553]: I0318 17:50:57.091490 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 17:50:57.093202 master-0 kubenswrapper[7553]: E0318 17:50:57.091726 7553 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Mar 18 17:50:57.093202 master-0 kubenswrapper[7553]: E0318 17:50:57.091834 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 17:52:59.091810142 +0000 UTC m=+669.237644825 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : secret "prometheus-operator-tls" not found Mar 18 17:51:00.056192 master-0 kubenswrapper[7553]: I0318 17:51:00.056125 7553 scope.go:117] "RemoveContainer" containerID="5c16981b35905733515d1e248adaa6df57536596a8a44ed7adeee9fde518db63" Mar 18 17:51:00.057246 master-0 kubenswrapper[7553]: E0318 17:51:00.056358 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:51:11.053490 master-0 kubenswrapper[7553]: I0318 17:51:11.053419 7553 scope.go:117] "RemoveContainer" containerID="5c16981b35905733515d1e248adaa6df57536596a8a44ed7adeee9fde518db63" Mar 18 17:51:11.054301 master-0 kubenswrapper[7553]: E0318 17:51:11.053781 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:51:23.053267 master-0 kubenswrapper[7553]: I0318 17:51:23.053203 7553 scope.go:117] "RemoveContainer" 
containerID="5c16981b35905733515d1e248adaa6df57536596a8a44ed7adeee9fde518db63" Mar 18 17:51:23.053841 master-0 kubenswrapper[7553]: E0318 17:51:23.053485 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:51:33.321242 master-0 kubenswrapper[7553]: I0318 17:51:33.320967 7553 generic.go:334] "Generic (PLEG): container finished" podID="c57f282a-829b-41b2-827a-f4bc598245a2" containerID="3d1a4c794f84645b132cca3ce7dc17d228df153769dd3f1d6b34979465df7e8d" exitCode=0 Mar 18 17:51:33.321242 master-0 kubenswrapper[7553]: I0318 17:51:33.321067 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" event={"ID":"c57f282a-829b-41b2-827a-f4bc598245a2","Type":"ContainerDied","Data":"3d1a4c794f84645b132cca3ce7dc17d228df153769dd3f1d6b34979465df7e8d"} Mar 18 17:51:34.331554 master-0 kubenswrapper[7553]: I0318 17:51:34.331444 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" event={"ID":"c57f282a-829b-41b2-827a-f4bc598245a2","Type":"ContainerStarted","Data":"3be88236d1075355721a3a53c0d6a8b5bc0a4bd441e11b9ae0dd32cd30599a9f"} Mar 18 17:51:35.099990 master-0 kubenswrapper[7553]: I0318 17:51:35.099869 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:51:35.106334 master-0 kubenswrapper[7553]: I0318 17:51:35.106181 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:35.106334 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:35.106334 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:35.106334 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:35.106761 master-0 kubenswrapper[7553]: I0318 17:51:35.106346 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:36.054630 master-0 kubenswrapper[7553]: I0318 17:51:36.054528 7553 scope.go:117] "RemoveContainer" containerID="5c16981b35905733515d1e248adaa6df57536596a8a44ed7adeee9fde518db63" Mar 18 17:51:36.055956 master-0 kubenswrapper[7553]: E0318 17:51:36.055030 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:51:36.103513 master-0 kubenswrapper[7553]: I0318 17:51:36.103408 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:36.103513 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:36.103513 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:36.103513 master-0 
kubenswrapper[7553]: healthz check failed Mar 18 17:51:36.103977 master-0 kubenswrapper[7553]: I0318 17:51:36.103514 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:37.102845 master-0 kubenswrapper[7553]: I0318 17:51:37.102729 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:37.102845 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:37.102845 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:37.102845 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:37.104004 master-0 kubenswrapper[7553]: I0318 17:51:37.102879 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:38.103449 master-0 kubenswrapper[7553]: I0318 17:51:38.103192 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:38.103449 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:38.103449 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:38.103449 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:38.104606 master-0 kubenswrapper[7553]: I0318 17:51:38.103478 7553 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:39.102983 master-0 kubenswrapper[7553]: I0318 17:51:39.102923 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:39.102983 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:39.102983 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:39.102983 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:39.103499 master-0 kubenswrapper[7553]: I0318 17:51:39.103469 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:40.102654 master-0 kubenswrapper[7553]: I0318 17:51:40.102604 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:40.102654 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:40.102654 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:40.102654 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:40.103074 master-0 kubenswrapper[7553]: I0318 17:51:40.103043 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:41.101646 
master-0 kubenswrapper[7553]: I0318 17:51:41.101563 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:41.101646 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:41.101646 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:41.101646 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:41.101646 master-0 kubenswrapper[7553]: I0318 17:51:41.101629 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:42.102985 master-0 kubenswrapper[7553]: I0318 17:51:42.102838 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:42.102985 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:42.102985 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:42.102985 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:42.104489 master-0 kubenswrapper[7553]: I0318 17:51:42.102989 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:43.100373 master-0 kubenswrapper[7553]: I0318 17:51:43.100315 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:51:43.102948 master-0 
kubenswrapper[7553]: I0318 17:51:43.102908 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:43.102948 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:43.102948 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:43.102948 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:43.103991 master-0 kubenswrapper[7553]: I0318 17:51:43.103949 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:44.103045 master-0 kubenswrapper[7553]: I0318 17:51:44.102953 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:44.103045 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:44.103045 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:44.103045 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:44.104021 master-0 kubenswrapper[7553]: I0318 17:51:44.103064 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:45.102688 master-0 kubenswrapper[7553]: I0318 17:51:45.102567 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:45.102688 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:45.102688 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:45.102688 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:45.103120 master-0 kubenswrapper[7553]: I0318 17:51:45.102696 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:46.103004 master-0 kubenswrapper[7553]: I0318 17:51:46.102890 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:46.103004 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:46.103004 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:46.103004 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:46.104077 master-0 kubenswrapper[7553]: I0318 17:51:46.103022 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:47.102850 master-0 kubenswrapper[7553]: I0318 17:51:47.102775 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:47.102850 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:47.102850 master-0 kubenswrapper[7553]: 
[+]process-running ok Mar 18 17:51:47.102850 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:47.103210 master-0 kubenswrapper[7553]: I0318 17:51:47.102870 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:48.102486 master-0 kubenswrapper[7553]: I0318 17:51:48.102407 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:48.102486 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:48.102486 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:48.102486 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:48.103134 master-0 kubenswrapper[7553]: I0318 17:51:48.102511 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:49.102520 master-0 kubenswrapper[7553]: I0318 17:51:49.102436 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:49.102520 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:49.102520 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:49.102520 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:49.102520 master-0 kubenswrapper[7553]: I0318 17:51:49.102514 7553 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:50.102521 master-0 kubenswrapper[7553]: I0318 17:51:50.102454 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:50.102521 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:50.102521 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:50.102521 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:50.103388 master-0 kubenswrapper[7553]: I0318 17:51:50.102552 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:51.053472 master-0 kubenswrapper[7553]: I0318 17:51:51.053397 7553 scope.go:117] "RemoveContainer" containerID="5c16981b35905733515d1e248adaa6df57536596a8a44ed7adeee9fde518db63" Mar 18 17:51:51.053827 master-0 kubenswrapper[7553]: E0318 17:51:51.053759 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:51:51.102572 master-0 kubenswrapper[7553]: I0318 17:51:51.102472 7553 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:51.102572 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:51.102572 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:51.102572 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:51.103428 master-0 kubenswrapper[7553]: I0318 17:51:51.102575 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:52.103006 master-0 kubenswrapper[7553]: I0318 17:51:52.102928 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:52.103006 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:52.103006 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:52.103006 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:52.103786 master-0 kubenswrapper[7553]: I0318 17:51:52.103037 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:53.103095 master-0 kubenswrapper[7553]: I0318 17:51:53.103044 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
17:51:53.103095 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:53.103095 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:53.103095 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:53.104058 master-0 kubenswrapper[7553]: I0318 17:51:53.104021 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:54.103227 master-0 kubenswrapper[7553]: I0318 17:51:54.103171 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:54.103227 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:54.103227 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:54.103227 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:54.104265 master-0 kubenswrapper[7553]: I0318 17:51:54.104218 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:55.103341 master-0 kubenswrapper[7553]: I0318 17:51:55.103237 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:55.103341 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:55.103341 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:55.103341 master-0 kubenswrapper[7553]: healthz 
check failed Mar 18 17:51:55.104424 master-0 kubenswrapper[7553]: I0318 17:51:55.103363 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:56.102424 master-0 kubenswrapper[7553]: I0318 17:51:56.102306 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:56.102424 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:56.102424 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:56.102424 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:56.102780 master-0 kubenswrapper[7553]: I0318 17:51:56.102473 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:57.103625 master-0 kubenswrapper[7553]: I0318 17:51:57.103478 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:57.103625 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:57.103625 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:57.103625 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:57.103625 master-0 kubenswrapper[7553]: I0318 17:51:57.103608 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" 
podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:58.103828 master-0 kubenswrapper[7553]: I0318 17:51:58.103759 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:58.103828 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:58.103828 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:58.103828 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:58.103828 master-0 kubenswrapper[7553]: I0318 17:51:58.103825 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:51:59.102569 master-0 kubenswrapper[7553]: I0318 17:51:59.102483 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:51:59.102569 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:51:59.102569 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:51:59.102569 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:51:59.102993 master-0 kubenswrapper[7553]: I0318 17:51:59.102573 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:00.102794 master-0 kubenswrapper[7553]: I0318 17:52:00.102720 7553 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:00.102794 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:00.102794 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:00.102794 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:00.103504 master-0 kubenswrapper[7553]: I0318 17:52:00.102832 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:01.103067 master-0 kubenswrapper[7553]: I0318 17:52:01.102994 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:01.103067 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:01.103067 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:01.103067 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:01.103989 master-0 kubenswrapper[7553]: I0318 17:52:01.103072 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:02.102236 master-0 kubenswrapper[7553]: I0318 17:52:02.102152 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:02.102236 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:02.102236 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:02.102236 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:02.102716 master-0 kubenswrapper[7553]: I0318 17:52:02.102259 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:03.102062 master-0 kubenswrapper[7553]: I0318 17:52:03.101987 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:03.102062 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:03.102062 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:03.102062 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:03.103174 master-0 kubenswrapper[7553]: I0318 17:52:03.102098 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:04.102346 master-0 kubenswrapper[7553]: I0318 17:52:04.102254 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:04.102346 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:04.102346 master-0 kubenswrapper[7553]: [+]process-running ok 
Mar 18 17:52:04.102346 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:04.103217 master-0 kubenswrapper[7553]: I0318 17:52:04.102365 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:05.102322 master-0 kubenswrapper[7553]: I0318 17:52:05.102189 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:05.102322 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:05.102322 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:05.102322 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:05.103574 master-0 kubenswrapper[7553]: I0318 17:52:05.102365 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:06.054305 master-0 kubenswrapper[7553]: I0318 17:52:06.054242 7553 scope.go:117] "RemoveContainer" containerID="5c16981b35905733515d1e248adaa6df57536596a8a44ed7adeee9fde518db63" Mar 18 17:52:06.103229 master-0 kubenswrapper[7553]: I0318 17:52:06.102841 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:06.103229 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:06.103229 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:06.103229 
master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:06.103229 master-0 kubenswrapper[7553]: I0318 17:52:06.102967 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:06.591903 master-0 kubenswrapper[7553]: I0318 17:52:06.591770 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/5.log" Mar 18 17:52:06.592771 master-0 kubenswrapper[7553]: I0318 17:52:06.592725 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/4.log" Mar 18 17:52:06.593930 master-0 kubenswrapper[7553]: I0318 17:52:06.593891 7553 generic.go:334] "Generic (PLEG): container finished" podID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" exitCode=1 Mar 18 17:52:06.594001 master-0 kubenswrapper[7553]: I0318 17:52:06.593934 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerDied","Data":"6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5"} Mar 18 17:52:06.594001 master-0 kubenswrapper[7553]: I0318 17:52:06.593981 7553 scope.go:117] "RemoveContainer" containerID="5c16981b35905733515d1e248adaa6df57536596a8a44ed7adeee9fde518db63" Mar 18 17:52:06.594825 master-0 kubenswrapper[7553]: I0318 17:52:06.594783 7553 scope.go:117] "RemoveContainer" 
containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:52:06.595079 master-0 kubenswrapper[7553]: E0318 17:52:06.595038 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:52:07.104724 master-0 kubenswrapper[7553]: I0318 17:52:07.104614 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:07.104724 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:07.104724 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:07.104724 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:07.106046 master-0 kubenswrapper[7553]: I0318 17:52:07.105609 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:07.609533 master-0 kubenswrapper[7553]: I0318 17:52:07.609432 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/5.log" Mar 18 17:52:08.103753 master-0 kubenswrapper[7553]: I0318 17:52:08.103655 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:08.103753 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:08.103753 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:08.103753 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:08.104249 master-0 kubenswrapper[7553]: I0318 17:52:08.103833 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:09.103949 master-0 kubenswrapper[7553]: I0318 17:52:09.103790 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:09.103949 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:09.103949 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:09.103949 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:09.103949 master-0 kubenswrapper[7553]: I0318 17:52:09.103930 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:10.103095 master-0 kubenswrapper[7553]: I0318 17:52:10.102992 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:10.103095 master-0 
kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:10.103095 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:10.103095 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:10.103483 master-0 kubenswrapper[7553]: I0318 17:52:10.103111 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:11.103426 master-0 kubenswrapper[7553]: I0318 17:52:11.103340 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:11.103426 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:11.103426 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:11.103426 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:11.104368 master-0 kubenswrapper[7553]: I0318 17:52:11.103458 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:12.104171 master-0 kubenswrapper[7553]: I0318 17:52:12.104065 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:12.104171 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:12.104171 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:12.104171 master-0 kubenswrapper[7553]: healthz check failed Mar 18 
17:52:12.105353 master-0 kubenswrapper[7553]: I0318 17:52:12.104172 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:13.132396 master-0 kubenswrapper[7553]: I0318 17:52:13.132213 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:13.132396 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:13.132396 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:13.132396 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:13.133397 master-0 kubenswrapper[7553]: I0318 17:52:13.132434 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:14.102971 master-0 kubenswrapper[7553]: I0318 17:52:14.102906 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:14.102971 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:14.102971 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:14.102971 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:14.103473 master-0 kubenswrapper[7553]: I0318 17:52:14.102981 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" 
podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:15.101924 master-0 kubenswrapper[7553]: I0318 17:52:15.101790 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:15.101924 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:15.101924 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:15.101924 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:15.102763 master-0 kubenswrapper[7553]: I0318 17:52:15.101978 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:16.102048 master-0 kubenswrapper[7553]: I0318 17:52:16.101971 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:16.102048 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:16.102048 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:16.102048 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:16.102611 master-0 kubenswrapper[7553]: I0318 17:52:16.102072 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:17.102518 master-0 kubenswrapper[7553]: I0318 17:52:17.102450 7553 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:17.102518 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:17.102518 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:17.102518 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:17.103612 master-0 kubenswrapper[7553]: I0318 17:52:17.103513 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:18.103127 master-0 kubenswrapper[7553]: I0318 17:52:18.102997 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:18.103127 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:18.103127 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:18.103127 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:18.103127 master-0 kubenswrapper[7553]: I0318 17:52:18.103126 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:19.054655 master-0 kubenswrapper[7553]: I0318 17:52:19.054573 7553 scope.go:117] "RemoveContainer" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:52:19.055026 master-0 kubenswrapper[7553]: E0318 17:52:19.054840 7553 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:52:19.103463 master-0 kubenswrapper[7553]: I0318 17:52:19.103363 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:19.103463 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:19.103463 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:19.103463 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:19.104190 master-0 kubenswrapper[7553]: I0318 17:52:19.103501 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:20.102191 master-0 kubenswrapper[7553]: I0318 17:52:20.102103 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:20.102191 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:20.102191 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:20.102191 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:20.102684 master-0 kubenswrapper[7553]: I0318 17:52:20.102201 
7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:21.102299 master-0 kubenswrapper[7553]: I0318 17:52:21.102162 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:21.102299 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:21.102299 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:21.102299 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:21.103052 master-0 kubenswrapper[7553]: I0318 17:52:21.102324 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:22.102496 master-0 kubenswrapper[7553]: I0318 17:52:22.102410 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:22.102496 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:22.102496 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:22.102496 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:22.103356 master-0 kubenswrapper[7553]: I0318 17:52:22.102511 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 18 17:52:23.102983 master-0 kubenswrapper[7553]: I0318 17:52:23.102847 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:23.102983 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:23.102983 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:23.102983 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:23.102983 master-0 kubenswrapper[7553]: I0318 17:52:23.102939 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:24.102389 master-0 kubenswrapper[7553]: I0318 17:52:24.102258 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:24.102389 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:24.102389 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:24.102389 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:24.102389 master-0 kubenswrapper[7553]: I0318 17:52:24.102377 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:25.103870 master-0 kubenswrapper[7553]: I0318 17:52:25.103753 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:25.103870 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:25.103870 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:25.103870 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:25.104748 master-0 kubenswrapper[7553]: I0318 17:52:25.103884 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:26.102988 master-0 kubenswrapper[7553]: I0318 17:52:26.102891 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:26.102988 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:26.102988 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:26.102988 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:26.103480 master-0 kubenswrapper[7553]: I0318 17:52:26.103008 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:27.102342 master-0 kubenswrapper[7553]: I0318 17:52:27.102224 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:27.102342 master-0 kubenswrapper[7553]: 
[-]has-synced failed: reason withheld Mar 18 17:52:27.102342 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:27.102342 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:27.103262 master-0 kubenswrapper[7553]: I0318 17:52:27.102398 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:28.105340 master-0 kubenswrapper[7553]: I0318 17:52:28.102376 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:28.105340 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:28.105340 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:28.105340 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:28.105340 master-0 kubenswrapper[7553]: I0318 17:52:28.102474 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:28.778653 master-0 kubenswrapper[7553]: I0318 17:52:28.778461 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/3.log" Mar 18 17:52:28.779494 master-0 kubenswrapper[7553]: I0318 17:52:28.779459 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/2.log" Mar 18 17:52:28.780297 master-0 kubenswrapper[7553]: I0318 17:52:28.780204 
7553 generic.go:334] "Generic (PLEG): container finished" podID="7e64a377-f497-4416-8f22-d5c7f52e0b65" containerID="283b61599e310047ed75a28fad3754db0725837893f44d2709551e02ebb45040" exitCode=1 Mar 18 17:52:28.780383 master-0 kubenswrapper[7553]: I0318 17:52:28.780329 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" event={"ID":"7e64a377-f497-4416-8f22-d5c7f52e0b65","Type":"ContainerDied","Data":"283b61599e310047ed75a28fad3754db0725837893f44d2709551e02ebb45040"} Mar 18 17:52:28.780549 master-0 kubenswrapper[7553]: I0318 17:52:28.780519 7553 scope.go:117] "RemoveContainer" containerID="7ee6b0cddd340e9ac4b37b541379d515766ed427e5cb173553e9eea6ace8c5a9" Mar 18 17:52:28.781987 master-0 kubenswrapper[7553]: I0318 17:52:28.781947 7553 scope.go:117] "RemoveContainer" containerID="283b61599e310047ed75a28fad3754db0725837893f44d2709551e02ebb45040" Mar 18 17:52:28.782505 master-0 kubenswrapper[7553]: E0318 17:52:28.782251 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-qb7n6_openshift-ingress-operator(7e64a377-f497-4416-8f22-d5c7f52e0b65)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" podUID="7e64a377-f497-4416-8f22-d5c7f52e0b65" Mar 18 17:52:29.102664 master-0 kubenswrapper[7553]: I0318 17:52:29.102548 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:29.102664 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:29.102664 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:29.102664 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:29.103197 
master-0 kubenswrapper[7553]: I0318 17:52:29.102696 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:29.794429 master-0 kubenswrapper[7553]: I0318 17:52:29.794334 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/3.log" Mar 18 17:52:30.103075 master-0 kubenswrapper[7553]: I0318 17:52:30.103005 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:30.103075 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:30.103075 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:30.103075 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:30.103589 master-0 kubenswrapper[7553]: I0318 17:52:30.103098 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:31.103480 master-0 kubenswrapper[7553]: I0318 17:52:31.103326 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:31.103480 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:31.103480 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:31.103480 master-0 kubenswrapper[7553]: 
healthz check failed Mar 18 17:52:31.104313 master-0 kubenswrapper[7553]: I0318 17:52:31.103518 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:32.102535 master-0 kubenswrapper[7553]: I0318 17:52:32.102416 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:32.102535 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:32.102535 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:32.102535 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:32.102535 master-0 kubenswrapper[7553]: I0318 17:52:32.102534 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:33.054544 master-0 kubenswrapper[7553]: I0318 17:52:33.054456 7553 scope.go:117] "RemoveContainer" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:52:33.055401 master-0 kubenswrapper[7553]: E0318 17:52:33.054896 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" 
podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:52:33.102565 master-0 kubenswrapper[7553]: I0318 17:52:33.102454 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:33.102565 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:33.102565 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:33.102565 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:33.102565 master-0 kubenswrapper[7553]: I0318 17:52:33.102554 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:34.102345 master-0 kubenswrapper[7553]: I0318 17:52:34.102263 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:34.102345 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:34.102345 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:34.102345 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:34.103547 master-0 kubenswrapper[7553]: I0318 17:52:34.102395 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:35.103040 master-0 kubenswrapper[7553]: I0318 17:52:35.102901 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:35.103040 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:35.103040 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:35.103040 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:35.103040 master-0 kubenswrapper[7553]: I0318 17:52:35.102999 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:36.102429 master-0 kubenswrapper[7553]: I0318 17:52:36.102331 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:36.102429 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:36.102429 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:36.102429 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:36.102429 master-0 kubenswrapper[7553]: I0318 17:52:36.102412 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:37.103086 master-0 kubenswrapper[7553]: I0318 17:52:37.102998 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:37.103086 master-0 kubenswrapper[7553]: 
[-]has-synced failed: reason withheld Mar 18 17:52:37.103086 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:37.103086 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:37.103798 master-0 kubenswrapper[7553]: I0318 17:52:37.103107 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:38.102858 master-0 kubenswrapper[7553]: I0318 17:52:38.102745 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:38.102858 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:38.102858 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:38.102858 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:38.102858 master-0 kubenswrapper[7553]: I0318 17:52:38.102842 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:38.825469 master-0 kubenswrapper[7553]: E0318 17:52:38.825341 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cloud-credential-operator-serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" podUID="04cef0bd-f365-4bf6-864a-1895995015d6" Mar 18 17:52:38.825469 master-0 kubenswrapper[7553]: E0318 17:52:38.825388 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], 
failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" podUID="a94f7bff-ad61-4c53-a8eb-000a13f26971" Mar 18 17:52:38.825804 master-0 kubenswrapper[7553]: E0318 17:52:38.825535 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[control-plane-machine-set-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" podUID="de189d27-4c60-49f1-9119-d1fde5c37b1e" Mar 18 17:52:38.825804 master-0 kubenswrapper[7553]: E0318 17:52:38.825763 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[samples-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" podUID="e0e04440-c08b-452d-9be6-9f70a4027c92" Mar 18 17:52:38.866722 master-0 kubenswrapper[7553]: I0318 17:52:38.866597 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:52:38.866722 master-0 kubenswrapper[7553]: I0318 17:52:38.866654 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 17:52:38.866722 master-0 kubenswrapper[7553]: I0318 17:52:38.866683 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:52:38.866722 master-0 kubenswrapper[7553]: I0318 17:52:38.866610 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 17:52:39.103207 master-0 kubenswrapper[7553]: I0318 17:52:39.103012 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:39.103207 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:39.103207 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:39.103207 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:39.103207 master-0 kubenswrapper[7553]: I0318 17:52:39.103101 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:39.831541 master-0 kubenswrapper[7553]: E0318 17:52:39.831414 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-api-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" podUID="2d21e77e-8b61-4f03-8f17-941b7a1d8b1d" Mar 18 17:52:39.874542 master-0 kubenswrapper[7553]: I0318 17:52:39.874455 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:52:40.102979 master-0 kubenswrapper[7553]: I0318 17:52:40.102833 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:40.102979 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:40.102979 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:40.102979 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:40.102979 master-0 kubenswrapper[7553]: I0318 17:52:40.102947 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:41.102395 master-0 kubenswrapper[7553]: I0318 17:52:41.102260 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:41.102395 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:41.102395 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:41.102395 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:41.102395 master-0 kubenswrapper[7553]: I0318 17:52:41.102384 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:42.102694 master-0 kubenswrapper[7553]: I0318 17:52:42.102597 7553 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:42.102694 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:42.102694 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:42.102694 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:42.103773 master-0 kubenswrapper[7553]: I0318 17:52:42.102729 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:42.625369 master-0 kubenswrapper[7553]: I0318 17:52:42.625256 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:52:42.625660 master-0 kubenswrapper[7553]: I0318 17:52:42.625384 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 17:52:42.625660 master-0 kubenswrapper[7553]: I0318 17:52:42.625461 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 17:52:42.625660 master-0 kubenswrapper[7553]: E0318 17:52:42.625632 7553 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Mar 18 17:52:42.625950 master-0 kubenswrapper[7553]: E0318 17:52:42.625714 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. No retries permitted until 2026-03-18 17:54:44.625689791 +0000 UTC m=+774.771524504 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : secret "machine-approver-tls" not found Mar 18 17:52:42.625950 master-0 kubenswrapper[7553]: E0318 17:52:42.625724 7553 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Mar 18 17:52:42.625950 master-0 kubenswrapper[7553]: E0318 17:52:42.625878 7553 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 18 17:52:42.625950 master-0 kubenswrapper[7553]: I0318 17:52:42.625876 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:52:42.625950 master-0 kubenswrapper[7553]: E0318 17:52:42.625929 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert podName:04cef0bd-f365-4bf6-864a-1895995015d6 nodeName:}" failed. No retries permitted until 2026-03-18 17:54:44.625903767 +0000 UTC m=+774.771738550 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-djgn7" (UID: "04cef0bd-f365-4bf6-864a-1895995015d6") : secret "cloud-credential-operator-serving-cert" not found Mar 18 17:52:42.626483 master-0 kubenswrapper[7553]: E0318 17:52:42.626001 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Mar 18 17:52:42.626483 master-0 kubenswrapper[7553]: E0318 17:52:42.626022 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls podName:de189d27-4c60-49f1-9119-d1fde5c37b1e nodeName:}" failed. No retries permitted until 2026-03-18 17:54:44.62601359 +0000 UTC m=+774.771848383 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-zdqtc" (UID: "de189d27-4c60-49f1-9119-d1fde5c37b1e") : secret "control-plane-machine-set-operator-tls" not found Mar 18 17:52:42.626483 master-0 kubenswrapper[7553]: E0318 17:52:42.626094 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert podName:a94f7bff-ad61-4c53-a8eb-000a13f26971 nodeName:}" failed. No retries permitted until 2026-03-18 17:54:44.626072611 +0000 UTC m=+774.771907324 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert") pod "cluster-autoscaler-operator-866dc4744-l6hpt" (UID: "a94f7bff-ad61-4c53-a8eb-000a13f26971") : secret "cluster-autoscaler-operator-cert" not found Mar 18 17:52:42.626483 master-0 kubenswrapper[7553]: I0318 17:52:42.626265 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 17:52:42.626483 master-0 kubenswrapper[7553]: E0318 17:52:42.626435 7553 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Mar 18 17:52:42.626863 master-0 kubenswrapper[7553]: E0318 17:52:42.626505 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls podName:e0e04440-c08b-452d-9be6-9f70a4027c92 nodeName:}" failed. 
No retries permitted until 2026-03-18 17:54:44.626487463 +0000 UTC m=+774.772322176 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-xnx8x" (UID: "e0e04440-c08b-452d-9be6-9f70a4027c92") : secret "samples-operator-tls" not found Mar 18 17:52:42.727605 master-0 kubenswrapper[7553]: I0318 17:52:42.727507 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:52:42.727934 master-0 kubenswrapper[7553]: E0318 17:52:42.727720 7553 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Mar 18 17:52:42.727934 master-0 kubenswrapper[7553]: E0318 17:52:42.727821 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. No retries permitted until 2026-03-18 17:54:44.727793758 +0000 UTC m=+774.873628471 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : secret "machine-api-operator-tls" not found Mar 18 17:52:43.103456 master-0 kubenswrapper[7553]: I0318 17:52:43.103342 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:43.103456 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:43.103456 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:43.103456 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:43.103456 master-0 kubenswrapper[7553]: I0318 17:52:43.103438 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:44.053759 master-0 kubenswrapper[7553]: I0318 17:52:44.053687 7553 scope.go:117] "RemoveContainer" containerID="283b61599e310047ed75a28fad3754db0725837893f44d2709551e02ebb45040" Mar 18 17:52:44.054227 master-0 kubenswrapper[7553]: E0318 17:52:44.054178 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-qb7n6_openshift-ingress-operator(7e64a377-f497-4416-8f22-d5c7f52e0b65)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" podUID="7e64a377-f497-4416-8f22-d5c7f52e0b65" Mar 18 17:52:44.102365 master-0 kubenswrapper[7553]: I0318 17:52:44.102257 7553 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:44.102365 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:44.102365 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:44.102365 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:44.102838 master-0 kubenswrapper[7553]: I0318 17:52:44.102382 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:45.102760 master-0 kubenswrapper[7553]: I0318 17:52:45.102649 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:45.102760 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:45.102760 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:45.102760 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:45.102760 master-0 kubenswrapper[7553]: I0318 17:52:45.102744 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:46.053455 master-0 kubenswrapper[7553]: I0318 17:52:46.053367 7553 scope.go:117] "RemoveContainer" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:52:46.053995 master-0 kubenswrapper[7553]: E0318 17:52:46.053725 7553 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:52:46.102133 master-0 kubenswrapper[7553]: I0318 17:52:46.102053 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:46.102133 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:46.102133 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:46.102133 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:46.102699 master-0 kubenswrapper[7553]: I0318 17:52:46.102152 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:47.103263 master-0 kubenswrapper[7553]: I0318 17:52:47.103188 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:47.103263 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:47.103263 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:47.103263 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:47.104707 master-0 kubenswrapper[7553]: I0318 17:52:47.103353 7553 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:48.102978 master-0 kubenswrapper[7553]: I0318 17:52:48.102847 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:48.102978 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:48.102978 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:48.102978 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:48.104004 master-0 kubenswrapper[7553]: I0318 17:52:48.103043 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:49.102134 master-0 kubenswrapper[7553]: I0318 17:52:49.102039 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:49.102134 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:49.102134 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:49.102134 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:49.103336 master-0 kubenswrapper[7553]: I0318 17:52:49.103203 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 18 17:52:50.103438 master-0 kubenswrapper[7553]: I0318 17:52:50.103167 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:50.103438 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:50.103438 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:50.103438 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:50.103438 master-0 kubenswrapper[7553]: I0318 17:52:50.103250 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:51.101928 master-0 kubenswrapper[7553]: I0318 17:52:51.101861 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:51.101928 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:51.101928 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:51.101928 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:51.102395 master-0 kubenswrapper[7553]: I0318 17:52:51.101949 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:52.102846 master-0 kubenswrapper[7553]: I0318 17:52:52.102738 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:52.102846 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:52.102846 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:52.102846 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:52.103824 master-0 kubenswrapper[7553]: I0318 17:52:52.102865 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:53.102226 master-0 kubenswrapper[7553]: I0318 17:52:53.102111 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:53.102226 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:53.102226 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:53.102226 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:53.103860 master-0 kubenswrapper[7553]: I0318 17:52:53.102233 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:54.053925 master-0 kubenswrapper[7553]: E0318 17:52:54.053843 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-approver-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" podUID="92153864-7959-4482-bf24-c8db36435fb5" Mar 18 17:52:54.102765 master-0 
kubenswrapper[7553]: I0318 17:52:54.102667 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:54.102765 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:54.102765 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:54.102765 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:54.102765 master-0 kubenswrapper[7553]: I0318 17:52:54.102757 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:55.103112 master-0 kubenswrapper[7553]: I0318 17:52:55.103022 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:55.103112 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:55.103112 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:55.103112 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:55.103924 master-0 kubenswrapper[7553]: I0318 17:52:55.103137 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:55.962054 master-0 kubenswrapper[7553]: E0318 17:52:55.961915 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline 
exceeded" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" podUID="9c0dbd44-7669-41d6-bf1b-d8c1343c9d98" Mar 18 17:52:56.049170 master-0 kubenswrapper[7553]: I0318 17:52:56.049070 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 17:52:56.104155 master-0 kubenswrapper[7553]: I0318 17:52:56.104043 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:56.104155 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:56.104155 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:56.104155 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:56.105199 master-0 kubenswrapper[7553]: I0318 17:52:56.104158 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:57.053230 master-0 kubenswrapper[7553]: I0318 17:52:57.053132 7553 scope.go:117] "RemoveContainer" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:52:57.053614 master-0 kubenswrapper[7553]: E0318 17:52:57.053383 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 
17:52:57.102579 master-0 kubenswrapper[7553]: I0318 17:52:57.102455 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:57.102579 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:57.102579 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:57.102579 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:57.103012 master-0 kubenswrapper[7553]: I0318 17:52:57.102610 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:58.054108 master-0 kubenswrapper[7553]: I0318 17:52:58.054010 7553 scope.go:117] "RemoveContainer" containerID="283b61599e310047ed75a28fad3754db0725837893f44d2709551e02ebb45040" Mar 18 17:52:58.054908 master-0 kubenswrapper[7553]: E0318 17:52:58.054382 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-qb7n6_openshift-ingress-operator(7e64a377-f497-4416-8f22-d5c7f52e0b65)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" podUID="7e64a377-f497-4416-8f22-d5c7f52e0b65" Mar 18 17:52:58.103182 master-0 kubenswrapper[7553]: I0318 17:52:58.103092 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:58.103182 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 
17:52:58.103182 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:58.103182 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:58.103712 master-0 kubenswrapper[7553]: I0318 17:52:58.103209 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:59.102341 master-0 kubenswrapper[7553]: I0318 17:52:59.102251 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:52:59.102341 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:52:59.102341 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:52:59.102341 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:52:59.103254 master-0 kubenswrapper[7553]: I0318 17:52:59.102354 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:52:59.157335 master-0 kubenswrapper[7553]: I0318 17:52:59.157238 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 17:52:59.158061 master-0 kubenswrapper[7553]: E0318 17:52:59.158020 7553 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not 
found Mar 18 17:52:59.158120 master-0 kubenswrapper[7553]: E0318 17:52:59.158090 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 17:55:01.158068495 +0000 UTC m=+791.303903168 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : secret "prometheus-operator-tls" not found Mar 18 17:53:00.103082 master-0 kubenswrapper[7553]: I0318 17:53:00.103020 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:00.103082 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:00.103082 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:00.103082 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:00.103719 master-0 kubenswrapper[7553]: I0318 17:53:00.103107 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:01.102300 master-0 kubenswrapper[7553]: I0318 17:53:01.102217 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:01.102300 master-0 kubenswrapper[7553]: [-]has-synced failed: 
reason withheld Mar 18 17:53:01.102300 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:01.102300 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:01.102780 master-0 kubenswrapper[7553]: I0318 17:53:01.102313 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:02.101970 master-0 kubenswrapper[7553]: I0318 17:53:02.101867 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:02.101970 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:02.101970 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:02.101970 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:02.102986 master-0 kubenswrapper[7553]: I0318 17:53:02.102001 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:03.102293 master-0 kubenswrapper[7553]: I0318 17:53:03.102166 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:03.102293 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:03.102293 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:03.102293 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:03.102293 master-0 kubenswrapper[7553]: 
I0318 17:53:03.102261 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:04.103503 master-0 kubenswrapper[7553]: I0318 17:53:04.103431 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:04.103503 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:04.103503 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:04.103503 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:04.104411 master-0 kubenswrapper[7553]: I0318 17:53:04.103540 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:05.053046 master-0 kubenswrapper[7553]: I0318 17:53:05.052925 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 17:53:05.102914 master-0 kubenswrapper[7553]: I0318 17:53:05.102796 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:05.102914 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:05.102914 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:05.102914 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:05.102914 master-0 kubenswrapper[7553]: I0318 17:53:05.102905 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:06.102982 master-0 kubenswrapper[7553]: I0318 17:53:06.102895 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:06.102982 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:06.102982 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:06.102982 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:06.103821 master-0 kubenswrapper[7553]: I0318 17:53:06.102993 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:07.103460 master-0 kubenswrapper[7553]: I0318 17:53:07.103362 7553 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:07.103460 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:07.103460 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:07.103460 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:07.103460 master-0 kubenswrapper[7553]: I0318 17:53:07.103453 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:08.054855 master-0 kubenswrapper[7553]: I0318 17:53:08.054757 7553 scope.go:117] "RemoveContainer" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:53:08.055321 master-0 kubenswrapper[7553]: E0318 17:53:08.055156 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:53:08.101637 master-0 kubenswrapper[7553]: I0318 17:53:08.101557 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:08.101637 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:08.101637 master-0 
kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:08.101637 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:08.102034 master-0 kubenswrapper[7553]: I0318 17:53:08.101647 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:09.102965 master-0 kubenswrapper[7553]: I0318 17:53:09.102859 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:09.102965 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:09.102965 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:09.102965 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:09.104356 master-0 kubenswrapper[7553]: I0318 17:53:09.104305 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:10.102389 master-0 kubenswrapper[7553]: I0318 17:53:10.102311 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:10.102389 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:10.102389 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:10.102389 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:10.102854 master-0 kubenswrapper[7553]: I0318 17:53:10.102425 7553 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:10.169036 master-0 kubenswrapper[7553]: I0318 17:53:10.168968 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vcrq9"] Mar 18 17:53:10.169929 master-0 kubenswrapper[7553]: I0318 17:53:10.169898 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:10.172136 master-0 kubenswrapper[7553]: I0318 17:53:10.172071 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 18 17:53:10.173189 master-0 kubenswrapper[7553]: I0318 17:53:10.173135 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-pjrlq" Mar 18 17:53:10.238762 master-0 kubenswrapper[7553]: I0318 17:53:10.238618 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-ready\") pod \"cni-sysctl-allowlist-ds-vcrq9\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:10.239108 master-0 kubenswrapper[7553]: I0318 17:53:10.238861 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pxfp\" (UniqueName: \"kubernetes.io/projected/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-kube-api-access-6pxfp\") pod \"cni-sysctl-allowlist-ds-vcrq9\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:10.239108 master-0 kubenswrapper[7553]: I0318 17:53:10.238940 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vcrq9\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:10.239203 master-0 kubenswrapper[7553]: I0318 17:53:10.239149 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vcrq9\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:10.340632 master-0 kubenswrapper[7553]: I0318 17:53:10.340525 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vcrq9\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:10.341139 master-0 kubenswrapper[7553]: I0318 17:53:10.341069 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vcrq9\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:10.341246 master-0 kubenswrapper[7553]: I0318 17:53:10.341228 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-vcrq9\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:10.341355 master-0 kubenswrapper[7553]: 
I0318 17:53:10.341247 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-ready\") pod \"cni-sysctl-allowlist-ds-vcrq9\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:10.341544 master-0 kubenswrapper[7553]: I0318 17:53:10.341517 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pxfp\" (UniqueName: \"kubernetes.io/projected/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-kube-api-access-6pxfp\") pod \"cni-sysctl-allowlist-ds-vcrq9\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:10.341995 master-0 kubenswrapper[7553]: I0318 17:53:10.341950 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-ready\") pod \"cni-sysctl-allowlist-ds-vcrq9\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:10.342161 master-0 kubenswrapper[7553]: I0318 17:53:10.342046 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-vcrq9\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:10.359095 master-0 kubenswrapper[7553]: I0318 17:53:10.358965 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pxfp\" (UniqueName: \"kubernetes.io/projected/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-kube-api-access-6pxfp\") pod \"cni-sysctl-allowlist-ds-vcrq9\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:10.492002 
master-0 kubenswrapper[7553]: I0318 17:53:10.491918 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:10.519714 master-0 kubenswrapper[7553]: W0318 17:53:10.519654 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c99b809_ecbb_44a2_8dd2_9f4523948f9e.slice/crio-47c964a48b33217f7685b63a417a1aa1b96e75afdcfa50c8510c71012596727c WatchSource:0}: Error finding container 47c964a48b33217f7685b63a417a1aa1b96e75afdcfa50c8510c71012596727c: Status 404 returned error can't find the container with id 47c964a48b33217f7685b63a417a1aa1b96e75afdcfa50c8510c71012596727c Mar 18 17:53:11.102063 master-0 kubenswrapper[7553]: I0318 17:53:11.102004 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:11.102063 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:11.102063 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:11.102063 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:11.102708 master-0 kubenswrapper[7553]: I0318 17:53:11.102098 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:11.158902 master-0 kubenswrapper[7553]: I0318 17:53:11.158828 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" event={"ID":"6c99b809-ecbb-44a2-8dd2-9f4523948f9e","Type":"ContainerStarted","Data":"1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7"} Mar 18 17:53:11.158902 master-0 
kubenswrapper[7553]: I0318 17:53:11.158898 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" event={"ID":"6c99b809-ecbb-44a2-8dd2-9f4523948f9e","Type":"ContainerStarted","Data":"47c964a48b33217f7685b63a417a1aa1b96e75afdcfa50c8510c71012596727c"} Mar 18 17:53:11.159404 master-0 kubenswrapper[7553]: I0318 17:53:11.159380 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:11.184367 master-0 kubenswrapper[7553]: I0318 17:53:11.184267 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" podStartSLOduration=1.184241622 podStartE2EDuration="1.184241622s" podCreationTimestamp="2026-03-18 17:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:53:11.181017827 +0000 UTC m=+681.326852520" watchObservedRunningTime="2026-03-18 17:53:11.184241622 +0000 UTC m=+681.330076295" Mar 18 17:53:11.186706 master-0 kubenswrapper[7553]: I0318 17:53:11.186650 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:12.102989 master-0 kubenswrapper[7553]: I0318 17:53:12.102921 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:12.102989 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:12.102989 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:12.102989 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:12.103408 master-0 kubenswrapper[7553]: I0318 17:53:12.103000 7553 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:12.188836 master-0 kubenswrapper[7553]: I0318 17:53:12.188759 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vcrq9"] Mar 18 17:53:13.053684 master-0 kubenswrapper[7553]: I0318 17:53:13.053633 7553 scope.go:117] "RemoveContainer" containerID="283b61599e310047ed75a28fad3754db0725837893f44d2709551e02ebb45040" Mar 18 17:53:13.102745 master-0 kubenswrapper[7553]: I0318 17:53:13.102657 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:13.102745 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:13.102745 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:13.102745 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:13.102745 master-0 kubenswrapper[7553]: I0318 17:53:13.102736 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:14.102213 master-0 kubenswrapper[7553]: I0318 17:53:14.102128 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:14.102213 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:14.102213 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:14.102213 master-0 
kubenswrapper[7553]: healthz check failed Mar 18 17:53:14.102213 master-0 kubenswrapper[7553]: I0318 17:53:14.102222 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:14.200013 master-0 kubenswrapper[7553]: I0318 17:53:14.193138 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/3.log" Mar 18 17:53:14.200013 master-0 kubenswrapper[7553]: I0318 17:53:14.194048 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" podUID="6c99b809-ecbb-44a2-8dd2-9f4523948f9e" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7" gracePeriod=30 Mar 18 17:53:14.200013 master-0 kubenswrapper[7553]: I0318 17:53:14.195102 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" event={"ID":"7e64a377-f497-4416-8f22-d5c7f52e0b65","Type":"ContainerStarted","Data":"af159afaa033efb036b878b04bdffa8fd814f7fc2cf559b2f4b190fa136e0905"} Mar 18 17:53:15.102209 master-0 kubenswrapper[7553]: I0318 17:53:15.102139 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:15.102209 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:15.102209 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:15.102209 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:15.103012 master-0 kubenswrapper[7553]: 
I0318 17:53:15.102974 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:16.103170 master-0 kubenswrapper[7553]: I0318 17:53:16.103059 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:16.103170 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:16.103170 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:16.103170 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:16.103920 master-0 kubenswrapper[7553]: I0318 17:53:16.103219 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:16.551172 master-0 kubenswrapper[7553]: I0318 17:53:16.551117 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 17:53:16.552040 master-0 kubenswrapper[7553]: I0318 17:53:16.552012 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 17:53:16.554717 master-0 kubenswrapper[7553]: I0318 17:53:16.554689 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 17:53:16.558408 master-0 kubenswrapper[7553]: I0318 17:53:16.558377 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-cskqs" Mar 18 17:53:16.570158 master-0 kubenswrapper[7553]: I0318 17:53:16.570104 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 17:53:16.650827 master-0 kubenswrapper[7553]: I0318 17:53:16.650731 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98c88ce7-94dd-434c-99fc-96d900d544e6-var-lock\") pod \"installer-3-master-0\" (UID: \"98c88ce7-94dd-434c-99fc-96d900d544e6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 17:53:16.651149 master-0 kubenswrapper[7553]: I0318 17:53:16.651067 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98c88ce7-94dd-434c-99fc-96d900d544e6-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"98c88ce7-94dd-434c-99fc-96d900d544e6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 17:53:16.651297 master-0 kubenswrapper[7553]: I0318 17:53:16.651240 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98c88ce7-94dd-434c-99fc-96d900d544e6-kube-api-access\") pod \"installer-3-master-0\" (UID: \"98c88ce7-94dd-434c-99fc-96d900d544e6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 17:53:16.752927 master-0 
kubenswrapper[7553]: I0318 17:53:16.752847 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98c88ce7-94dd-434c-99fc-96d900d544e6-var-lock\") pod \"installer-3-master-0\" (UID: \"98c88ce7-94dd-434c-99fc-96d900d544e6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 17:53:16.752927 master-0 kubenswrapper[7553]: I0318 17:53:16.752936 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98c88ce7-94dd-434c-99fc-96d900d544e6-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"98c88ce7-94dd-434c-99fc-96d900d544e6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 17:53:16.753196 master-0 kubenswrapper[7553]: I0318 17:53:16.752984 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98c88ce7-94dd-434c-99fc-96d900d544e6-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"98c88ce7-94dd-434c-99fc-96d900d544e6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 17:53:16.753196 master-0 kubenswrapper[7553]: I0318 17:53:16.752989 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98c88ce7-94dd-434c-99fc-96d900d544e6-var-lock\") pod \"installer-3-master-0\" (UID: \"98c88ce7-94dd-434c-99fc-96d900d544e6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 17:53:16.753196 master-0 kubenswrapper[7553]: I0318 17:53:16.753029 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98c88ce7-94dd-434c-99fc-96d900d544e6-kube-api-access\") pod \"installer-3-master-0\" (UID: \"98c88ce7-94dd-434c-99fc-96d900d544e6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 17:53:16.774394 master-0 
kubenswrapper[7553]: I0318 17:53:16.774306 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98c88ce7-94dd-434c-99fc-96d900d544e6-kube-api-access\") pod \"installer-3-master-0\" (UID: \"98c88ce7-94dd-434c-99fc-96d900d544e6\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 17:53:16.873349 master-0 kubenswrapper[7553]: I0318 17:53:16.873163 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 17:53:17.106073 master-0 kubenswrapper[7553]: I0318 17:53:17.105068 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:17.106073 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:17.106073 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:17.106073 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:17.106073 master-0 kubenswrapper[7553]: I0318 17:53:17.105171 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:17.335751 master-0 kubenswrapper[7553]: I0318 17:53:17.335660 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 17:53:17.607624 master-0 kubenswrapper[7553]: I0318 17:53:17.607505 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 18 17:53:17.608558 master-0 kubenswrapper[7553]: I0318 17:53:17.608527 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 17:53:17.611794 master-0 kubenswrapper[7553]: I0318 17:53:17.611747 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-jkbcl" Mar 18 17:53:17.613187 master-0 kubenswrapper[7553]: I0318 17:53:17.613163 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 18 17:53:17.617888 master-0 kubenswrapper[7553]: I0318 17:53:17.617817 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 18 17:53:17.673266 master-0 kubenswrapper[7553]: I0318 17:53:17.673181 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd9d8bd7-68a0-458f-9d25-f600932e303c-kube-api-access\") pod \"installer-2-master-0\" (UID: \"cd9d8bd7-68a0-458f-9d25-f600932e303c\") " pod="openshift-etcd/installer-2-master-0" Mar 18 17:53:17.673502 master-0 kubenswrapper[7553]: I0318 17:53:17.673364 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd9d8bd7-68a0-458f-9d25-f600932e303c-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"cd9d8bd7-68a0-458f-9d25-f600932e303c\") " pod="openshift-etcd/installer-2-master-0" Mar 18 17:53:17.673502 master-0 kubenswrapper[7553]: I0318 17:53:17.673425 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd9d8bd7-68a0-458f-9d25-f600932e303c-var-lock\") pod \"installer-2-master-0\" (UID: \"cd9d8bd7-68a0-458f-9d25-f600932e303c\") " pod="openshift-etcd/installer-2-master-0" Mar 18 17:53:17.775302 master-0 kubenswrapper[7553]: I0318 17:53:17.775207 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/cd9d8bd7-68a0-458f-9d25-f600932e303c-kube-api-access\") pod \"installer-2-master-0\" (UID: \"cd9d8bd7-68a0-458f-9d25-f600932e303c\") " pod="openshift-etcd/installer-2-master-0" Mar 18 17:53:17.775302 master-0 kubenswrapper[7553]: I0318 17:53:17.775302 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd9d8bd7-68a0-458f-9d25-f600932e303c-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"cd9d8bd7-68a0-458f-9d25-f600932e303c\") " pod="openshift-etcd/installer-2-master-0" Mar 18 17:53:17.775622 master-0 kubenswrapper[7553]: I0318 17:53:17.775440 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd9d8bd7-68a0-458f-9d25-f600932e303c-var-lock\") pod \"installer-2-master-0\" (UID: \"cd9d8bd7-68a0-458f-9d25-f600932e303c\") " pod="openshift-etcd/installer-2-master-0" Mar 18 17:53:17.775691 master-0 kubenswrapper[7553]: I0318 17:53:17.775589 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd9d8bd7-68a0-458f-9d25-f600932e303c-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"cd9d8bd7-68a0-458f-9d25-f600932e303c\") " pod="openshift-etcd/installer-2-master-0" Mar 18 17:53:17.775759 master-0 kubenswrapper[7553]: I0318 17:53:17.775703 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd9d8bd7-68a0-458f-9d25-f600932e303c-var-lock\") pod \"installer-2-master-0\" (UID: \"cd9d8bd7-68a0-458f-9d25-f600932e303c\") " pod="openshift-etcd/installer-2-master-0" Mar 18 17:53:17.795215 master-0 kubenswrapper[7553]: I0318 17:53:17.795130 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd9d8bd7-68a0-458f-9d25-f600932e303c-kube-api-access\") pod 
\"installer-2-master-0\" (UID: \"cd9d8bd7-68a0-458f-9d25-f600932e303c\") " pod="openshift-etcd/installer-2-master-0" Mar 18 17:53:17.951136 master-0 kubenswrapper[7553]: I0318 17:53:17.950922 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 17:53:18.103503 master-0 kubenswrapper[7553]: I0318 17:53:18.103419 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:18.103503 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:18.103503 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:18.103503 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:18.103878 master-0 kubenswrapper[7553]: I0318 17:53:18.103528 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:18.223102 master-0 kubenswrapper[7553]: I0318 17:53:18.222892 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"98c88ce7-94dd-434c-99fc-96d900d544e6","Type":"ContainerStarted","Data":"f946a82c484d87fe7448697a732facf5002625190cba529f3bfbd4dceece22e3"} Mar 18 17:53:18.223102 master-0 kubenswrapper[7553]: I0318 17:53:18.222992 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"98c88ce7-94dd-434c-99fc-96d900d544e6","Type":"ContainerStarted","Data":"c257b7064ba1ee282a10d14ba9ea68bf5e64596dfd922f601f3ce37e1e2104a5"} Mar 18 17:53:18.252180 master-0 kubenswrapper[7553]: I0318 17:53:18.250393 7553 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.250346573 podStartE2EDuration="2.250346573s" podCreationTimestamp="2026-03-18 17:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:53:18.247820016 +0000 UTC m=+688.393654709" watchObservedRunningTime="2026-03-18 17:53:18.250346573 +0000 UTC m=+688.396181266" Mar 18 17:53:18.413106 master-0 kubenswrapper[7553]: I0318 17:53:18.413060 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 18 17:53:18.420763 master-0 kubenswrapper[7553]: W0318 17:53:18.420587 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcd9d8bd7_68a0_458f_9d25_f600932e303c.slice/crio-bc6ed26ff47dcb63cc0618959e2aa5d6fdd1facec54c0eb66675504b09f0fb7c WatchSource:0}: Error finding container bc6ed26ff47dcb63cc0618959e2aa5d6fdd1facec54c0eb66675504b09f0fb7c: Status 404 returned error can't find the container with id bc6ed26ff47dcb63cc0618959e2aa5d6fdd1facec54c0eb66675504b09f0fb7c Mar 18 17:53:19.102559 master-0 kubenswrapper[7553]: I0318 17:53:19.102457 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:19.102559 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:19.102559 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:19.102559 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:19.102897 master-0 kubenswrapper[7553]: I0318 17:53:19.102581 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" 
podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:19.232996 master-0 kubenswrapper[7553]: I0318 17:53:19.232911 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"cd9d8bd7-68a0-458f-9d25-f600932e303c","Type":"ContainerStarted","Data":"c609c2b3b4935f3bff5c215911aef6aecfcc54b41e1023b5431ec59542ec2f9d"} Mar 18 17:53:19.232996 master-0 kubenswrapper[7553]: I0318 17:53:19.232994 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"cd9d8bd7-68a0-458f-9d25-f600932e303c","Type":"ContainerStarted","Data":"bc6ed26ff47dcb63cc0618959e2aa5d6fdd1facec54c0eb66675504b09f0fb7c"} Mar 18 17:53:19.263704 master-0 kubenswrapper[7553]: I0318 17:53:19.263544 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.263461378 podStartE2EDuration="2.263461378s" podCreationTimestamp="2026-03-18 17:53:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:53:19.260594802 +0000 UTC m=+689.406429515" watchObservedRunningTime="2026-03-18 17:53:19.263461378 +0000 UTC m=+689.409296051" Mar 18 17:53:19.760077 master-0 kubenswrapper[7553]: I0318 17:53:19.760015 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk"] Mar 18 17:53:19.761238 master-0 kubenswrapper[7553]: I0318 17:53:19.761206 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" Mar 18 17:53:19.764841 master-0 kubenswrapper[7553]: I0318 17:53:19.763777 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-r9bww" Mar 18 17:53:19.787488 master-0 kubenswrapper[7553]: I0318 17:53:19.787403 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk"] Mar 18 17:53:19.839633 master-0 kubenswrapper[7553]: I0318 17:53:19.839542 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7f76afa-4b23-421c-8451-46323813f06e-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-9c6bk\" (UID: \"e7f76afa-4b23-421c-8451-46323813f06e\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" Mar 18 17:53:19.839633 master-0 kubenswrapper[7553]: I0318 17:53:19.839618 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzhsq\" (UniqueName: \"kubernetes.io/projected/e7f76afa-4b23-421c-8451-46323813f06e-kube-api-access-gzhsq\") pod \"multus-admission-controller-58c9f8fc64-9c6bk\" (UID: \"e7f76afa-4b23-421c-8451-46323813f06e\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" Mar 18 17:53:19.940696 master-0 kubenswrapper[7553]: I0318 17:53:19.940615 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7f76afa-4b23-421c-8451-46323813f06e-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-9c6bk\" (UID: \"e7f76afa-4b23-421c-8451-46323813f06e\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" Mar 18 17:53:19.940696 master-0 kubenswrapper[7553]: I0318 17:53:19.940702 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gzhsq\" (UniqueName: \"kubernetes.io/projected/e7f76afa-4b23-421c-8451-46323813f06e-kube-api-access-gzhsq\") pod \"multus-admission-controller-58c9f8fc64-9c6bk\" (UID: \"e7f76afa-4b23-421c-8451-46323813f06e\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" Mar 18 17:53:19.945298 master-0 kubenswrapper[7553]: I0318 17:53:19.944950 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7f76afa-4b23-421c-8451-46323813f06e-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-9c6bk\" (UID: \"e7f76afa-4b23-421c-8451-46323813f06e\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" Mar 18 17:53:19.967510 master-0 kubenswrapper[7553]: I0318 17:53:19.967457 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzhsq\" (UniqueName: \"kubernetes.io/projected/e7f76afa-4b23-421c-8451-46323813f06e-kube-api-access-gzhsq\") pod \"multus-admission-controller-58c9f8fc64-9c6bk\" (UID: \"e7f76afa-4b23-421c-8451-46323813f06e\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" Mar 18 17:53:20.057010 master-0 kubenswrapper[7553]: I0318 17:53:20.056950 7553 scope.go:117] "RemoveContainer" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:53:20.057312 master-0 kubenswrapper[7553]: E0318 17:53:20.057107 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:53:20.083914 master-0 kubenswrapper[7553]: I0318 
17:53:20.083823 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" Mar 18 17:53:20.102467 master-0 kubenswrapper[7553]: I0318 17:53:20.102188 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:20.102467 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:20.102467 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:20.102467 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:20.102467 master-0 kubenswrapper[7553]: I0318 17:53:20.102308 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:20.495223 master-0 kubenswrapper[7553]: E0318 17:53:20.495152 7553 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 17:53:20.497774 master-0 kubenswrapper[7553]: E0318 17:53:20.497753 7553 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 17:53:20.500333 master-0 kubenswrapper[7553]: E0318 17:53:20.500171 7553 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 17:53:20.500426 master-0 kubenswrapper[7553]: E0318 17:53:20.500326 7553 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" podUID="6c99b809-ecbb-44a2-8dd2-9f4523948f9e" containerName="kube-multus-additional-cni-plugins" Mar 18 17:53:20.521751 master-0 kubenswrapper[7553]: I0318 17:53:20.521700 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk"] Mar 18 17:53:20.531922 master-0 kubenswrapper[7553]: W0318 17:53:20.531844 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7f76afa_4b23_421c_8451_46323813f06e.slice/crio-36a9c5c55aaa067ac7414f9662835335c782889c32307de35102428e52f590c8 WatchSource:0}: Error finding container 36a9c5c55aaa067ac7414f9662835335c782889c32307de35102428e52f590c8: Status 404 returned error can't find the container with id 36a9c5c55aaa067ac7414f9662835335c782889c32307de35102428e52f590c8 Mar 18 17:53:21.102084 master-0 kubenswrapper[7553]: I0318 17:53:21.102003 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:21.102084 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:21.102084 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:21.102084 master-0 kubenswrapper[7553]: healthz check failed Mar 18 
17:53:21.102548 master-0 kubenswrapper[7553]: I0318 17:53:21.102095 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:21.255729 master-0 kubenswrapper[7553]: I0318 17:53:21.255654 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" event={"ID":"e7f76afa-4b23-421c-8451-46323813f06e","Type":"ContainerStarted","Data":"423f8be834783e9c373340d420049700f4c316646579bf4110152d9a2311fd36"} Mar 18 17:53:21.255729 master-0 kubenswrapper[7553]: I0318 17:53:21.255723 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" event={"ID":"e7f76afa-4b23-421c-8451-46323813f06e","Type":"ContainerStarted","Data":"c45ee1f8cc6579b3047b0ff90e6e7a6851994137b4bf768e09f3f3e778c2ab84"} Mar 18 17:53:21.255729 master-0 kubenswrapper[7553]: I0318 17:53:21.255736 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" event={"ID":"e7f76afa-4b23-421c-8451-46323813f06e","Type":"ContainerStarted","Data":"36a9c5c55aaa067ac7414f9662835335c782889c32307de35102428e52f590c8"} Mar 18 17:53:21.278861 master-0 kubenswrapper[7553]: I0318 17:53:21.278664 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" podStartSLOduration=2.278630773 podStartE2EDuration="2.278630773s" podCreationTimestamp="2026-03-18 17:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:53:21.275890541 +0000 UTC m=+691.421725234" watchObservedRunningTime="2026-03-18 17:53:21.278630773 +0000 UTC m=+691.424465436" Mar 18 17:53:21.321972 
master-0 kubenswrapper[7553]: I0318 17:53:21.319153 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc"] Mar 18 17:53:21.321972 master-0 kubenswrapper[7553]: I0318 17:53:21.320690 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" podUID="a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e" containerName="multus-admission-controller" containerID="cri-o://44bf631b967a6a5c4f33c650ce7e77866fd0f758bbaa4aaabffd566bdac21bf2" gracePeriod=30 Mar 18 17:53:21.321972 master-0 kubenswrapper[7553]: I0318 17:53:21.320870 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" podUID="a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e" containerName="kube-rbac-proxy" containerID="cri-o://8d4a4392fb62b19690bdd00e7dd0f4626d2ed6c3f32141c69d0cf8e940849d1f" gracePeriod=30 Mar 18 17:53:22.102914 master-0 kubenswrapper[7553]: I0318 17:53:22.102835 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:22.102914 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:22.102914 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:22.102914 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:22.104014 master-0 kubenswrapper[7553]: I0318 17:53:22.102928 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:22.266218 master-0 kubenswrapper[7553]: I0318 17:53:22.266144 7553 generic.go:334] "Generic (PLEG): container 
finished" podID="a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e" containerID="8d4a4392fb62b19690bdd00e7dd0f4626d2ed6c3f32141c69d0cf8e940849d1f" exitCode=0 Mar 18 17:53:22.266548 master-0 kubenswrapper[7553]: I0318 17:53:22.266237 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" event={"ID":"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e","Type":"ContainerDied","Data":"8d4a4392fb62b19690bdd00e7dd0f4626d2ed6c3f32141c69d0cf8e940849d1f"} Mar 18 17:53:23.103488 master-0 kubenswrapper[7553]: I0318 17:53:23.103394 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:23.103488 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:23.103488 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:23.103488 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:23.104174 master-0 kubenswrapper[7553]: I0318 17:53:23.103541 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:24.102756 master-0 kubenswrapper[7553]: I0318 17:53:24.102672 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:24.102756 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:24.102756 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:24.102756 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:24.103114 master-0 kubenswrapper[7553]: 
I0318 17:53:24.102790 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:25.102869 master-0 kubenswrapper[7553]: I0318 17:53:25.102605 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:25.102869 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:25.102869 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:25.102869 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:25.102869 master-0 kubenswrapper[7553]: I0318 17:53:25.102697 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:26.102297 master-0 kubenswrapper[7553]: I0318 17:53:26.102172 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:26.102297 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:26.102297 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:26.102297 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:26.102655 master-0 kubenswrapper[7553]: I0318 17:53:26.102298 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:27.103182 master-0 kubenswrapper[7553]: I0318 17:53:27.103070 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:27.103182 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:27.103182 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:27.103182 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:27.104560 master-0 kubenswrapper[7553]: I0318 17:53:27.103200 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:28.102575 master-0 kubenswrapper[7553]: I0318 17:53:28.102500 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:28.102575 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:28.102575 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:28.102575 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:28.102854 master-0 kubenswrapper[7553]: I0318 17:53:28.102611 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:29.103477 master-0 kubenswrapper[7553]: I0318 17:53:29.103389 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:29.103477 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:29.103477 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:29.103477 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:29.104556 master-0 kubenswrapper[7553]: I0318 17:53:29.103500 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:30.102048 master-0 kubenswrapper[7553]: I0318 17:53:30.101990 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:30.102048 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:30.102048 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:30.102048 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:30.102508 master-0 kubenswrapper[7553]: I0318 17:53:30.102468 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:30.496056 master-0 kubenswrapper[7553]: E0318 17:53:30.495940 7553 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 17:53:30.498346 master-0 kubenswrapper[7553]: E0318 17:53:30.498254 7553 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 17:53:30.500089 master-0 kubenswrapper[7553]: E0318 17:53:30.500001 7553 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 17:53:30.500255 master-0 kubenswrapper[7553]: E0318 17:53:30.500093 7553 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" podUID="6c99b809-ecbb-44a2-8dd2-9f4523948f9e" containerName="kube-multus-additional-cni-plugins" Mar 18 17:53:31.102051 master-0 kubenswrapper[7553]: I0318 17:53:31.101955 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:31.102051 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:31.102051 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:31.102051 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:31.102535 master-0 kubenswrapper[7553]: I0318 17:53:31.102100 7553 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:32.053908 master-0 kubenswrapper[7553]: I0318 17:53:32.053822 7553 scope.go:117] "RemoveContainer" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:53:32.054704 master-0 kubenswrapper[7553]: E0318 17:53:32.054120 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:53:32.102674 master-0 kubenswrapper[7553]: I0318 17:53:32.102595 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:32.102674 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:32.102674 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:32.102674 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:32.102674 master-0 kubenswrapper[7553]: I0318 17:53:32.102685 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:33.102537 master-0 kubenswrapper[7553]: I0318 17:53:33.102426 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:33.102537 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:33.102537 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:33.102537 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:33.103732 master-0 kubenswrapper[7553]: I0318 17:53:33.102569 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:34.102598 master-0 kubenswrapper[7553]: I0318 17:53:34.102495 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:53:34.102598 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:53:34.102598 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:53:34.102598 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:53:34.103786 master-0 kubenswrapper[7553]: I0318 17:53:34.102618 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:53:34.103786 master-0 kubenswrapper[7553]: I0318 17:53:34.102706 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:53:34.103786 master-0 kubenswrapper[7553]: I0318 17:53:34.103711 7553 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="router" containerStatusID={"Type":"cri-o","ID":"3be88236d1075355721a3a53c0d6a8b5bc0a4bd441e11b9ae0dd32cd30599a9f"} pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" containerMessage="Container router failed startup probe, will be restarted" Mar 18 17:53:34.103786 master-0 kubenswrapper[7553]: I0318 17:53:34.103771 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" containerID="cri-o://3be88236d1075355721a3a53c0d6a8b5bc0a4bd441e11b9ae0dd32cd30599a9f" gracePeriod=3600 Mar 18 17:53:38.237251 master-0 kubenswrapper[7553]: I0318 17:53:38.237172 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 17:53:38.238345 master-0 kubenswrapper[7553]: I0318 17:53:38.238249 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 17:53:38.242658 master-0 kubenswrapper[7553]: I0318 17:53:38.242505 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 17:53:38.242996 master-0 kubenswrapper[7553]: I0318 17:53:38.242928 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-kzvvj" Mar 18 17:53:38.260010 master-0 kubenswrapper[7553]: I0318 17:53:38.259946 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 17:53:38.373030 master-0 kubenswrapper[7553]: I0318 17:53:38.372931 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/da246674-9ad1-4732-9a9e-d86d18fb0c55-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"da246674-9ad1-4732-9a9e-d86d18fb0c55\") " 
pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 17:53:38.373030 master-0 kubenswrapper[7553]: I0318 17:53:38.373020 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da246674-9ad1-4732-9a9e-d86d18fb0c55-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"da246674-9ad1-4732-9a9e-d86d18fb0c55\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 17:53:38.373765 master-0 kubenswrapper[7553]: I0318 17:53:38.373222 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da246674-9ad1-4732-9a9e-d86d18fb0c55-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"da246674-9ad1-4732-9a9e-d86d18fb0c55\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 17:53:38.475172 master-0 kubenswrapper[7553]: I0318 17:53:38.475054 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da246674-9ad1-4732-9a9e-d86d18fb0c55-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"da246674-9ad1-4732-9a9e-d86d18fb0c55\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 17:53:38.475533 master-0 kubenswrapper[7553]: I0318 17:53:38.475253 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/da246674-9ad1-4732-9a9e-d86d18fb0c55-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"da246674-9ad1-4732-9a9e-d86d18fb0c55\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 17:53:38.475533 master-0 kubenswrapper[7553]: I0318 17:53:38.475342 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/da246674-9ad1-4732-9a9e-d86d18fb0c55-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"da246674-9ad1-4732-9a9e-d86d18fb0c55\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 17:53:38.475760 master-0 kubenswrapper[7553]: I0318 17:53:38.475556 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da246674-9ad1-4732-9a9e-d86d18fb0c55-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"da246674-9ad1-4732-9a9e-d86d18fb0c55\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 17:53:38.476102 master-0 kubenswrapper[7553]: I0318 17:53:38.476038 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/da246674-9ad1-4732-9a9e-d86d18fb0c55-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"da246674-9ad1-4732-9a9e-d86d18fb0c55\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 17:53:38.506762 master-0 kubenswrapper[7553]: I0318 17:53:38.506594 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da246674-9ad1-4732-9a9e-d86d18fb0c55-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"da246674-9ad1-4732-9a9e-d86d18fb0c55\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 17:53:38.573309 master-0 kubenswrapper[7553]: I0318 17:53:38.573217 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 17:53:39.035510 master-0 kubenswrapper[7553]: I0318 17:53:39.035451 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 17:53:39.385762 master-0 kubenswrapper[7553]: I0318 17:53:39.385690 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"da246674-9ad1-4732-9a9e-d86d18fb0c55","Type":"ContainerStarted","Data":"fba66f2362f417736e585bd1e5c757b3e12cdb7f292f9ad5781307faed635e6f"} Mar 18 17:53:39.385762 master-0 kubenswrapper[7553]: I0318 17:53:39.385746 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"da246674-9ad1-4732-9a9e-d86d18fb0c55","Type":"ContainerStarted","Data":"afa0a71d3872d19b913c3ebbc34f43353efcfea37e9fa645a1364cfa53c28503"} Mar 18 17:53:39.416628 master-0 kubenswrapper[7553]: I0318 17:53:39.416502 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=1.416470892 podStartE2EDuration="1.416470892s" podCreationTimestamp="2026-03-18 17:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:53:39.409720773 +0000 UTC m=+709.555555446" watchObservedRunningTime="2026-03-18 17:53:39.416470892 +0000 UTC m=+709.562305575" Mar 18 17:53:40.494673 master-0 kubenswrapper[7553]: E0318 17:53:40.494590 7553 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 17:53:40.495980 master-0 
kubenswrapper[7553]: E0318 17:53:40.495713 7553 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 17:53:40.496915 master-0 kubenswrapper[7553]: E0318 17:53:40.496865 7553 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 17:53:40.496915 master-0 kubenswrapper[7553]: E0318 17:53:40.496908 7553 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" podUID="6c99b809-ecbb-44a2-8dd2-9f4523948f9e" containerName="kube-multus-additional-cni-plugins" Mar 18 17:53:42.241188 master-0 kubenswrapper[7553]: I0318 17:53:42.241083 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 17:53:42.242055 master-0 kubenswrapper[7553]: I0318 17:53:42.241502 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="da246674-9ad1-4732-9a9e-d86d18fb0c55" containerName="installer" containerID="cri-o://fba66f2362f417736e585bd1e5c757b3e12cdb7f292f9ad5781307faed635e6f" gracePeriod=30 Mar 18 17:53:43.053685 master-0 kubenswrapper[7553]: I0318 17:53:43.053629 7553 scope.go:117] "RemoveContainer" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:53:43.053957 master-0 
kubenswrapper[7553]: E0318 17:53:43.053848 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:53:44.366338 master-0 kubenswrapper[7553]: I0318 17:53:44.366213 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-vcrq9_6c99b809-ecbb-44a2-8dd2-9f4523948f9e/kube-multus-additional-cni-plugins/0.log" Mar 18 17:53:44.367346 master-0 kubenswrapper[7553]: I0318 17:53:44.366391 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:44.429456 master-0 kubenswrapper[7553]: I0318 17:53:44.428857 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-vcrq9_6c99b809-ecbb-44a2-8dd2-9f4523948f9e/kube-multus-additional-cni-plugins/0.log" Mar 18 17:53:44.429456 master-0 kubenswrapper[7553]: I0318 17:53:44.428943 7553 generic.go:334] "Generic (PLEG): container finished" podID="6c99b809-ecbb-44a2-8dd2-9f4523948f9e" containerID="1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7" exitCode=137 Mar 18 17:53:44.429456 master-0 kubenswrapper[7553]: I0318 17:53:44.428993 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" event={"ID":"6c99b809-ecbb-44a2-8dd2-9f4523948f9e","Type":"ContainerDied","Data":"1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7"} Mar 18 17:53:44.429456 master-0 kubenswrapper[7553]: I0318 17:53:44.429032 7553 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" Mar 18 17:53:44.429456 master-0 kubenswrapper[7553]: I0318 17:53:44.429062 7553 scope.go:117] "RemoveContainer" containerID="1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7" Mar 18 17:53:44.429456 master-0 kubenswrapper[7553]: I0318 17:53:44.429042 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-vcrq9" event={"ID":"6c99b809-ecbb-44a2-8dd2-9f4523948f9e","Type":"ContainerDied","Data":"47c964a48b33217f7685b63a417a1aa1b96e75afdcfa50c8510c71012596727c"} Mar 18 17:53:44.450874 master-0 kubenswrapper[7553]: I0318 17:53:44.450826 7553 scope.go:117] "RemoveContainer" containerID="1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7" Mar 18 17:53:44.451433 master-0 kubenswrapper[7553]: E0318 17:53:44.451369 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7\": container with ID starting with 1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7 not found: ID does not exist" containerID="1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7" Mar 18 17:53:44.451433 master-0 kubenswrapper[7553]: I0318 17:53:44.451408 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7"} err="failed to get container status \"1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7\": rpc error: code = NotFound desc = could not find container \"1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7\": container with ID starting with 1d0d4ddba13dbb8f658a12a861f77aa4d43ed6407e03647febb3e13df6464fd7 not found: ID does not exist" Mar 18 17:53:44.483591 master-0 kubenswrapper[7553]: I0318 17:53:44.483440 7553 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-ready\") pod \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " Mar 18 17:53:44.483591 master-0 kubenswrapper[7553]: I0318 17:53:44.483496 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pxfp\" (UniqueName: \"kubernetes.io/projected/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-kube-api-access-6pxfp\") pod \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " Mar 18 17:53:44.483591 master-0 kubenswrapper[7553]: I0318 17:53:44.483557 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-tuning-conf-dir\") pod \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " Mar 18 17:53:44.484002 master-0 kubenswrapper[7553]: I0318 17:53:44.483616 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-cni-sysctl-allowlist\") pod \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\" (UID: \"6c99b809-ecbb-44a2-8dd2-9f4523948f9e\") " Mar 18 17:53:44.484108 master-0 kubenswrapper[7553]: I0318 17:53:44.483859 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "6c99b809-ecbb-44a2-8dd2-9f4523948f9e" (UID: "6c99b809-ecbb-44a2-8dd2-9f4523948f9e"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:53:44.484182 master-0 kubenswrapper[7553]: I0318 17:53:44.484025 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-ready" (OuterVolumeSpecName: "ready") pod "6c99b809-ecbb-44a2-8dd2-9f4523948f9e" (UID: "6c99b809-ecbb-44a2-8dd2-9f4523948f9e"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 17:53:44.484360 master-0 kubenswrapper[7553]: I0318 17:53:44.484317 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "6c99b809-ecbb-44a2-8dd2-9f4523948f9e" (UID: "6c99b809-ecbb-44a2-8dd2-9f4523948f9e"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 17:53:44.484649 master-0 kubenswrapper[7553]: I0318 17:53:44.484602 7553 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 18 17:53:44.484649 master-0 kubenswrapper[7553]: I0318 17:53:44.484629 7553 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-ready\") on node \"master-0\" DevicePath \"\"" Mar 18 17:53:44.484649 master-0 kubenswrapper[7553]: I0318 17:53:44.484646 7553 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:53:44.488639 master-0 kubenswrapper[7553]: I0318 17:53:44.488575 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-kube-api-access-6pxfp" (OuterVolumeSpecName: "kube-api-access-6pxfp") pod "6c99b809-ecbb-44a2-8dd2-9f4523948f9e" (UID: "6c99b809-ecbb-44a2-8dd2-9f4523948f9e"). InnerVolumeSpecName "kube-api-access-6pxfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:53:44.585906 master-0 kubenswrapper[7553]: I0318 17:53:44.585818 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pxfp\" (UniqueName: \"kubernetes.io/projected/6c99b809-ecbb-44a2-8dd2-9f4523948f9e-kube-api-access-6pxfp\") on node \"master-0\" DevicePath \"\"" Mar 18 17:53:44.771541 master-0 kubenswrapper[7553]: I0318 17:53:44.771470 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vcrq9"] Mar 18 17:53:44.777679 master-0 kubenswrapper[7553]: I0318 17:53:44.777626 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-vcrq9"] Mar 18 17:53:45.439002 master-0 kubenswrapper[7553]: I0318 17:53:45.438918 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 17:53:45.439833 master-0 kubenswrapper[7553]: E0318 17:53:45.439434 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c99b809-ecbb-44a2-8dd2-9f4523948f9e" containerName="kube-multus-additional-cni-plugins" Mar 18 17:53:45.439833 master-0 kubenswrapper[7553]: I0318 17:53:45.439473 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c99b809-ecbb-44a2-8dd2-9f4523948f9e" containerName="kube-multus-additional-cni-plugins" Mar 18 17:53:45.439833 master-0 kubenswrapper[7553]: I0318 17:53:45.439721 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c99b809-ecbb-44a2-8dd2-9f4523948f9e" containerName="kube-multus-additional-cni-plugins" Mar 18 17:53:45.440582 master-0 kubenswrapper[7553]: I0318 17:53:45.440535 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 17:53:45.460810 master-0 kubenswrapper[7553]: I0318 17:53:45.460747 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 17:53:45.500213 master-0 kubenswrapper[7553]: I0318 17:53:45.500082 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9655d59-a594-499f-b474-dfc870239174-var-lock\") pod \"installer-2-master-0\" (UID: \"c9655d59-a594-499f-b474-dfc870239174\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 17:53:45.500536 master-0 kubenswrapper[7553]: I0318 17:53:45.500228 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9655d59-a594-499f-b474-dfc870239174-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"c9655d59-a594-499f-b474-dfc870239174\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 17:53:45.500536 master-0 kubenswrapper[7553]: I0318 17:53:45.500344 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9655d59-a594-499f-b474-dfc870239174-kube-api-access\") pod \"installer-2-master-0\" (UID: \"c9655d59-a594-499f-b474-dfc870239174\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 17:53:45.601541 master-0 kubenswrapper[7553]: I0318 17:53:45.601454 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9655d59-a594-499f-b474-dfc870239174-var-lock\") pod \"installer-2-master-0\" (UID: \"c9655d59-a594-499f-b474-dfc870239174\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 17:53:45.601831 master-0 kubenswrapper[7553]: I0318 17:53:45.601634 7553 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9655d59-a594-499f-b474-dfc870239174-var-lock\") pod \"installer-2-master-0\" (UID: \"c9655d59-a594-499f-b474-dfc870239174\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 17:53:45.601831 master-0 kubenswrapper[7553]: I0318 17:53:45.601742 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9655d59-a594-499f-b474-dfc870239174-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"c9655d59-a594-499f-b474-dfc870239174\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 17:53:45.601963 master-0 kubenswrapper[7553]: I0318 17:53:45.601893 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9655d59-a594-499f-b474-dfc870239174-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"c9655d59-a594-499f-b474-dfc870239174\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 17:53:45.601963 master-0 kubenswrapper[7553]: I0318 17:53:45.601944 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9655d59-a594-499f-b474-dfc870239174-kube-api-access\") pod \"installer-2-master-0\" (UID: \"c9655d59-a594-499f-b474-dfc870239174\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 17:53:45.631943 master-0 kubenswrapper[7553]: I0318 17:53:45.631829 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9655d59-a594-499f-b474-dfc870239174-kube-api-access\") pod \"installer-2-master-0\" (UID: \"c9655d59-a594-499f-b474-dfc870239174\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 17:53:45.777674 master-0 kubenswrapper[7553]: I0318 17:53:45.777494 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 17:53:46.082048 master-0 kubenswrapper[7553]: I0318 17:53:46.081957 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c99b809-ecbb-44a2-8dd2-9f4523948f9e" path="/var/lib/kubelet/pods/6c99b809-ecbb-44a2-8dd2-9f4523948f9e/volumes" Mar 18 17:53:46.269062 master-0 kubenswrapper[7553]: I0318 17:53:46.268970 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 17:53:46.271887 master-0 kubenswrapper[7553]: W0318 17:53:46.271835 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc9655d59_a594_499f_b474_dfc870239174.slice/crio-202e717017ea47879d90c4603f14b936f4bf42a19ba2cb4cf9411280f3913d38 WatchSource:0}: Error finding container 202e717017ea47879d90c4603f14b936f4bf42a19ba2cb4cf9411280f3913d38: Status 404 returned error can't find the container with id 202e717017ea47879d90c4603f14b936f4bf42a19ba2cb4cf9411280f3913d38 Mar 18 17:53:46.452324 master-0 kubenswrapper[7553]: I0318 17:53:46.452158 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"c9655d59-a594-499f-b474-dfc870239174","Type":"ContainerStarted","Data":"202e717017ea47879d90c4603f14b936f4bf42a19ba2cb4cf9411280f3913d38"} Mar 18 17:53:47.459576 master-0 kubenswrapper[7553]: I0318 17:53:47.459513 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"c9655d59-a594-499f-b474-dfc870239174","Type":"ContainerStarted","Data":"88c92e9d0661b28d9a41bcdec55c597d6015bf273bee5facfd2419530f4f2c64"} Mar 18 17:53:47.479054 master-0 kubenswrapper[7553]: I0318 17:53:47.477354 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.477328703 podStartE2EDuration="2.477328703s" 
podCreationTimestamp="2026-03-18 17:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:53:47.476184553 +0000 UTC m=+717.622019246" watchObservedRunningTime="2026-03-18 17:53:47.477328703 +0000 UTC m=+717.623163396" Mar 18 17:53:50.139379 master-0 kubenswrapper[7553]: I0318 17:53:50.139244 7553 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 18 17:53:50.140697 master-0 kubenswrapper[7553]: I0318 17:53:50.140256 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" containerID="cri-o://209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0" gracePeriod=30 Mar 18 17:53:50.140697 master-0 kubenswrapper[7553]: I0318 17:53:50.140355 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" containerID="cri-o://368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c" gracePeriod=30 Mar 18 17:53:50.140697 master-0 kubenswrapper[7553]: I0318 17:53:50.140211 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" containerID="cri-o://4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e" gracePeriod=30 Mar 18 17:53:50.140697 master-0 kubenswrapper[7553]: I0318 17:53:50.140517 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" containerID="cri-o://da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa" gracePeriod=30 Mar 18 17:53:50.140697 master-0 kubenswrapper[7553]: I0318 17:53:50.140260 7553 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" containerID="cri-o://4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173" gracePeriod=30 Mar 18 17:53:50.141836 master-0 kubenswrapper[7553]: I0318 17:53:50.141766 7553 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 18 17:53:50.142301 master-0 kubenswrapper[7553]: E0318 17:53:50.142243 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 17:53:50.142361 master-0 kubenswrapper[7553]: I0318 17:53:50.142304 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 17:53:50.142361 master-0 kubenswrapper[7553]: E0318 17:53:50.142335 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 17:53:50.142361 master-0 kubenswrapper[7553]: I0318 17:53:50.142349 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 17:53:50.142475 master-0 kubenswrapper[7553]: E0318 17:53:50.142374 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 18 17:53:50.142475 master-0 kubenswrapper[7553]: I0318 17:53:50.142389 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 18 17:53:50.142475 master-0 kubenswrapper[7553]: E0318 17:53:50.142414 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 17:53:50.142475 master-0 kubenswrapper[7553]: I0318 17:53:50.142427 7553 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 17:53:50.142475 master-0 kubenswrapper[7553]: E0318 17:53:50.142462 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 17:53:50.142475 master-0 kubenswrapper[7553]: I0318 17:53:50.142475 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 17:53:50.142772 master-0 kubenswrapper[7553]: E0318 17:53:50.142496 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 17:53:50.142772 master-0 kubenswrapper[7553]: I0318 17:53:50.142509 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 17:53:50.142772 master-0 kubenswrapper[7553]: E0318 17:53:50.142532 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 18 17:53:50.142772 master-0 kubenswrapper[7553]: I0318 17:53:50.142545 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 18 17:53:50.142772 master-0 kubenswrapper[7553]: E0318 17:53:50.142560 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 18 17:53:50.142772 master-0 kubenswrapper[7553]: I0318 17:53:50.142573 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 18 17:53:50.142772 master-0 kubenswrapper[7553]: I0318 17:53:50.142778 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 17:53:50.143199 master-0 kubenswrapper[7553]: I0318 17:53:50.142797 
7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 17:53:50.143199 master-0 kubenswrapper[7553]: I0318 17:53:50.142817 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 17:53:50.143199 master-0 kubenswrapper[7553]: I0318 17:53:50.142845 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 17:53:50.143199 master-0 kubenswrapper[7553]: I0318 17:53:50.143062 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 17:53:50.282374 master-0 kubenswrapper[7553]: I0318 17:53:50.282267 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.282374 master-0 kubenswrapper[7553]: I0318 17:53:50.282346 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.282725 master-0 kubenswrapper[7553]: I0318 17:53:50.282435 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.282725 master-0 kubenswrapper[7553]: I0318 17:53:50.282674 7553 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.282941 master-0 kubenswrapper[7553]: I0318 17:53:50.282730 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.282941 master-0 kubenswrapper[7553]: I0318 17:53:50.282904 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.362154 master-0 kubenswrapper[7553]: I0318 17:53:50.362067 7553 scope.go:117] "RemoveContainer" containerID="79efc2b057b97c5f729a0b5a1fa1420cba96b5301fd6279190cf494ebd7bf5f8" Mar 18 17:53:50.385427 master-0 kubenswrapper[7553]: I0318 17:53:50.385327 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.385427 master-0 kubenswrapper[7553]: I0318 17:53:50.385411 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.385866 master-0 
kubenswrapper[7553]: I0318 17:53:50.385529 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.385866 master-0 kubenswrapper[7553]: I0318 17:53:50.385641 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.385866 master-0 kubenswrapper[7553]: I0318 17:53:50.385672 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.385866 master-0 kubenswrapper[7553]: I0318 17:53:50.385809 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.386143 master-0 kubenswrapper[7553]: I0318 17:53:50.386023 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.386805 master-0 kubenswrapper[7553]: I0318 17:53:50.386638 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.386805 master-0 kubenswrapper[7553]: I0318 17:53:50.386720 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.386805 master-0 kubenswrapper[7553]: I0318 17:53:50.386757 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.386805 master-0 kubenswrapper[7553]: I0318 17:53:50.386804 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.387149 master-0 kubenswrapper[7553]: I0318 17:53:50.387109 7553 scope.go:117] "RemoveContainer" containerID="21f65b83dcd474e201c2e5f73d8624edd7acb25dd6db2218299da95d8111811c" Mar 18 17:53:50.387472 master-0 kubenswrapper[7553]: I0318 17:53:50.386813 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 17:53:50.489662 master-0 kubenswrapper[7553]: I0318 17:53:50.489585 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 
17:53:50.491864 master-0 kubenswrapper[7553]: I0318 17:53:50.491795 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 17:53:50.496464 master-0 kubenswrapper[7553]: I0318 17:53:50.496378 7553 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa" exitCode=2 Mar 18 17:53:50.496464 master-0 kubenswrapper[7553]: I0318 17:53:50.496448 7553 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0" exitCode=0 Mar 18 17:53:50.496464 master-0 kubenswrapper[7553]: I0318 17:53:50.496465 7553 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173" exitCode=2 Mar 18 17:53:51.512012 master-0 kubenswrapper[7553]: I0318 17:53:51.511916 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-gr8jc_a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e/multus-admission-controller/0.log" Mar 18 17:53:51.512953 master-0 kubenswrapper[7553]: I0318 17:53:51.512011 7553 generic.go:334] "Generic (PLEG): container finished" podID="a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e" containerID="44bf631b967a6a5c4f33c650ce7e77866fd0f758bbaa4aaabffd566bdac21bf2" exitCode=137 Mar 18 17:53:51.512953 master-0 kubenswrapper[7553]: I0318 17:53:51.512089 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" event={"ID":"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e","Type":"ContainerDied","Data":"44bf631b967a6a5c4f33c650ce7e77866fd0f758bbaa4aaabffd566bdac21bf2"} Mar 18 17:53:51.774940 master-0 kubenswrapper[7553]: I0318 17:53:51.774876 7553 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-gr8jc_a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e/multus-admission-controller/0.log" Mar 18 17:53:51.775338 master-0 kubenswrapper[7553]: I0318 17:53:51.774970 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:53:51.910418 master-0 kubenswrapper[7553]: I0318 17:53:51.910333 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") pod \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " Mar 18 17:53:51.910787 master-0 kubenswrapper[7553]: I0318 17:53:51.910611 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlcnh\" (UniqueName: \"kubernetes.io/projected/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-kube-api-access-dlcnh\") pod \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\" (UID: \"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e\") " Mar 18 17:53:52.280878 master-0 kubenswrapper[7553]: I0318 17:53:52.280780 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e" (UID: "a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 17:53:52.282634 master-0 kubenswrapper[7553]: I0318 17:53:52.282532 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-kube-api-access-dlcnh" (OuterVolumeSpecName: "kube-api-access-dlcnh") pod "a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e" (UID: "a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e"). 
InnerVolumeSpecName "kube-api-access-dlcnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:53:52.319948 master-0 kubenswrapper[7553]: I0318 17:53:52.319877 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlcnh\" (UniqueName: \"kubernetes.io/projected/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-kube-api-access-dlcnh\") on node \"master-0\" DevicePath \"\"" Mar 18 17:53:52.319948 master-0 kubenswrapper[7553]: I0318 17:53:52.319930 7553 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e-webhook-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 17:53:52.524452 master-0 kubenswrapper[7553]: I0318 17:53:52.524409 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-gr8jc_a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e/multus-admission-controller/0.log" Mar 18 17:53:52.524981 master-0 kubenswrapper[7553]: I0318 17:53:52.524476 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" event={"ID":"a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e","Type":"ContainerDied","Data":"a1b64f60bcb1d57a34f6bca29856ee1a6dadd3b9493681f5dd98bb90b3066e3b"} Mar 18 17:53:52.524981 master-0 kubenswrapper[7553]: I0318 17:53:52.524523 7553 scope.go:117] "RemoveContainer" containerID="8d4a4392fb62b19690bdd00e7dd0f4626d2ed6c3f32141c69d0cf8e940849d1f" Mar 18 17:53:52.524981 master-0 kubenswrapper[7553]: I0318 17:53:52.524650 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" Mar 18 17:53:52.546336 master-0 kubenswrapper[7553]: I0318 17:53:52.546254 7553 scope.go:117] "RemoveContainer" containerID="44bf631b967a6a5c4f33c650ce7e77866fd0f758bbaa4aaabffd566bdac21bf2" Mar 18 17:53:54.053544 master-0 kubenswrapper[7553]: I0318 17:53:54.053451 7553 scope.go:117] "RemoveContainer" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:53:54.054438 master-0 kubenswrapper[7553]: E0318 17:53:54.053740 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:54:04.625627 master-0 kubenswrapper[7553]: I0318 17:54:04.625531 7553 generic.go:334] "Generic (PLEG): container finished" podID="cd9d8bd7-68a0-458f-9d25-f600932e303c" containerID="c609c2b3b4935f3bff5c215911aef6aecfcc54b41e1023b5431ec59542ec2f9d" exitCode=0 Mar 18 17:54:04.625627 master-0 kubenswrapper[7553]: I0318 17:54:04.625607 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"cd9d8bd7-68a0-458f-9d25-f600932e303c","Type":"ContainerDied","Data":"c609c2b3b4935f3bff5c215911aef6aecfcc54b41e1023b5431ec59542ec2f9d"} Mar 18 17:54:04.628806 master-0 kubenswrapper[7553]: I0318 17:54:04.628757 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log" Mar 18 17:54:04.628806 master-0 kubenswrapper[7553]: I0318 17:54:04.628790 7553 
generic.go:334] "Generic (PLEG): container finished" podID="3b3363934623637fdc1d37ff8b16880a" containerID="6007004024fecf1344918d5eba36f91c4644591c32375ce8f9e07fc9beb46c69" exitCode=1 Mar 18 17:54:04.628806 master-0 kubenswrapper[7553]: I0318 17:54:04.628807 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerDied","Data":"6007004024fecf1344918d5eba36f91c4644591c32375ce8f9e07fc9beb46c69"} Mar 18 17:54:04.629257 master-0 kubenswrapper[7553]: I0318 17:54:04.629217 7553 scope.go:117] "RemoveContainer" containerID="6007004024fecf1344918d5eba36f91c4644591c32375ce8f9e07fc9beb46c69" Mar 18 17:54:04.714350 master-0 kubenswrapper[7553]: I0318 17:54:04.714293 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:54:04.714350 master-0 kubenswrapper[7553]: I0318 17:54:04.714359 7553 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:54:04.714811 master-0 kubenswrapper[7553]: I0318 17:54:04.714379 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:54:05.641830 master-0 kubenswrapper[7553]: I0318 17:54:05.641761 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log" Mar 18 17:54:05.643085 master-0 kubenswrapper[7553]: I0318 17:54:05.643032 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"522b734ad03d049a879cfa7a8145e3b81a8d9061164b95712992e2f7f7b61d1d"} Mar 18 17:54:05.949192 master-0 kubenswrapper[7553]: I0318 17:54:05.949139 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 17:54:05.963793 master-0 kubenswrapper[7553]: I0318 17:54:05.963702 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd9d8bd7-68a0-458f-9d25-f600932e303c-kube-api-access\") pod \"cd9d8bd7-68a0-458f-9d25-f600932e303c\" (UID: \"cd9d8bd7-68a0-458f-9d25-f600932e303c\") " Mar 18 17:54:05.964331 master-0 kubenswrapper[7553]: I0318 17:54:05.963838 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd9d8bd7-68a0-458f-9d25-f600932e303c-var-lock\") pod \"cd9d8bd7-68a0-458f-9d25-f600932e303c\" (UID: \"cd9d8bd7-68a0-458f-9d25-f600932e303c\") " Mar 18 17:54:05.964331 master-0 kubenswrapper[7553]: I0318 17:54:05.963988 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd9d8bd7-68a0-458f-9d25-f600932e303c-kubelet-dir\") pod \"cd9d8bd7-68a0-458f-9d25-f600932e303c\" (UID: \"cd9d8bd7-68a0-458f-9d25-f600932e303c\") " Mar 18 17:54:05.964331 master-0 kubenswrapper[7553]: I0318 17:54:05.964158 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9d8bd7-68a0-458f-9d25-f600932e303c-var-lock" (OuterVolumeSpecName: "var-lock") pod "cd9d8bd7-68a0-458f-9d25-f600932e303c" (UID: "cd9d8bd7-68a0-458f-9d25-f600932e303c"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:54:05.964456 master-0 kubenswrapper[7553]: I0318 17:54:05.964319 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd9d8bd7-68a0-458f-9d25-f600932e303c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cd9d8bd7-68a0-458f-9d25-f600932e303c" (UID: "cd9d8bd7-68a0-458f-9d25-f600932e303c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:54:05.965870 master-0 kubenswrapper[7553]: I0318 17:54:05.965818 7553 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd9d8bd7-68a0-458f-9d25-f600932e303c-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 17:54:05.965928 master-0 kubenswrapper[7553]: I0318 17:54:05.965867 7553 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd9d8bd7-68a0-458f-9d25-f600932e303c-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:54:05.966854 master-0 kubenswrapper[7553]: I0318 17:54:05.966781 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9d8bd7-68a0-458f-9d25-f600932e303c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cd9d8bd7-68a0-458f-9d25-f600932e303c" (UID: "cd9d8bd7-68a0-458f-9d25-f600932e303c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:54:06.067210 master-0 kubenswrapper[7553]: I0318 17:54:06.067141 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd9d8bd7-68a0-458f-9d25-f600932e303c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 17:54:06.650661 master-0 kubenswrapper[7553]: I0318 17:54:06.650587 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 17:54:06.651900 master-0 kubenswrapper[7553]: I0318 17:54:06.650554 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"cd9d8bd7-68a0-458f-9d25-f600932e303c","Type":"ContainerDied","Data":"bc6ed26ff47dcb63cc0618959e2aa5d6fdd1facec54c0eb66675504b09f0fb7c"} Mar 18 17:54:06.651900 master-0 kubenswrapper[7553]: I0318 17:54:06.650805 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc6ed26ff47dcb63cc0618959e2aa5d6fdd1facec54c0eb66675504b09f0fb7c" Mar 18 17:54:06.653827 master-0 kubenswrapper[7553]: I0318 17:54:06.653783 7553 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="39e81d7022f76aa50f44926362dbcc435bd580e0e562220512ebed69c23461e5" exitCode=1 Mar 18 17:54:06.654091 master-0 kubenswrapper[7553]: I0318 17:54:06.654029 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerDied","Data":"39e81d7022f76aa50f44926362dbcc435bd580e0e562220512ebed69c23461e5"} Mar 18 17:54:06.654761 master-0 kubenswrapper[7553]: I0318 17:54:06.654705 7553 scope.go:117] "RemoveContainer" containerID="774c63dac090e52a2318d2a44e73b16fc328b4dc2d265dcfd10522ed7532c288" Mar 18 17:54:06.654912 master-0 kubenswrapper[7553]: I0318 17:54:06.654874 7553 scope.go:117] "RemoveContainer" containerID="39e81d7022f76aa50f44926362dbcc435bd580e0e562220512ebed69c23461e5" Mar 18 17:54:07.053453 master-0 kubenswrapper[7553]: I0318 17:54:07.053369 7553 scope.go:117] "RemoveContainer" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:54:07.053748 master-0 kubenswrapper[7553]: E0318 17:54:07.053660 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: 
\"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:54:07.664465 master-0 kubenswrapper[7553]: I0318 17:54:07.664389 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"c0003daaaf5a355b3cb392bb03905611a5e11defed3a5bf40942d6e99ba55bcb"} Mar 18 17:54:09.364589 master-0 kubenswrapper[7553]: E0318 17:54:09.364479 7553 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:54:10.684458 master-0 kubenswrapper[7553]: I0318 17:54:10.684409 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_da246674-9ad1-4732-9a9e-d86d18fb0c55/installer/0.log" Mar 18 17:54:10.684904 master-0 kubenswrapper[7553]: I0318 17:54:10.684476 7553 generic.go:334] "Generic (PLEG): container finished" podID="da246674-9ad1-4732-9a9e-d86d18fb0c55" containerID="fba66f2362f417736e585bd1e5c757b3e12cdb7f292f9ad5781307faed635e6f" exitCode=1 Mar 18 17:54:10.684904 master-0 kubenswrapper[7553]: I0318 17:54:10.684520 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"da246674-9ad1-4732-9a9e-d86d18fb0c55","Type":"ContainerDied","Data":"fba66f2362f417736e585bd1e5c757b3e12cdb7f292f9ad5781307faed635e6f"} Mar 18 17:54:10.684904 master-0 kubenswrapper[7553]: I0318 17:54:10.684554 7553 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"da246674-9ad1-4732-9a9e-d86d18fb0c55","Type":"ContainerDied","Data":"afa0a71d3872d19b913c3ebbc34f43353efcfea37e9fa645a1364cfa53c28503"} Mar 18 17:54:10.684904 master-0 kubenswrapper[7553]: I0318 17:54:10.684569 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afa0a71d3872d19b913c3ebbc34f43353efcfea37e9fa645a1364cfa53c28503" Mar 18 17:54:10.696415 master-0 kubenswrapper[7553]: I0318 17:54:10.696340 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_da246674-9ad1-4732-9a9e-d86d18fb0c55/installer/0.log" Mar 18 17:54:10.696495 master-0 kubenswrapper[7553]: I0318 17:54:10.696471 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 17:54:10.750143 master-0 kubenswrapper[7553]: I0318 17:54:10.749953 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da246674-9ad1-4732-9a9e-d86d18fb0c55-kubelet-dir\") pod \"da246674-9ad1-4732-9a9e-d86d18fb0c55\" (UID: \"da246674-9ad1-4732-9a9e-d86d18fb0c55\") " Mar 18 17:54:10.750143 master-0 kubenswrapper[7553]: I0318 17:54:10.750105 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da246674-9ad1-4732-9a9e-d86d18fb0c55-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "da246674-9ad1-4732-9a9e-d86d18fb0c55" (UID: "da246674-9ad1-4732-9a9e-d86d18fb0c55"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:54:10.750545 master-0 kubenswrapper[7553]: I0318 17:54:10.750195 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da246674-9ad1-4732-9a9e-d86d18fb0c55-kube-api-access\") pod \"da246674-9ad1-4732-9a9e-d86d18fb0c55\" (UID: \"da246674-9ad1-4732-9a9e-d86d18fb0c55\") " Mar 18 17:54:10.750545 master-0 kubenswrapper[7553]: I0318 17:54:10.750299 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/da246674-9ad1-4732-9a9e-d86d18fb0c55-var-lock\") pod \"da246674-9ad1-4732-9a9e-d86d18fb0c55\" (UID: \"da246674-9ad1-4732-9a9e-d86d18fb0c55\") " Mar 18 17:54:10.750545 master-0 kubenswrapper[7553]: I0318 17:54:10.750532 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da246674-9ad1-4732-9a9e-d86d18fb0c55-var-lock" (OuterVolumeSpecName: "var-lock") pod "da246674-9ad1-4732-9a9e-d86d18fb0c55" (UID: "da246674-9ad1-4732-9a9e-d86d18fb0c55"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:54:10.752070 master-0 kubenswrapper[7553]: I0318 17:54:10.751009 7553 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/da246674-9ad1-4732-9a9e-d86d18fb0c55-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:54:10.752070 master-0 kubenswrapper[7553]: I0318 17:54:10.751828 7553 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/da246674-9ad1-4732-9a9e-d86d18fb0c55-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 17:54:10.754608 master-0 kubenswrapper[7553]: I0318 17:54:10.754557 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da246674-9ad1-4732-9a9e-d86d18fb0c55-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "da246674-9ad1-4732-9a9e-d86d18fb0c55" (UID: "da246674-9ad1-4732-9a9e-d86d18fb0c55"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:54:10.853454 master-0 kubenswrapper[7553]: I0318 17:54:10.853246 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da246674-9ad1-4732-9a9e-d86d18fb0c55-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 17:54:11.690743 master-0 kubenswrapper[7553]: I0318 17:54:11.690680 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 17:54:14.714045 master-0 kubenswrapper[7553]: I0318 17:54:14.713853 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:54:14.714045 master-0 kubenswrapper[7553]: I0318 17:54:14.713951 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:54:14.721330 master-0 kubenswrapper[7553]: I0318 17:54:14.720120 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:54:15.735238 master-0 kubenswrapper[7553]: I0318 17:54:15.735123 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:54:19.365253 master-0 kubenswrapper[7553]: E0318 17:54:19.365173 7553 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:54:20.274582 master-0 kubenswrapper[7553]: E0318 17:54:20.274522 7553 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc57f282a_829b_41b2_827a_f4bc598245a2.slice/crio-3be88236d1075355721a3a53c0d6a8b5bc0a4bd441e11b9ae0dd32cd30599a9f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc57f282a_829b_41b2_827a_f4bc598245a2.slice/crio-conmon-3be88236d1075355721a3a53c0d6a8b5bc0a4bd441e11b9ae0dd32cd30599a9f.scope\": RecentStats: unable to find data in memory cache]" Mar 18 
17:54:20.380994 master-0 kubenswrapper[7553]: I0318 17:54:20.380892 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 17:54:20.382215 master-0 kubenswrapper[7553]: I0318 17:54:20.382142 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 17:54:20.383195 master-0 kubenswrapper[7553]: I0318 17:54:20.383156 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 18 17:54:20.384611 master-0 kubenswrapper[7553]: I0318 17:54:20.384525 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 17:54:20.400341 master-0 kubenswrapper[7553]: I0318 17:54:20.400285 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 17:54:20.400598 master-0 kubenswrapper[7553]: I0318 17:54:20.400372 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 17:54:20.400598 master-0 kubenswrapper[7553]: I0318 17:54:20.400455 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 17:54:20.400598 master-0 kubenswrapper[7553]: I0318 17:54:20.400464 7553 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:54:20.400598 master-0 kubenswrapper[7553]: I0318 17:54:20.400509 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 17:54:20.400598 master-0 kubenswrapper[7553]: I0318 17:54:20.400538 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir" (OuterVolumeSpecName: "data-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:54:20.400598 master-0 kubenswrapper[7553]: I0318 17:54:20.400571 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 17:54:20.400841 master-0 kubenswrapper[7553]: I0318 17:54:20.400657 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 17:54:20.400841 master-0 kubenswrapper[7553]: I0318 17:54:20.400574 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:54:20.400841 master-0 kubenswrapper[7553]: I0318 17:54:20.400595 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:54:20.400841 master-0 kubenswrapper[7553]: I0318 17:54:20.400615 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:54:20.400841 master-0 kubenswrapper[7553]: I0318 17:54:20.400801 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir" (OuterVolumeSpecName: "log-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:54:20.401651 master-0 kubenswrapper[7553]: I0318 17:54:20.401613 7553 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:54:20.401733 master-0 kubenswrapper[7553]: I0318 17:54:20.401651 7553 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Mar 18 17:54:20.401733 master-0 kubenswrapper[7553]: I0318 17:54:20.401671 7553 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:54:20.401733 master-0 kubenswrapper[7553]: I0318 17:54:20.401689 7553 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:54:20.401733 master-0 kubenswrapper[7553]: I0318 17:54:20.401707 7553 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:54:20.401733 master-0 kubenswrapper[7553]: I0318 17:54:20.401725 7553 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" 
(UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:54:20.768479 master-0 kubenswrapper[7553]: I0318 17:54:20.768376 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 17:54:20.770162 master-0 kubenswrapper[7553]: I0318 17:54:20.770140 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 17:54:20.771416 master-0 kubenswrapper[7553]: I0318 17:54:20.771399 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 18 17:54:20.772854 master-0 kubenswrapper[7553]: I0318 17:54:20.772820 7553 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c" exitCode=0 Mar 18 17:54:20.772962 master-0 kubenswrapper[7553]: I0318 17:54:20.772854 7553 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e" exitCode=137 Mar 18 17:54:20.772962 master-0 kubenswrapper[7553]: I0318 17:54:20.772955 7553 scope.go:117] "RemoveContainer" containerID="da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa" Mar 18 17:54:20.773038 master-0 kubenswrapper[7553]: I0318 17:54:20.772981 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 17:54:20.777484 master-0 kubenswrapper[7553]: I0318 17:54:20.777421 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" event={"ID":"c57f282a-829b-41b2-827a-f4bc598245a2","Type":"ContainerDied","Data":"3be88236d1075355721a3a53c0d6a8b5bc0a4bd441e11b9ae0dd32cd30599a9f"} Mar 18 17:54:20.778052 master-0 kubenswrapper[7553]: I0318 17:54:20.777202 7553 generic.go:334] "Generic (PLEG): container finished" podID="c57f282a-829b-41b2-827a-f4bc598245a2" containerID="3be88236d1075355721a3a53c0d6a8b5bc0a4bd441e11b9ae0dd32cd30599a9f" exitCode=0 Mar 18 17:54:20.778378 master-0 kubenswrapper[7553]: I0318 17:54:20.778252 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" event={"ID":"c57f282a-829b-41b2-827a-f4bc598245a2","Type":"ContainerStarted","Data":"f00456b24dab05375bbbeac67add4ae933f0340a0db97ddc7192a2436c6be1ec"} Mar 18 17:54:20.804224 master-0 kubenswrapper[7553]: I0318 17:54:20.804196 7553 scope.go:117] "RemoveContainer" containerID="209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0" Mar 18 17:54:20.824931 master-0 kubenswrapper[7553]: I0318 17:54:20.824876 7553 scope.go:117] "RemoveContainer" containerID="4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173" Mar 18 17:54:20.846079 master-0 kubenswrapper[7553]: I0318 17:54:20.846011 7553 scope.go:117] "RemoveContainer" containerID="368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c" Mar 18 17:54:20.866955 master-0 kubenswrapper[7553]: I0318 17:54:20.866867 7553 scope.go:117] "RemoveContainer" containerID="4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e" Mar 18 17:54:20.887783 master-0 kubenswrapper[7553]: I0318 17:54:20.887709 7553 scope.go:117] "RemoveContainer" containerID="a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e" Mar 18 17:54:20.909911 master-0 
kubenswrapper[7553]: I0318 17:54:20.909812 7553 scope.go:117] "RemoveContainer" containerID="a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5" Mar 18 17:54:20.927045 master-0 kubenswrapper[7553]: I0318 17:54:20.926966 7553 scope.go:117] "RemoveContainer" containerID="94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815" Mar 18 17:54:20.946219 master-0 kubenswrapper[7553]: I0318 17:54:20.946160 7553 scope.go:117] "RemoveContainer" containerID="da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa" Mar 18 17:54:20.947156 master-0 kubenswrapper[7553]: E0318 17:54:20.946970 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa\": container with ID starting with da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa not found: ID does not exist" containerID="da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa" Mar 18 17:54:20.947262 master-0 kubenswrapper[7553]: I0318 17:54:20.947194 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa"} err="failed to get container status \"da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa\": rpc error: code = NotFound desc = could not find container \"da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa\": container with ID starting with da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa not found: ID does not exist" Mar 18 17:54:20.947262 master-0 kubenswrapper[7553]: I0318 17:54:20.947255 7553 scope.go:117] "RemoveContainer" containerID="209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0" Mar 18 17:54:20.947847 master-0 kubenswrapper[7553]: E0318 17:54:20.947786 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0\": container with ID starting with 209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0 not found: ID does not exist" containerID="209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0" Mar 18 17:54:20.947917 master-0 kubenswrapper[7553]: I0318 17:54:20.947845 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0"} err="failed to get container status \"209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0\": rpc error: code = NotFound desc = could not find container \"209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0\": container with ID starting with 209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0 not found: ID does not exist" Mar 18 17:54:20.947917 master-0 kubenswrapper[7553]: I0318 17:54:20.947887 7553 scope.go:117] "RemoveContainer" containerID="4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173" Mar 18 17:54:20.948357 master-0 kubenswrapper[7553]: E0318 17:54:20.948299 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173\": container with ID starting with 4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173 not found: ID does not exist" containerID="4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173" Mar 18 17:54:20.948431 master-0 kubenswrapper[7553]: I0318 17:54:20.948388 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173"} err="failed to get container status \"4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173\": rpc error: code = NotFound desc = could not find container 
\"4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173\": container with ID starting with 4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173 not found: ID does not exist" Mar 18 17:54:20.948488 master-0 kubenswrapper[7553]: I0318 17:54:20.948439 7553 scope.go:117] "RemoveContainer" containerID="368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c" Mar 18 17:54:20.949183 master-0 kubenswrapper[7553]: E0318 17:54:20.949010 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c\": container with ID starting with 368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c not found: ID does not exist" containerID="368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c" Mar 18 17:54:20.949183 master-0 kubenswrapper[7553]: I0318 17:54:20.949058 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c"} err="failed to get container status \"368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c\": rpc error: code = NotFound desc = could not find container \"368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c\": container with ID starting with 368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c not found: ID does not exist" Mar 18 17:54:20.949183 master-0 kubenswrapper[7553]: I0318 17:54:20.949085 7553 scope.go:117] "RemoveContainer" containerID="4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e" Mar 18 17:54:20.949870 master-0 kubenswrapper[7553]: E0318 17:54:20.949438 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e\": container with ID starting with 
4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e not found: ID does not exist" containerID="4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e" Mar 18 17:54:20.949870 master-0 kubenswrapper[7553]: I0318 17:54:20.949467 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e"} err="failed to get container status \"4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e\": rpc error: code = NotFound desc = could not find container \"4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e\": container with ID starting with 4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e not found: ID does not exist" Mar 18 17:54:20.949870 master-0 kubenswrapper[7553]: I0318 17:54:20.949488 7553 scope.go:117] "RemoveContainer" containerID="a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e" Mar 18 17:54:20.949870 master-0 kubenswrapper[7553]: E0318 17:54:20.949724 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e\": container with ID starting with a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e not found: ID does not exist" containerID="a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e" Mar 18 17:54:20.949870 master-0 kubenswrapper[7553]: I0318 17:54:20.949759 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e"} err="failed to get container status \"a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e\": rpc error: code = NotFound desc = could not find container \"a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e\": container with ID starting with 
a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e not found: ID does not exist" Mar 18 17:54:20.949870 master-0 kubenswrapper[7553]: I0318 17:54:20.949784 7553 scope.go:117] "RemoveContainer" containerID="a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5" Mar 18 17:54:20.950467 master-0 kubenswrapper[7553]: E0318 17:54:20.950440 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5\": container with ID starting with a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5 not found: ID does not exist" containerID="a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5" Mar 18 17:54:20.950570 master-0 kubenswrapper[7553]: I0318 17:54:20.950482 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5"} err="failed to get container status \"a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5\": rpc error: code = NotFound desc = could not find container \"a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5\": container with ID starting with a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5 not found: ID does not exist" Mar 18 17:54:20.950570 master-0 kubenswrapper[7553]: I0318 17:54:20.950510 7553 scope.go:117] "RemoveContainer" containerID="94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815" Mar 18 17:54:20.951080 master-0 kubenswrapper[7553]: E0318 17:54:20.950955 7553 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815\": container with ID starting with 94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815 not found: ID does not exist" 
containerID="94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815" Mar 18 17:54:20.951080 master-0 kubenswrapper[7553]: I0318 17:54:20.951056 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815"} err="failed to get container status \"94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815\": rpc error: code = NotFound desc = could not find container \"94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815\": container with ID starting with 94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815 not found: ID does not exist" Mar 18 17:54:20.951339 master-0 kubenswrapper[7553]: I0318 17:54:20.951108 7553 scope.go:117] "RemoveContainer" containerID="da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa" Mar 18 17:54:20.951663 master-0 kubenswrapper[7553]: I0318 17:54:20.951617 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa"} err="failed to get container status \"da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa\": rpc error: code = NotFound desc = could not find container \"da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa\": container with ID starting with da8f0e31af82e82db7f44f999e96d023cf94af8530f7fe4d1cf38fd2a71678aa not found: ID does not exist" Mar 18 17:54:20.951663 master-0 kubenswrapper[7553]: I0318 17:54:20.951643 7553 scope.go:117] "RemoveContainer" containerID="209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0" Mar 18 17:54:20.952055 master-0 kubenswrapper[7553]: I0318 17:54:20.952012 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0"} err="failed to get container status 
\"209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0\": rpc error: code = NotFound desc = could not find container \"209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0\": container with ID starting with 209ad042b5f8ad649dfc90b4fe15da791a0a47c7ee7a1b73c22be24e311175e0 not found: ID does not exist" Mar 18 17:54:20.952055 master-0 kubenswrapper[7553]: I0318 17:54:20.952033 7553 scope.go:117] "RemoveContainer" containerID="4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173" Mar 18 17:54:20.952422 master-0 kubenswrapper[7553]: I0318 17:54:20.952351 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173"} err="failed to get container status \"4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173\": rpc error: code = NotFound desc = could not find container \"4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173\": container with ID starting with 4e6c889cbc9d045dfa05a1c69aed57d0b4bc590c83ddf8cc03ee2cace2b29173 not found: ID does not exist" Mar 18 17:54:20.952422 master-0 kubenswrapper[7553]: I0318 17:54:20.952403 7553 scope.go:117] "RemoveContainer" containerID="368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c" Mar 18 17:54:20.952809 master-0 kubenswrapper[7553]: I0318 17:54:20.952749 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c"} err="failed to get container status \"368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c\": rpc error: code = NotFound desc = could not find container \"368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c\": container with ID starting with 368e05dc45d9fe9a58fa3e9e73f9caacf28864ce4d8f255cb43f6da2a5a67e8c not found: ID does not exist" Mar 18 17:54:20.952809 master-0 kubenswrapper[7553]: I0318 17:54:20.952793 7553 
scope.go:117] "RemoveContainer" containerID="4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e" Mar 18 17:54:20.953316 master-0 kubenswrapper[7553]: I0318 17:54:20.953201 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e"} err="failed to get container status \"4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e\": rpc error: code = NotFound desc = could not find container \"4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e\": container with ID starting with 4bad2f207b9c61f1bc1840e5051ac1d4e30175c583e7b44190e2def255d9e40e not found: ID does not exist" Mar 18 17:54:20.953419 master-0 kubenswrapper[7553]: I0318 17:54:20.953314 7553 scope.go:117] "RemoveContainer" containerID="a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e" Mar 18 17:54:20.953970 master-0 kubenswrapper[7553]: I0318 17:54:20.953916 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e"} err="failed to get container status \"a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e\": rpc error: code = NotFound desc = could not find container \"a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e\": container with ID starting with a516eed231a828089f9e5b970a9cd6a4d60cf12dd2e8fae44516a4db570c131e not found: ID does not exist" Mar 18 17:54:20.953970 master-0 kubenswrapper[7553]: I0318 17:54:20.953953 7553 scope.go:117] "RemoveContainer" containerID="a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5" Mar 18 17:54:20.954497 master-0 kubenswrapper[7553]: I0318 17:54:20.954435 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5"} err="failed to get container status 
\"a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5\": rpc error: code = NotFound desc = could not find container \"a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5\": container with ID starting with a1a8b65b76478f61aae57183c5d2785b161f6f2faa30ae8f66ca227270ba2dc5 not found: ID does not exist" Mar 18 17:54:20.954497 master-0 kubenswrapper[7553]: I0318 17:54:20.954480 7553 scope.go:117] "RemoveContainer" containerID="94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815" Mar 18 17:54:20.954899 master-0 kubenswrapper[7553]: I0318 17:54:20.954830 7553 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815"} err="failed to get container status \"94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815\": rpc error: code = NotFound desc = could not find container \"94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815\": container with ID starting with 94631d3fb06d21e6367357342a45a846a836b814bfd835e3d206e702cdd96815 not found: ID does not exist" Mar 18 17:54:20.954899 master-0 kubenswrapper[7553]: I0318 17:54:20.954876 7553 scope.go:117] "RemoveContainer" containerID="3d1a4c794f84645b132cca3ce7dc17d228df153769dd3f1d6b34979465df7e8d" Mar 18 17:54:21.054128 master-0 kubenswrapper[7553]: I0318 17:54:21.054080 7553 scope.go:117] "RemoveContainer" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:54:21.054418 master-0 kubenswrapper[7553]: E0318 17:54:21.054320 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:54:21.099680 master-0 kubenswrapper[7553]: I0318 17:54:21.099588 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:54:21.104184 master-0 kubenswrapper[7553]: I0318 17:54:21.104134 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:21.104184 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:21.104184 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:21.104184 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:21.104412 master-0 kubenswrapper[7553]: I0318 17:54:21.104197 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:22.060662 master-0 kubenswrapper[7553]: I0318 17:54:22.060158 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b4ed170d527099878cb5fdd508a2fb" path="/var/lib/kubelet/pods/24b4ed170d527099878cb5fdd508a2fb/volumes" Mar 18 17:54:22.103312 master-0 kubenswrapper[7553]: I0318 17:54:22.103205 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:22.103312 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:22.103312 master-0 kubenswrapper[7553]: [+]process-running ok 
Mar 18 17:54:22.103312 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:22.103312 master-0 kubenswrapper[7553]: I0318 17:54:22.103305 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:23.100239 master-0 kubenswrapper[7553]: I0318 17:54:23.100180 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:54:23.103714 master-0 kubenswrapper[7553]: I0318 17:54:23.103636 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:23.103714 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:23.103714 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:23.103714 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:23.103923 master-0 kubenswrapper[7553]: I0318 17:54:23.103746 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:24.103887 master-0 kubenswrapper[7553]: I0318 17:54:24.103782 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:24.103887 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:24.103887 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 
17:54:24.103887 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:24.104981 master-0 kubenswrapper[7553]: I0318 17:54:24.103901 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:24.163245 master-0 kubenswrapper[7553]: E0318 17:54:24.163044 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189e0106610cdc47 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:24b4ed170d527099878cb5fdd508a2fb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Killing,Message:Stopping container etcd-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:53:50.140218439 +0000 UTC m=+720.286053192,LastTimestamp:2026-03-18 17:53:50.140218439 +0000 UTC m=+720.286053192,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:54:25.103593 master-0 kubenswrapper[7553]: I0318 17:54:25.103523 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:25.103593 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:25.103593 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:25.103593 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:25.104823 master-0 kubenswrapper[7553]: I0318 17:54:25.103603 7553 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:26.102381 master-0 kubenswrapper[7553]: I0318 17:54:26.102265 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:26.102381 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:26.102381 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:26.102381 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:26.102868 master-0 kubenswrapper[7553]: I0318 17:54:26.102411 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:27.101567 master-0 kubenswrapper[7553]: I0318 17:54:27.101497 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:27.101567 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:27.101567 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:27.101567 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:27.102409 master-0 kubenswrapper[7553]: I0318 17:54:27.101595 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 18 17:54:28.103786 master-0 kubenswrapper[7553]: I0318 17:54:28.103727 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:28.103786 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:28.103786 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:28.103786 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:28.104557 master-0 kubenswrapper[7553]: I0318 17:54:28.103809 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:29.054550 master-0 kubenswrapper[7553]: I0318 17:54:29.054472 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 17:54:29.074155 master-0 kubenswrapper[7553]: I0318 17:54:29.074093 7553 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53" Mar 18 17:54:29.074155 master-0 kubenswrapper[7553]: I0318 17:54:29.074153 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53" Mar 18 17:54:29.102544 master-0 kubenswrapper[7553]: I0318 17:54:29.102472 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:29.102544 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:29.102544 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:29.102544 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:29.102802 master-0 kubenswrapper[7553]: I0318 17:54:29.102568 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:29.366401 master-0 kubenswrapper[7553]: E0318 17:54:29.366137 7553 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:54:30.103165 master-0 kubenswrapper[7553]: I0318 17:54:30.103083 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Mar 18 17:54:30.103165 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:30.103165 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:30.103165 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:30.103543 master-0 kubenswrapper[7553]: I0318 17:54:30.103195 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:31.102073 master-0 kubenswrapper[7553]: I0318 17:54:31.102020 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:31.102073 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:31.102073 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:31.102073 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:31.102980 master-0 kubenswrapper[7553]: I0318 17:54:31.102096 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:32.103684 master-0 kubenswrapper[7553]: I0318 17:54:32.103598 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:32.103684 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:32.103684 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:32.103684 
master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:32.104834 master-0 kubenswrapper[7553]: I0318 17:54:32.103705 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:33.104037 master-0 kubenswrapper[7553]: I0318 17:54:33.103910 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:33.104037 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:33.104037 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:33.104037 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:33.104842 master-0 kubenswrapper[7553]: I0318 17:54:33.104073 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:34.102730 master-0 kubenswrapper[7553]: I0318 17:54:34.102630 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:34.102730 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:34.102730 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:34.102730 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:34.103085 master-0 kubenswrapper[7553]: I0318 17:54:34.102757 7553 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:35.054418 master-0 kubenswrapper[7553]: I0318 17:54:35.054317 7553 scope.go:117] "RemoveContainer" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:54:35.055591 master-0 kubenswrapper[7553]: E0318 17:54:35.054633 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:54:35.103783 master-0 kubenswrapper[7553]: I0318 17:54:35.103692 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:35.103783 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:35.103783 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:35.103783 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:35.104360 master-0 kubenswrapper[7553]: I0318 17:54:35.103803 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:36.104125 master-0 kubenswrapper[7553]: I0318 17:54:36.104014 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:36.104125 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:36.104125 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:36.104125 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:36.105316 master-0 kubenswrapper[7553]: I0318 17:54:36.104147 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:37.103185 master-0 kubenswrapper[7553]: I0318 17:54:37.103102 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:37.103185 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:37.103185 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:37.103185 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:37.103965 master-0 kubenswrapper[7553]: I0318 17:54:37.103926 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:38.103738 master-0 kubenswrapper[7553]: I0318 17:54:38.103647 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:38.103738 master-0 
kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:38.103738 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:38.103738 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:38.105008 master-0 kubenswrapper[7553]: I0318 17:54:38.103746 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:39.103104 master-0 kubenswrapper[7553]: I0318 17:54:39.103011 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:39.103104 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:39.103104 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:39.103104 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:39.103601 master-0 kubenswrapper[7553]: I0318 17:54:39.103135 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:39.367480 master-0 kubenswrapper[7553]: E0318 17:54:39.367405 7553 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:54:40.102971 master-0 kubenswrapper[7553]: I0318 17:54:40.102878 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:40.102971 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:40.102971 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:40.102971 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:40.102971 master-0 kubenswrapper[7553]: I0318 17:54:40.102985 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:41.102333 master-0 kubenswrapper[7553]: I0318 17:54:41.102234 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:41.102333 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:41.102333 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:41.102333 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:41.102910 master-0 kubenswrapper[7553]: I0318 17:54:41.102354 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:41.869015 master-0 kubenswrapper[7553]: E0318 17:54:41.868912 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cloud-credential-operator-serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" podUID="04cef0bd-f365-4bf6-864a-1895995015d6" Mar 18 17:54:41.869256 master-0 
kubenswrapper[7553]: E0318 17:54:41.869169 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[samples-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" podUID="e0e04440-c08b-452d-9be6-9f70a4027c92" Mar 18 17:54:41.869256 master-0 kubenswrapper[7553]: E0318 17:54:41.869186 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" podUID="a94f7bff-ad61-4c53-a8eb-000a13f26971" Mar 18 17:54:41.869382 master-0 kubenswrapper[7553]: E0318 17:54:41.869337 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[control-plane-machine-set-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" podUID="de189d27-4c60-49f1-9119-d1fde5c37b1e" Mar 18 17:54:41.944644 master-0 kubenswrapper[7553]: I0318 17:54:41.944550 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 17:54:41.944644 master-0 kubenswrapper[7553]: I0318 17:54:41.944610 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:54:41.945071 master-0 kubenswrapper[7553]: I0318 17:54:41.944691 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 17:54:41.945071 master-0 kubenswrapper[7553]: I0318 17:54:41.944577 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:54:42.102866 master-0 kubenswrapper[7553]: I0318 17:54:42.102752 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:42.102866 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:42.102866 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:42.102866 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:42.103632 master-0 kubenswrapper[7553]: I0318 17:54:42.102894 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:42.875997 master-0 kubenswrapper[7553]: E0318 17:54:42.875889 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-api-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" podUID="2d21e77e-8b61-4f03-8f17-941b7a1d8b1d" Mar 18 17:54:42.951306 master-0 kubenswrapper[7553]: I0318 17:54:42.951181 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:54:43.103001 master-0 kubenswrapper[7553]: I0318 17:54:43.102892 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:43.103001 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:43.103001 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:43.103001 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:43.104076 master-0 kubenswrapper[7553]: I0318 17:54:43.103042 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:44.103231 master-0 kubenswrapper[7553]: I0318 17:54:44.103144 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:44.103231 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:44.103231 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:44.103231 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:44.104192 master-0 kubenswrapper[7553]: I0318 17:54:44.104156 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:44.634251 master-0 kubenswrapper[7553]: I0318 17:54:44.634103 7553 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 17:54:44.634251 master-0 kubenswrapper[7553]: I0318 17:54:44.634227 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 17:54:44.636748 master-0 kubenswrapper[7553]: I0318 17:54:44.634325 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 17:54:44.636748 master-0 kubenswrapper[7553]: I0318 17:54:44.634402 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 17:54:44.636748 master-0 kubenswrapper[7553]: I0318 17:54:44.634455 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 17:54:44.636748 master-0 kubenswrapper[7553]: E0318 17:54:44.634803 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Mar 18 17:54:44.636748 master-0 kubenswrapper[7553]: E0318 17:54:44.634918 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert podName:a94f7bff-ad61-4c53-a8eb-000a13f26971 nodeName:}" failed. No retries permitted until 2026-03-18 17:56:46.634878623 +0000 UTC m=+896.780713336 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert") pod "cluster-autoscaler-operator-866dc4744-l6hpt" (UID: "a94f7bff-ad61-4c53-a8eb-000a13f26971") : secret "cluster-autoscaler-operator-cert" not found Mar 18 17:54:44.636748 master-0 kubenswrapper[7553]: E0318 17:54:44.635177 7553 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Mar 18 17:54:44.636748 master-0 kubenswrapper[7553]: E0318 17:54:44.635328 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls podName:e0e04440-c08b-452d-9be6-9f70a4027c92 nodeName:}" failed. No retries permitted until 2026-03-18 17:56:46.635302844 +0000 UTC m=+896.781137517 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-xnx8x" (UID: "e0e04440-c08b-452d-9be6-9f70a4027c92") : secret "samples-operator-tls" not found Mar 18 17:54:44.636748 master-0 kubenswrapper[7553]: E0318 17:54:44.635369 7553 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Mar 18 17:54:44.636748 master-0 kubenswrapper[7553]: E0318 17:54:44.635394 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls podName:de189d27-4c60-49f1-9119-d1fde5c37b1e nodeName:}" failed. No retries permitted until 2026-03-18 17:56:46.635385147 +0000 UTC m=+896.781219820 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-zdqtc" (UID: "de189d27-4c60-49f1-9119-d1fde5c37b1e") : secret "control-plane-machine-set-operator-tls" not found Mar 18 17:54:44.636748 master-0 kubenswrapper[7553]: E0318 17:54:44.635416 7553 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Mar 18 17:54:44.636748 master-0 kubenswrapper[7553]: E0318 17:54:44.635484 7553 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 18 17:54:44.636748 master-0 kubenswrapper[7553]: E0318 17:54:44.635547 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls 
podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. No retries permitted until 2026-03-18 17:56:46.63551208 +0000 UTC m=+896.781346933 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : secret "machine-approver-tls" not found Mar 18 17:54:44.636748 master-0 kubenswrapper[7553]: E0318 17:54:44.635611 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert podName:04cef0bd-f365-4bf6-864a-1895995015d6 nodeName:}" failed. No retries permitted until 2026-03-18 17:56:46.635595152 +0000 UTC m=+896.781430065 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-djgn7" (UID: "04cef0bd-f365-4bf6-864a-1895995015d6") : secret "cloud-credential-operator-serving-cert" not found Mar 18 17:54:44.736245 master-0 kubenswrapper[7553]: I0318 17:54:44.736169 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:54:44.736748 master-0 kubenswrapper[7553]: E0318 17:54:44.736532 7553 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Mar 18 17:54:44.736748 master-0 kubenswrapper[7553]: E0318 17:54:44.736673 7553 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. No retries permitted until 2026-03-18 17:56:46.736644391 +0000 UTC m=+896.882479064 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : secret "machine-api-operator-tls" not found Mar 18 17:54:45.102961 master-0 kubenswrapper[7553]: I0318 17:54:45.102848 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:45.102961 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:45.102961 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:45.102961 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:45.104124 master-0 kubenswrapper[7553]: I0318 17:54:45.102971 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:45.976410 master-0 kubenswrapper[7553]: I0318 17:54:45.976223 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7s68k_9875ed82-813c-483d-8471-8f9b74b774ee/approver/1.log" Mar 18 17:54:45.977433 master-0 kubenswrapper[7553]: I0318 17:54:45.977362 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7s68k_9875ed82-813c-483d-8471-8f9b74b774ee/approver/0.log" Mar 18 
17:54:45.978331 master-0 kubenswrapper[7553]: I0318 17:54:45.978243 7553 generic.go:334] "Generic (PLEG): container finished" podID="9875ed82-813c-483d-8471-8f9b74b774ee" containerID="d6933300553a8b09299df5113bf7cc86680b024bf430a5e7f3a091b6af9ab04a" exitCode=1 Mar 18 17:54:45.978514 master-0 kubenswrapper[7553]: I0318 17:54:45.978344 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7s68k" event={"ID":"9875ed82-813c-483d-8471-8f9b74b774ee","Type":"ContainerDied","Data":"d6933300553a8b09299df5113bf7cc86680b024bf430a5e7f3a091b6af9ab04a"} Mar 18 17:54:45.978514 master-0 kubenswrapper[7553]: I0318 17:54:45.978401 7553 scope.go:117] "RemoveContainer" containerID="e68d50794bc18082c3da1be336c93731deac7bad0cc308995bf349c65577d305" Mar 18 17:54:45.979308 master-0 kubenswrapper[7553]: I0318 17:54:45.979223 7553 scope.go:117] "RemoveContainer" containerID="d6933300553a8b09299df5113bf7cc86680b024bf430a5e7f3a091b6af9ab04a" Mar 18 17:54:46.054179 master-0 kubenswrapper[7553]: I0318 17:54:46.054104 7553 scope.go:117] "RemoveContainer" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:54:46.054505 master-0 kubenswrapper[7553]: E0318 17:54:46.054465 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:54:46.102965 master-0 kubenswrapper[7553]: I0318 17:54:46.102891 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:46.102965 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:46.102965 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:46.102965 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:46.103378 master-0 kubenswrapper[7553]: I0318 17:54:46.102983 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:46.989640 master-0 kubenswrapper[7553]: I0318 17:54:46.989556 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7s68k_9875ed82-813c-483d-8471-8f9b74b774ee/approver/1.log" Mar 18 17:54:46.990720 master-0 kubenswrapper[7553]: I0318 17:54:46.990026 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-7s68k" event={"ID":"9875ed82-813c-483d-8471-8f9b74b774ee","Type":"ContainerStarted","Data":"acecfffd919d49121298b9fb66038ef4f06c0304556835210a3233bb5f246330"} Mar 18 17:54:47.103430 master-0 kubenswrapper[7553]: I0318 17:54:47.103207 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:47.103430 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:47.103430 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:47.103430 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:47.103430 master-0 kubenswrapper[7553]: I0318 17:54:47.103339 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" 
podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:48.103128 master-0 kubenswrapper[7553]: I0318 17:54:48.102990 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:48.103128 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:48.103128 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:48.103128 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:48.103128 master-0 kubenswrapper[7553]: I0318 17:54:48.103099 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:49.102885 master-0 kubenswrapper[7553]: I0318 17:54:49.102808 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:49.102885 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:49.102885 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:49.102885 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:49.103802 master-0 kubenswrapper[7553]: I0318 17:54:49.102920 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:49.368717 master-0 kubenswrapper[7553]: E0318 17:54:49.368460 7553 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:54:49.368717 master-0 kubenswrapper[7553]: I0318 17:54:49.368544 7553 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 17:54:50.102579 master-0 kubenswrapper[7553]: I0318 17:54:50.102509 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:50.102579 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:50.102579 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:50.102579 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:50.102912 master-0 kubenswrapper[7553]: I0318 17:54:50.102610 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:51.102663 master-0 kubenswrapper[7553]: I0318 17:54:51.102571 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:51.102663 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:51.102663 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:51.102663 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:51.103725 master-0 kubenswrapper[7553]: I0318 17:54:51.102668 
7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:51.777270 master-0 kubenswrapper[7553]: I0318 17:54:51.777155 7553 status_manager.go:851] "Failed to get status for pod" podUID="24b4ed170d527099878cb5fdd508a2fb" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)" Mar 18 17:54:52.102802 master-0 kubenswrapper[7553]: I0318 17:54:52.102728 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:52.102802 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:52.102802 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:52.102802 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:52.103827 master-0 kubenswrapper[7553]: I0318 17:54:52.102819 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:53.103573 master-0 kubenswrapper[7553]: I0318 17:54:53.103493 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:53.103573 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:53.103573 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 
17:54:53.103573 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:53.104512 master-0 kubenswrapper[7553]: I0318 17:54:53.103593 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:54.103216 master-0 kubenswrapper[7553]: I0318 17:54:54.103091 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:54.103216 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:54.103216 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:54.103216 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:54.103216 master-0 kubenswrapper[7553]: I0318 17:54:54.103205 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:55.102589 master-0 kubenswrapper[7553]: I0318 17:54:55.102507 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:55.102589 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:55.102589 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:55.102589 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:55.102901 master-0 kubenswrapper[7553]: I0318 17:54:55.102611 7553 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:56.103210 master-0 kubenswrapper[7553]: I0318 17:54:56.103127 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:56.103210 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:56.103210 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:56.103210 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:56.103827 master-0 kubenswrapper[7553]: I0318 17:54:56.103229 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:57.103047 master-0 kubenswrapper[7553]: I0318 17:54:57.102943 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:57.103047 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:57.103047 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:57.103047 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:57.103047 master-0 kubenswrapper[7553]: I0318 17:54:57.103048 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:58.102449 
master-0 kubenswrapper[7553]: I0318 17:54:58.102369 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:58.102449 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:58.102449 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:58.102449 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:58.102772 master-0 kubenswrapper[7553]: I0318 17:54:58.102463 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:58.166730 master-0 kubenswrapper[7553]: E0318 17:54:58.166515 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cluster-cloud-controller-manager-operator-7dff898856-kfzkl.189e00c29679e369 openshift-cloud-controller-manager-operator 10745 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-cloud-controller-manager-operator,Name:cluster-cloud-controller-manager-operator-7dff898856-kfzkl,UID:0751c002-fe0e-4f13-bb9c-9accd8ca0df3,APIVersion:v1,ResourceVersion:10535,FieldPath:spec.containers{kube-rbac-proxy},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy in pod cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:48:58 +0000 UTC,LastTimestamp:2026-03-18 17:53:54.053689504 +0000 UTC 
m=+724.199524177,Count:25,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:54:59.063446 master-0 kubenswrapper[7553]: E0318 17:54:59.063339 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" podUID="9c0dbd44-7669-41d6-bf1b-d8c1343c9d98" Mar 18 17:54:59.093017 master-0 kubenswrapper[7553]: I0318 17:54:59.092853 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 17:54:59.103468 master-0 kubenswrapper[7553]: I0318 17:54:59.103333 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:54:59.103468 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:54:59.103468 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:54:59.103468 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:54:59.103798 master-0 kubenswrapper[7553]: I0318 17:54:59.103498 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:54:59.369230 master-0 kubenswrapper[7553]: E0318 17:54:59.369055 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
interval="200ms" Mar 18 17:55:00.054118 master-0 kubenswrapper[7553]: I0318 17:55:00.054054 7553 scope.go:117] "RemoveContainer" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:55:00.102294 master-0 kubenswrapper[7553]: I0318 17:55:00.102235 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:00.102294 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:00.102294 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:00.102294 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:00.102694 master-0 kubenswrapper[7553]: I0318 17:55:00.102633 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:01.101345 master-0 kubenswrapper[7553]: I0318 17:55:01.101239 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:01.101345 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:01.101345 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:01.101345 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:01.101893 master-0 kubenswrapper[7553]: I0318 17:55:01.101340 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 
17:55:01.108324 master-0 kubenswrapper[7553]: I0318 17:55:01.108283 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_98c88ce7-94dd-434c-99fc-96d900d544e6/installer/0.log" Mar 18 17:55:01.108513 master-0 kubenswrapper[7553]: I0318 17:55:01.108368 7553 generic.go:334] "Generic (PLEG): container finished" podID="98c88ce7-94dd-434c-99fc-96d900d544e6" containerID="f946a82c484d87fe7448697a732facf5002625190cba529f3bfbd4dceece22e3" exitCode=1 Mar 18 17:55:01.108513 master-0 kubenswrapper[7553]: I0318 17:55:01.108427 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"98c88ce7-94dd-434c-99fc-96d900d544e6","Type":"ContainerDied","Data":"f946a82c484d87fe7448697a732facf5002625190cba529f3bfbd4dceece22e3"} Mar 18 17:55:01.110553 master-0 kubenswrapper[7553]: I0318 17:55:01.110523 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/6.log" Mar 18 17:55:01.111006 master-0 kubenswrapper[7553]: I0318 17:55:01.110985 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/5.log" Mar 18 17:55:01.111679 master-0 kubenswrapper[7553]: I0318 17:55:01.111654 7553 generic.go:334] "Generic (PLEG): container finished" podID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" exitCode=1 Mar 18 17:55:01.111743 master-0 kubenswrapper[7553]: I0318 17:55:01.111696 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" 
event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerDied","Data":"7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61"} Mar 18 17:55:01.111781 master-0 kubenswrapper[7553]: I0318 17:55:01.111738 7553 scope.go:117] "RemoveContainer" containerID="6eeb4cd2c87e125ddb833d434b420403394a973efd8a0367e734678f55632df5" Mar 18 17:55:01.112905 master-0 kubenswrapper[7553]: I0318 17:55:01.112701 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:55:01.112905 master-0 kubenswrapper[7553]: E0318 17:55:01.112863 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:55:01.220935 master-0 kubenswrapper[7553]: I0318 17:55:01.220810 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 17:55:01.220935 master-0 kubenswrapper[7553]: E0318 17:55:01.220883 7553 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Mar 18 17:55:01.221964 master-0 kubenswrapper[7553]: E0318 17:55:01.221078 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls 
podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 17:57:03.221049235 +0000 UTC m=+913.366883948 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : secret "prometheus-operator-tls" not found Mar 18 17:55:02.102955 master-0 kubenswrapper[7553]: I0318 17:55:02.102836 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:02.102955 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:02.102955 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:02.102955 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:02.103983 master-0 kubenswrapper[7553]: I0318 17:55:02.102967 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:02.122632 master-0 kubenswrapper[7553]: I0318 17:55:02.122553 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/6.log" Mar 18 17:55:02.522929 master-0 kubenswrapper[7553]: I0318 17:55:02.522874 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_98c88ce7-94dd-434c-99fc-96d900d544e6/installer/0.log" Mar 18 17:55:02.523175 master-0 kubenswrapper[7553]: I0318 17:55:02.522998 7553 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 17:55:02.549942 master-0 kubenswrapper[7553]: I0318 17:55:02.549875 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98c88ce7-94dd-434c-99fc-96d900d544e6-kube-api-access\") pod \"98c88ce7-94dd-434c-99fc-96d900d544e6\" (UID: \"98c88ce7-94dd-434c-99fc-96d900d544e6\") " Mar 18 17:55:02.550146 master-0 kubenswrapper[7553]: I0318 17:55:02.550080 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98c88ce7-94dd-434c-99fc-96d900d544e6-var-lock\") pod \"98c88ce7-94dd-434c-99fc-96d900d544e6\" (UID: \"98c88ce7-94dd-434c-99fc-96d900d544e6\") " Mar 18 17:55:02.550203 master-0 kubenswrapper[7553]: I0318 17:55:02.550184 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98c88ce7-94dd-434c-99fc-96d900d544e6-kubelet-dir\") pod \"98c88ce7-94dd-434c-99fc-96d900d544e6\" (UID: \"98c88ce7-94dd-434c-99fc-96d900d544e6\") " Mar 18 17:55:02.550387 master-0 kubenswrapper[7553]: I0318 17:55:02.550176 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98c88ce7-94dd-434c-99fc-96d900d544e6-var-lock" (OuterVolumeSpecName: "var-lock") pod "98c88ce7-94dd-434c-99fc-96d900d544e6" (UID: "98c88ce7-94dd-434c-99fc-96d900d544e6"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:55:02.550445 master-0 kubenswrapper[7553]: I0318 17:55:02.550222 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98c88ce7-94dd-434c-99fc-96d900d544e6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "98c88ce7-94dd-434c-99fc-96d900d544e6" (UID: "98c88ce7-94dd-434c-99fc-96d900d544e6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:55:02.550900 master-0 kubenswrapper[7553]: I0318 17:55:02.550856 7553 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98c88ce7-94dd-434c-99fc-96d900d544e6-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:55:02.550964 master-0 kubenswrapper[7553]: I0318 17:55:02.550905 7553 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98c88ce7-94dd-434c-99fc-96d900d544e6-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 17:55:02.554931 master-0 kubenswrapper[7553]: I0318 17:55:02.554851 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98c88ce7-94dd-434c-99fc-96d900d544e6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "98c88ce7-94dd-434c-99fc-96d900d544e6" (UID: "98c88ce7-94dd-434c-99fc-96d900d544e6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:55:02.652036 master-0 kubenswrapper[7553]: I0318 17:55:02.651952 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98c88ce7-94dd-434c-99fc-96d900d544e6-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 17:55:03.079417 master-0 kubenswrapper[7553]: E0318 17:55:03.078119 7553 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 17:55:03.079417 master-0 kubenswrapper[7553]: I0318 17:55:03.078741 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 17:55:03.105064 master-0 kubenswrapper[7553]: I0318 17:55:03.103106 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:03.105064 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:03.105064 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:03.105064 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:03.105064 master-0 kubenswrapper[7553]: I0318 17:55:03.103205 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:03.109567 master-0 kubenswrapper[7553]: W0318 17:55:03.108988 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod094204df314fe45bd5af12ca1b4622bb.slice/crio-f975ed7e1c1dcf64feeba9dd4dfc173ec9be8b509e8d2f868a326c611d5b7d2d 
WatchSource:0}: Error finding container f975ed7e1c1dcf64feeba9dd4dfc173ec9be8b509e8d2f868a326c611d5b7d2d: Status 404 returned error can't find the container with id f975ed7e1c1dcf64feeba9dd4dfc173ec9be8b509e8d2f868a326c611d5b7d2d Mar 18 17:55:03.155374 master-0 kubenswrapper[7553]: I0318 17:55:03.153767 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_98c88ce7-94dd-434c-99fc-96d900d544e6/installer/0.log" Mar 18 17:55:03.155374 master-0 kubenswrapper[7553]: I0318 17:55:03.153873 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"98c88ce7-94dd-434c-99fc-96d900d544e6","Type":"ContainerDied","Data":"c257b7064ba1ee282a10d14ba9ea68bf5e64596dfd922f601f3ce37e1e2104a5"} Mar 18 17:55:03.155374 master-0 kubenswrapper[7553]: I0318 17:55:03.153907 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c257b7064ba1ee282a10d14ba9ea68bf5e64596dfd922f601f3ce37e1e2104a5" Mar 18 17:55:03.155374 master-0 kubenswrapper[7553]: I0318 17:55:03.153965 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 17:55:03.156477 master-0 kubenswrapper[7553]: I0318 17:55:03.155440 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"f975ed7e1c1dcf64feeba9dd4dfc173ec9be8b509e8d2f868a326c611d5b7d2d"} Mar 18 17:55:04.102166 master-0 kubenswrapper[7553]: I0318 17:55:04.102065 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:04.102166 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:04.102166 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:04.102166 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:04.102166 master-0 kubenswrapper[7553]: I0318 17:55:04.102153 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:04.165931 master-0 kubenswrapper[7553]: I0318 17:55:04.165816 7553 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="57c8f7a47edecb41fe3286b9e71f767917df948188cdf7bbad415d2bd7f1ab5b" exitCode=0 Mar 18 17:55:04.165931 master-0 kubenswrapper[7553]: I0318 17:55:04.165929 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"57c8f7a47edecb41fe3286b9e71f767917df948188cdf7bbad415d2bd7f1ab5b"} Mar 18 17:55:04.167340 master-0 kubenswrapper[7553]: I0318 17:55:04.166489 7553 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53" Mar 18 17:55:04.167340 master-0 kubenswrapper[7553]: I0318 17:55:04.166522 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53" Mar 18 17:55:05.106429 master-0 kubenswrapper[7553]: I0318 17:55:05.103063 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:05.106429 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:05.106429 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:05.106429 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:05.106429 master-0 kubenswrapper[7553]: I0318 17:55:05.103202 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:06.103443 master-0 kubenswrapper[7553]: I0318 17:55:06.103357 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:06.103443 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:06.103443 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:06.103443 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:06.103443 master-0 kubenswrapper[7553]: I0318 17:55:06.103444 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:06.188975 master-0 kubenswrapper[7553]: I0318 17:55:06.188886 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_c9655d59-a594-499f-b474-dfc870239174/installer/0.log" Mar 18 17:55:06.188975 master-0 kubenswrapper[7553]: I0318 17:55:06.188966 7553 generic.go:334] "Generic (PLEG): container finished" podID="c9655d59-a594-499f-b474-dfc870239174" containerID="88c92e9d0661b28d9a41bcdec55c597d6015bf273bee5facfd2419530f4f2c64" exitCode=1 Mar 18 17:55:06.189516 master-0 kubenswrapper[7553]: I0318 17:55:06.189046 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"c9655d59-a594-499f-b474-dfc870239174","Type":"ContainerDied","Data":"88c92e9d0661b28d9a41bcdec55c597d6015bf273bee5facfd2419530f4f2c64"} Mar 18 17:55:07.103520 master-0 kubenswrapper[7553]: I0318 17:55:07.103433 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:07.103520 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:07.103520 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:07.103520 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:07.104562 master-0 kubenswrapper[7553]: I0318 17:55:07.104473 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:07.571286 master-0 kubenswrapper[7553]: I0318 17:55:07.571129 7553 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_c9655d59-a594-499f-b474-dfc870239174/installer/0.log" Mar 18 17:55:07.571286 master-0 kubenswrapper[7553]: I0318 17:55:07.571200 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 17:55:07.653300 master-0 kubenswrapper[7553]: I0318 17:55:07.653162 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9655d59-a594-499f-b474-dfc870239174-kubelet-dir\") pod \"c9655d59-a594-499f-b474-dfc870239174\" (UID: \"c9655d59-a594-499f-b474-dfc870239174\") " Mar 18 17:55:07.653300 master-0 kubenswrapper[7553]: I0318 17:55:07.653242 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9655d59-a594-499f-b474-dfc870239174-var-lock\") pod \"c9655d59-a594-499f-b474-dfc870239174\" (UID: \"c9655d59-a594-499f-b474-dfc870239174\") " Mar 18 17:55:07.653719 master-0 kubenswrapper[7553]: I0318 17:55:07.653393 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9655d59-a594-499f-b474-dfc870239174-kube-api-access\") pod \"c9655d59-a594-499f-b474-dfc870239174\" (UID: \"c9655d59-a594-499f-b474-dfc870239174\") " Mar 18 17:55:07.653719 master-0 kubenswrapper[7553]: I0318 17:55:07.653516 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9655d59-a594-499f-b474-dfc870239174-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c9655d59-a594-499f-b474-dfc870239174" (UID: "c9655d59-a594-499f-b474-dfc870239174"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:55:07.653719 master-0 kubenswrapper[7553]: I0318 17:55:07.653551 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9655d59-a594-499f-b474-dfc870239174-var-lock" (OuterVolumeSpecName: "var-lock") pod "c9655d59-a594-499f-b474-dfc870239174" (UID: "c9655d59-a594-499f-b474-dfc870239174"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:55:07.654040 master-0 kubenswrapper[7553]: I0318 17:55:07.653988 7553 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c9655d59-a594-499f-b474-dfc870239174-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:55:07.654040 master-0 kubenswrapper[7553]: I0318 17:55:07.654024 7553 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c9655d59-a594-499f-b474-dfc870239174-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 17:55:07.658687 master-0 kubenswrapper[7553]: I0318 17:55:07.658617 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9655d59-a594-499f-b474-dfc870239174-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c9655d59-a594-499f-b474-dfc870239174" (UID: "c9655d59-a594-499f-b474-dfc870239174"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:55:07.755219 master-0 kubenswrapper[7553]: I0318 17:55:07.755112 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c9655d59-a594-499f-b474-dfc870239174-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 17:55:08.054525 master-0 kubenswrapper[7553]: E0318 17:55:08.054400 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-approver-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" podUID="92153864-7959-4482-bf24-c8db36435fb5" Mar 18 17:55:08.105249 master-0 kubenswrapper[7553]: I0318 17:55:08.105166 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:08.105249 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:08.105249 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:08.105249 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:08.105937 master-0 kubenswrapper[7553]: I0318 17:55:08.105259 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:08.206678 master-0 kubenswrapper[7553]: I0318 17:55:08.206603 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_c9655d59-a594-499f-b474-dfc870239174/installer/0.log" Mar 18 17:55:08.206876 master-0 kubenswrapper[7553]: I0318 17:55:08.206776 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"c9655d59-a594-499f-b474-dfc870239174","Type":"ContainerDied","Data":"202e717017ea47879d90c4603f14b936f4bf42a19ba2cb4cf9411280f3913d38"} Mar 18 17:55:08.206876 master-0 kubenswrapper[7553]: I0318 17:55:08.206842 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="202e717017ea47879d90c4603f14b936f4bf42a19ba2cb4cf9411280f3913d38" Mar 18 17:55:08.207083 master-0 kubenswrapper[7553]: I0318 17:55:08.206896 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 17:55:09.103156 master-0 kubenswrapper[7553]: I0318 17:55:09.103078 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:09.103156 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:09.103156 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:09.103156 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:09.103156 master-0 kubenswrapper[7553]: I0318 17:55:09.103143 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:09.570883 master-0 kubenswrapper[7553]: E0318 17:55:09.570747 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 18 17:55:10.102408 master-0 kubenswrapper[7553]: I0318 17:55:10.102343 7553 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:10.102408 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:10.102408 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:10.102408 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:10.102704 master-0 kubenswrapper[7553]: I0318 17:55:10.102450 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:11.102154 master-0 kubenswrapper[7553]: I0318 17:55:11.102077 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:11.102154 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:11.102154 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:11.102154 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:11.103230 master-0 kubenswrapper[7553]: I0318 17:55:11.102167 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:12.053219 master-0 kubenswrapper[7553]: I0318 17:55:12.053133 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:55:12.053547 master-0 kubenswrapper[7553]: E0318 17:55:12.053515 7553 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:55:12.103366 master-0 kubenswrapper[7553]: I0318 17:55:12.103220 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:12.103366 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:12.103366 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:12.103366 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:12.103366 master-0 kubenswrapper[7553]: I0318 17:55:12.103346 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:13.104014 master-0 kubenswrapper[7553]: I0318 17:55:13.103929 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:13.104014 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:13.104014 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:13.104014 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:13.104807 master-0 kubenswrapper[7553]: I0318 17:55:13.104050 
7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:14.102989 master-0 kubenswrapper[7553]: I0318 17:55:14.102894 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:14.102989 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:14.102989 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:14.102989 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:14.103351 master-0 kubenswrapper[7553]: I0318 17:55:14.103004 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:15.102939 master-0 kubenswrapper[7553]: I0318 17:55:15.102856 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:15.102939 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:15.102939 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:15.102939 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:15.103942 master-0 kubenswrapper[7553]: I0318 17:55:15.102966 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 18 17:55:16.101733 master-0 kubenswrapper[7553]: I0318 17:55:16.101667 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:16.101733 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:16.101733 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:16.101733 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:16.102124 master-0 kubenswrapper[7553]: I0318 17:55:16.101765 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:17.103185 master-0 kubenswrapper[7553]: I0318 17:55:17.103099 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:17.103185 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:17.103185 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:17.103185 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:17.104212 master-0 kubenswrapper[7553]: I0318 17:55:17.103236 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:18.101818 master-0 kubenswrapper[7553]: I0318 17:55:18.101742 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:18.101818 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:18.101818 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:18.101818 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:18.102175 master-0 kubenswrapper[7553]: I0318 17:55:18.101836 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:19.102188 master-0 kubenswrapper[7553]: I0318 17:55:19.102071 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:19.102188 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:19.102188 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:19.102188 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:19.102188 master-0 kubenswrapper[7553]: I0318 17:55:19.102146 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:19.972379 master-0 kubenswrapper[7553]: E0318 17:55:19.972246 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="800ms" Mar 18 17:55:20.103653 master-0 kubenswrapper[7553]: I0318 
17:55:20.103600 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:20.103653 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:20.103653 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:20.103653 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:20.104417 master-0 kubenswrapper[7553]: I0318 17:55:20.104383 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:21.102594 master-0 kubenswrapper[7553]: I0318 17:55:21.102524 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:21.102594 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:21.102594 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:21.102594 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:21.103137 master-0 kubenswrapper[7553]: I0318 17:55:21.103092 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:22.104526 master-0 kubenswrapper[7553]: I0318 17:55:22.102551 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:22.104526 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:22.104526 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:22.104526 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:22.104526 master-0 kubenswrapper[7553]: I0318 17:55:22.102646 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:23.052317 master-0 kubenswrapper[7553]: I0318 17:55:23.052186 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 17:55:23.102164 master-0 kubenswrapper[7553]: I0318 17:55:23.102082 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:23.102164 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:23.102164 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:23.102164 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:23.102497 master-0 kubenswrapper[7553]: I0318 17:55:23.102177 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:24.103166 master-0 kubenswrapper[7553]: I0318 17:55:24.103073 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:24.103166 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:24.103166 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:24.103166 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:24.103166 master-0 kubenswrapper[7553]: I0318 17:55:24.103163 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:25.102188 master-0 kubenswrapper[7553]: I0318 17:55:25.102114 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:25.102188 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:25.102188 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:25.102188 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:25.102585 master-0 kubenswrapper[7553]: I0318 17:55:25.102190 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:25.333245 master-0 kubenswrapper[7553]: I0318 17:55:25.333153 7553 generic.go:334] "Generic (PLEG): container finished" podID="ce5831a6-5a8d-4cda-9299-5d86437bcab2" containerID="fe07019623ba4afabfbf6551b7028ec6e274c77f8b3075096e77bb2fa5ab0961" exitCode=0 Mar 18 17:55:25.333245 master-0 kubenswrapper[7553]: I0318 17:55:25.333211 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" 
event={"ID":"ce5831a6-5a8d-4cda-9299-5d86437bcab2","Type":"ContainerDied","Data":"fe07019623ba4afabfbf6551b7028ec6e274c77f8b3075096e77bb2fa5ab0961"} Mar 18 17:55:25.334047 master-0 kubenswrapper[7553]: I0318 17:55:25.333331 7553 scope.go:117] "RemoveContainer" containerID="c7f5d502541807602a24d2f39710701583fd6aae06267e2b4ee473df7bbfd13e" Mar 18 17:55:25.334122 master-0 kubenswrapper[7553]: I0318 17:55:25.334038 7553 scope.go:117] "RemoveContainer" containerID="fe07019623ba4afabfbf6551b7028ec6e274c77f8b3075096e77bb2fa5ab0961" Mar 18 17:55:25.976328 master-0 kubenswrapper[7553]: E0318 17:55:25.976216 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:55:15Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:55:15Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:55:15Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:55:15Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 17:55:26.053138 master-0 kubenswrapper[7553]: I0318 17:55:26.053043 7553 scope.go:117] "RemoveContainer" 
containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:55:26.053414 master-0 kubenswrapper[7553]: E0318 17:55:26.053380 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:55:26.102812 master-0 kubenswrapper[7553]: I0318 17:55:26.102704 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:26.102812 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:26.102812 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:26.102812 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:26.103242 master-0 kubenswrapper[7553]: I0318 17:55:26.102808 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:26.341873 master-0 kubenswrapper[7553]: I0318 17:55:26.341787 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" event={"ID":"ce5831a6-5a8d-4cda-9299-5d86437bcab2","Type":"ContainerStarted","Data":"b73c8977b21f30cbbb9e502e36e5bebff03e78b4e5aff7d86803b34ab2c6326f"} Mar 18 17:55:26.342400 master-0 kubenswrapper[7553]: I0318 17:55:26.342135 7553 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:55:26.345435 master-0 kubenswrapper[7553]: I0318 17:55:26.345389 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 17:55:27.102199 master-0 kubenswrapper[7553]: I0318 17:55:27.102116 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:27.102199 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:27.102199 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:27.102199 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:27.102199 master-0 kubenswrapper[7553]: I0318 17:55:27.102202 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:28.103560 master-0 kubenswrapper[7553]: I0318 17:55:28.103469 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:28.103560 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:28.103560 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:28.103560 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:28.103560 master-0 kubenswrapper[7553]: I0318 17:55:28.103541 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" 
podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:29.102597 master-0 kubenswrapper[7553]: I0318 17:55:29.102523 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:29.102597 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:29.102597 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:29.102597 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:29.102944 master-0 kubenswrapper[7553]: I0318 17:55:29.102602 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:30.101803 master-0 kubenswrapper[7553]: I0318 17:55:30.101704 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:30.101803 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:30.101803 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:30.101803 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:30.103017 master-0 kubenswrapper[7553]: I0318 17:55:30.102429 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:30.774601 master-0 kubenswrapper[7553]: E0318 17:55:30.774519 7553 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 18 17:55:31.102205 master-0 kubenswrapper[7553]: I0318 17:55:31.102114 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:31.102205 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:31.102205 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:31.102205 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:31.103336 master-0 kubenswrapper[7553]: I0318 17:55:31.102216 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:32.102674 master-0 kubenswrapper[7553]: I0318 17:55:32.102566 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:32.102674 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:32.102674 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:32.102674 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:32.103723 master-0 kubenswrapper[7553]: I0318 17:55:32.102682 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:32.169633 master-0 kubenswrapper[7553]: E0318 17:55:32.169450 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189e00b849696f22 openshift-kube-controller-manager 9640 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:3b3363934623637fdc1d37ff8b16880a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:48:14 +0000 UTC,LastTimestamp:2026-03-18 17:54:04.631066259 +0000 UTC m=+734.776900932,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:55:33.103427 master-0 kubenswrapper[7553]: I0318 17:55:33.103358 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:33.103427 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:33.103427 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:33.103427 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:33.104151 master-0 kubenswrapper[7553]: I0318 17:55:33.103459 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" 
podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:34.103221 master-0 kubenswrapper[7553]: I0318 17:55:34.103155 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:34.103221 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:34.103221 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:34.103221 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:34.104635 master-0 kubenswrapper[7553]: I0318 17:55:34.103257 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:35.103660 master-0 kubenswrapper[7553]: I0318 17:55:35.103584 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:35.103660 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:35.103660 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:35.103660 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:35.104779 master-0 kubenswrapper[7553]: I0318 17:55:35.103684 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:35.976992 master-0 kubenswrapper[7553]: E0318 17:55:35.976908 7553 
kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:55:36.102891 master-0 kubenswrapper[7553]: I0318 17:55:36.102795 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:36.102891 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:36.102891 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:36.102891 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:36.103199 master-0 kubenswrapper[7553]: I0318 17:55:36.102917 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:37.102471 master-0 kubenswrapper[7553]: I0318 17:55:37.102348 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:37.102471 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:37.102471 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:37.102471 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:37.102471 master-0 kubenswrapper[7553]: I0318 17:55:37.102459 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 17:55:38.103165 master-0 kubenswrapper[7553]: I0318 17:55:38.103046 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:38.103165 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:38.103165 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:38.103165 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:38.104486 master-0 kubenswrapper[7553]: I0318 17:55:38.103179 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:38.169461 master-0 kubenswrapper[7553]: E0318 17:55:38.169356 7553 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 17:55:39.053595 master-0 kubenswrapper[7553]: I0318 17:55:39.053457 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:55:39.054006 master-0 kubenswrapper[7553]: E0318 17:55:39.053786 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:55:39.104064 master-0 
kubenswrapper[7553]: I0318 17:55:39.103953 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:39.104064 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:39.104064 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:39.104064 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:39.105362 master-0 kubenswrapper[7553]: I0318 17:55:39.104089 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:39.445727 master-0 kubenswrapper[7553]: I0318 17:55:39.445637 7553 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="8d1735bbfc7c3d66c7f4ca5e55aa86318920c68f2e40962c9c2d2008b6df984d" exitCode=0 Mar 18 17:55:39.445727 master-0 kubenswrapper[7553]: I0318 17:55:39.445729 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"8d1735bbfc7c3d66c7f4ca5e55aa86318920c68f2e40962c9c2d2008b6df984d"} Mar 18 17:55:39.446335 master-0 kubenswrapper[7553]: I0318 17:55:39.446162 7553 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53" Mar 18 17:55:39.446335 master-0 kubenswrapper[7553]: I0318 17:55:39.446210 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53" Mar 18 17:55:40.103314 master-0 kubenswrapper[7553]: I0318 17:55:40.103208 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:40.103314 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:40.103314 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:40.103314 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:40.103314 master-0 kubenswrapper[7553]: I0318 17:55:40.103315 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:41.102719 master-0 kubenswrapper[7553]: I0318 17:55:41.102640 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:41.102719 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:41.102719 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:41.102719 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:41.103940 master-0 kubenswrapper[7553]: I0318 17:55:41.102753 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:42.102598 master-0 kubenswrapper[7553]: I0318 17:55:42.102511 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:42.102598 master-0 
kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:42.102598 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:42.102598 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:42.103619 master-0 kubenswrapper[7553]: I0318 17:55:42.102601 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:42.376173 master-0 kubenswrapper[7553]: E0318 17:55:42.375983 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 18 17:55:43.103377 master-0 kubenswrapper[7553]: I0318 17:55:43.103298 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:43.103377 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:43.103377 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:43.103377 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:43.104074 master-0 kubenswrapper[7553]: I0318 17:55:43.103415 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:44.103206 master-0 kubenswrapper[7553]: I0318 17:55:44.103117 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:44.103206 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:44.103206 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:44.103206 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:44.103206 master-0 kubenswrapper[7553]: I0318 17:55:44.103195 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:44.483307 master-0 kubenswrapper[7553]: I0318 17:55:44.483220 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/4.log" Mar 18 17:55:44.484154 master-0 kubenswrapper[7553]: I0318 17:55:44.484107 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/3.log" Mar 18 17:55:44.485072 master-0 kubenswrapper[7553]: I0318 17:55:44.485008 7553 generic.go:334] "Generic (PLEG): container finished" podID="7e64a377-f497-4416-8f22-d5c7f52e0b65" containerID="af159afaa033efb036b878b04bdffa8fd814f7fc2cf559b2f4b190fa136e0905" exitCode=1 Mar 18 17:55:44.485172 master-0 kubenswrapper[7553]: I0318 17:55:44.485080 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" event={"ID":"7e64a377-f497-4416-8f22-d5c7f52e0b65","Type":"ContainerDied","Data":"af159afaa033efb036b878b04bdffa8fd814f7fc2cf559b2f4b190fa136e0905"} Mar 18 17:55:44.485172 master-0 kubenswrapper[7553]: I0318 17:55:44.485147 7553 scope.go:117] "RemoveContainer" 
containerID="283b61599e310047ed75a28fad3754db0725837893f44d2709551e02ebb45040" Mar 18 17:55:44.486121 master-0 kubenswrapper[7553]: I0318 17:55:44.486080 7553 scope.go:117] "RemoveContainer" containerID="af159afaa033efb036b878b04bdffa8fd814f7fc2cf559b2f4b190fa136e0905" Mar 18 17:55:44.488189 master-0 kubenswrapper[7553]: E0318 17:55:44.487558 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-qb7n6_openshift-ingress-operator(7e64a377-f497-4416-8f22-d5c7f52e0b65)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" podUID="7e64a377-f497-4416-8f22-d5c7f52e0b65" Mar 18 17:55:45.102139 master-0 kubenswrapper[7553]: I0318 17:55:45.102043 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:45.102139 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:45.102139 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:45.102139 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:45.102791 master-0 kubenswrapper[7553]: I0318 17:55:45.102145 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:45.493224 master-0 kubenswrapper[7553]: I0318 17:55:45.493113 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/4.log" Mar 18 17:55:45.978321 master-0 kubenswrapper[7553]: E0318 17:55:45.978214 7553 
kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:55:46.102908 master-0 kubenswrapper[7553]: I0318 17:55:46.102812 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:46.102908 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:46.102908 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:46.102908 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:46.103262 master-0 kubenswrapper[7553]: I0318 17:55:46.102906 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:46.504821 master-0 kubenswrapper[7553]: I0318 17:55:46.504731 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/6.log" Mar 18 17:55:46.506688 master-0 kubenswrapper[7553]: I0318 17:55:46.506625 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/cluster-cloud-controller-manager/0.log" Mar 18 17:55:46.506847 master-0 kubenswrapper[7553]: I0318 17:55:46.506716 7553 generic.go:334] "Generic (PLEG): container finished" podID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" 
containerID="19f22c241321c089522b514fbfd3f5b1ec6df250184c4997e1e9c0766f09796c" exitCode=1 Mar 18 17:55:46.506847 master-0 kubenswrapper[7553]: I0318 17:55:46.506777 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerDied","Data":"19f22c241321c089522b514fbfd3f5b1ec6df250184c4997e1e9c0766f09796c"} Mar 18 17:55:46.507936 master-0 kubenswrapper[7553]: I0318 17:55:46.507856 7553 scope.go:117] "RemoveContainer" containerID="19f22c241321c089522b514fbfd3f5b1ec6df250184c4997e1e9c0766f09796c" Mar 18 17:55:46.508056 master-0 kubenswrapper[7553]: I0318 17:55:46.508018 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:55:46.798452 master-0 kubenswrapper[7553]: E0318 17:55:46.798381 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:55:47.102147 master-0 kubenswrapper[7553]: I0318 17:55:47.102064 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:47.102147 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:47.102147 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:47.102147 master-0 kubenswrapper[7553]: healthz check failed 
Mar 18 17:55:47.102587 master-0 kubenswrapper[7553]: I0318 17:55:47.102145 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:47.519398 master-0 kubenswrapper[7553]: I0318 17:55:47.519175 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/6.log" Mar 18 17:55:47.521615 master-0 kubenswrapper[7553]: I0318 17:55:47.521564 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/cluster-cloud-controller-manager/0.log" Mar 18 17:55:47.521741 master-0 kubenswrapper[7553]: I0318 17:55:47.521641 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerStarted","Data":"208255ce677a5d473773ba2227a27f647a47362e1547f93ae5a2f69b0856b862"} Mar 18 17:55:47.522498 master-0 kubenswrapper[7553]: I0318 17:55:47.522459 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:55:47.522821 master-0 kubenswrapper[7553]: E0318 17:55:47.522757 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:55:48.103184 master-0 kubenswrapper[7553]: I0318 17:55:48.103086 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:48.103184 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:48.103184 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:48.103184 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:48.103184 master-0 kubenswrapper[7553]: I0318 17:55:48.103178 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:49.102870 master-0 kubenswrapper[7553]: I0318 17:55:49.102782 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:49.102870 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:49.102870 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:49.102870 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:49.102870 master-0 kubenswrapper[7553]: I0318 17:55:49.102844 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:50.103016 master-0 kubenswrapper[7553]: I0318 
17:55:50.102915 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:50.103016 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:50.103016 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:50.103016 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:50.103016 master-0 kubenswrapper[7553]: I0318 17:55:50.102996 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:50.546238 master-0 kubenswrapper[7553]: I0318 17:55:50.546188 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/6.log" Mar 18 17:55:50.547065 master-0 kubenswrapper[7553]: I0318 17:55:50.547028 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/config-sync-controllers/0.log" Mar 18 17:55:50.548350 master-0 kubenswrapper[7553]: I0318 17:55:50.548320 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/cluster-cloud-controller-manager/0.log" Mar 18 17:55:50.548487 master-0 kubenswrapper[7553]: I0318 17:55:50.548373 7553 generic.go:334] "Generic (PLEG): container finished" podID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" 
containerID="a81203ae354d597c88c3b98386e062196ad2d6278f0f6ad5fc4ad9c4b04a9ff2" exitCode=1 Mar 18 17:55:50.548487 master-0 kubenswrapper[7553]: I0318 17:55:50.548408 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerDied","Data":"a81203ae354d597c88c3b98386e062196ad2d6278f0f6ad5fc4ad9c4b04a9ff2"} Mar 18 17:55:50.549042 master-0 kubenswrapper[7553]: I0318 17:55:50.548993 7553 scope.go:117] "RemoveContainer" containerID="a81203ae354d597c88c3b98386e062196ad2d6278f0f6ad5fc4ad9c4b04a9ff2" Mar 18 17:55:50.549042 master-0 kubenswrapper[7553]: I0318 17:55:50.549023 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:55:50.847864 master-0 kubenswrapper[7553]: E0318 17:55:50.847782 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:55:51.102773 master-0 kubenswrapper[7553]: I0318 17:55:51.102596 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:51.102773 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:51.102773 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:51.102773 master-0 kubenswrapper[7553]: healthz check failed 
Mar 18 17:55:51.102773 master-0 kubenswrapper[7553]: I0318 17:55:51.102702 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:51.561093 master-0 kubenswrapper[7553]: I0318 17:55:51.560996 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/6.log" Mar 18 17:55:51.562043 master-0 kubenswrapper[7553]: I0318 17:55:51.561968 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/config-sync-controllers/0.log" Mar 18 17:55:51.562981 master-0 kubenswrapper[7553]: I0318 17:55:51.562906 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/cluster-cloud-controller-manager/0.log" Mar 18 17:55:51.563149 master-0 kubenswrapper[7553]: I0318 17:55:51.562996 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerStarted","Data":"fdf1b6ae66dfd5d3c63c42b40385582e7bcfb3c91df3cc37fda094f4df4c451c"} Mar 18 17:55:51.564078 master-0 kubenswrapper[7553]: I0318 17:55:51.564006 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:55:51.564510 master-0 kubenswrapper[7553]: E0318 17:55:51.564442 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:55:51.778941 master-0 kubenswrapper[7553]: I0318 17:55:51.778829 7553 status_manager.go:851] "Failed to get status for pod" podUID="a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods multus-admission-controller-5dbbb8b86f-gr8jc)" Mar 18 17:55:52.147371 master-0 kubenswrapper[7553]: I0318 17:55:52.147264 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:52.147371 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:52.147371 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:52.147371 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:52.147371 master-0 kubenswrapper[7553]: I0318 17:55:52.147317 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:52.573971 master-0 kubenswrapper[7553]: I0318 17:55:52.573877 7553 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-bk26c_efbcb147-d077-4749-9289-1682daccb657/manager/1.log" Mar 18 17:55:52.575888 master-0 kubenswrapper[7553]: I0318 17:55:52.575835 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-bk26c_efbcb147-d077-4749-9289-1682daccb657/manager/0.log" Mar 18 17:55:52.576043 master-0 kubenswrapper[7553]: I0318 17:55:52.575911 7553 generic.go:334] "Generic (PLEG): container finished" podID="efbcb147-d077-4749-9289-1682daccb657" containerID="e2d7bd945ff62383c4a337619ff4a53c695923ff63d0ce2cd5a9cb7b46a58867" exitCode=1 Mar 18 17:55:52.576043 master-0 kubenswrapper[7553]: I0318 17:55:52.575957 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" event={"ID":"efbcb147-d077-4749-9289-1682daccb657","Type":"ContainerDied","Data":"e2d7bd945ff62383c4a337619ff4a53c695923ff63d0ce2cd5a9cb7b46a58867"} Mar 18 17:55:52.576043 master-0 kubenswrapper[7553]: I0318 17:55:52.576010 7553 scope.go:117] "RemoveContainer" containerID="b1d92bc61050e9dcfcb1bd9705c2f2b94007d572857fef98c987e76770e1ad13" Mar 18 17:55:52.578555 master-0 kubenswrapper[7553]: I0318 17:55:52.578072 7553 scope.go:117] "RemoveContainer" containerID="e2d7bd945ff62383c4a337619ff4a53c695923ff63d0ce2cd5a9cb7b46a58867" Mar 18 17:55:53.102382 master-0 kubenswrapper[7553]: I0318 17:55:53.102183 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:53.102382 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:53.102382 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:53.102382 master-0 kubenswrapper[7553]: 
healthz check failed Mar 18 17:55:53.102382 master-0 kubenswrapper[7553]: I0318 17:55:53.102305 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:53.587303 master-0 kubenswrapper[7553]: I0318 17:55:53.587174 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-bk26c_efbcb147-d077-4749-9289-1682daccb657/manager/1.log" Mar 18 17:55:53.588234 master-0 kubenswrapper[7553]: I0318 17:55:53.587852 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" event={"ID":"efbcb147-d077-4749-9289-1682daccb657","Type":"ContainerStarted","Data":"1ff92fba61f35c09076515d79278962004d7620dc3e8328aee5b6e48ae4ed789"} Mar 18 17:55:53.588446 master-0 kubenswrapper[7553]: I0318 17:55:53.588378 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:55:54.102482 master-0 kubenswrapper[7553]: I0318 17:55:54.102365 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:54.102482 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:54.102482 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:54.102482 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:54.102482 master-0 kubenswrapper[7553]: I0318 17:55:54.102468 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" 
podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:55.103400 master-0 kubenswrapper[7553]: I0318 17:55:55.103153 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:55.103400 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:55.103400 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:55.103400 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:55.103400 master-0 kubenswrapper[7553]: I0318 17:55:55.103233 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:55.577920 master-0 kubenswrapper[7553]: E0318 17:55:55.577818 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 18 17:55:55.603675 master-0 kubenswrapper[7553]: I0318 17:55:55.603601 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-8vmsv_56cde2f7-1742-45d6-aa22-8270cfb424a7/manager/1.log" Mar 18 17:55:55.604336 master-0 kubenswrapper[7553]: I0318 17:55:55.604257 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-8vmsv_56cde2f7-1742-45d6-aa22-8270cfb424a7/manager/0.log" Mar 18 17:55:55.604685 master-0 kubenswrapper[7553]: I0318 17:55:55.604637 7553 
generic.go:334] "Generic (PLEG): container finished" podID="56cde2f7-1742-45d6-aa22-8270cfb424a7" containerID="c455513aeeb0a865514a01932b50b8b6b2a2bfaa8dc030657e848c60ae487c2b" exitCode=1 Mar 18 17:55:55.604685 master-0 kubenswrapper[7553]: I0318 17:55:55.604678 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" event={"ID":"56cde2f7-1742-45d6-aa22-8270cfb424a7","Type":"ContainerDied","Data":"c455513aeeb0a865514a01932b50b8b6b2a2bfaa8dc030657e848c60ae487c2b"} Mar 18 17:55:55.604833 master-0 kubenswrapper[7553]: I0318 17:55:55.604720 7553 scope.go:117] "RemoveContainer" containerID="9a3c783faf4f4f653f053e2f216b7497912efa5f57b792ca0a2a383ce66b1a4d" Mar 18 17:55:55.605159 master-0 kubenswrapper[7553]: I0318 17:55:55.605121 7553 scope.go:117] "RemoveContainer" containerID="c455513aeeb0a865514a01932b50b8b6b2a2bfaa8dc030657e848c60ae487c2b" Mar 18 17:55:55.978565 master-0 kubenswrapper[7553]: E0318 17:55:55.978516 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 17:55:56.053589 master-0 kubenswrapper[7553]: I0318 17:55:56.053542 7553 scope.go:117] "RemoveContainer" containerID="af159afaa033efb036b878b04bdffa8fd814f7fc2cf559b2f4b190fa136e0905" Mar 18 17:55:56.054023 master-0 kubenswrapper[7553]: E0318 17:55:56.053749 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-qb7n6_openshift-ingress-operator(7e64a377-f497-4416-8f22-d5c7f52e0b65)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" podUID="7e64a377-f497-4416-8f22-d5c7f52e0b65" Mar 18 17:55:56.102898 master-0 kubenswrapper[7553]: 
I0318 17:55:56.102741 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:56.102898 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:56.102898 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:56.102898 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:56.102898 master-0 kubenswrapper[7553]: I0318 17:55:56.102824 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:56.615256 master-0 kubenswrapper[7553]: I0318 17:55:56.615180 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/2.log" Mar 18 17:55:56.616086 master-0 kubenswrapper[7553]: I0318 17:55:56.615811 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/1.log" Mar 18 17:55:56.616086 master-0 kubenswrapper[7553]: I0318 17:55:56.615884 7553 generic.go:334] "Generic (PLEG): container finished" podID="7d39d93e-9be3-47e1-a44e-be2d18b55446" containerID="9da45c50b62258b35b6fa6e25a88e2e045b13f36511821a9d8c318812731dc4c" exitCode=1 Mar 18 17:55:56.616086 master-0 kubenswrapper[7553]: I0318 17:55:56.615975 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" 
event={"ID":"7d39d93e-9be3-47e1-a44e-be2d18b55446","Type":"ContainerDied","Data":"9da45c50b62258b35b6fa6e25a88e2e045b13f36511821a9d8c318812731dc4c"} Mar 18 17:55:56.616332 master-0 kubenswrapper[7553]: I0318 17:55:56.616102 7553 scope.go:117] "RemoveContainer" containerID="c9ad4dfdc283133c8325a6400b93e7ca1b286a38ba01514e1ca540aa2f6676d0" Mar 18 17:55:56.616834 master-0 kubenswrapper[7553]: I0318 17:55:56.616739 7553 scope.go:117] "RemoveContainer" containerID="9da45c50b62258b35b6fa6e25a88e2e045b13f36511821a9d8c318812731dc4c" Mar 18 17:55:56.617424 master-0 kubenswrapper[7553]: E0318 17:55:56.617344 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-vpjmp_openshift-cluster-storage-operator(7d39d93e-9be3-47e1-a44e-be2d18b55446)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" podUID="7d39d93e-9be3-47e1-a44e-be2d18b55446" Mar 18 17:55:56.619112 master-0 kubenswrapper[7553]: I0318 17:55:56.619062 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-8vmsv_56cde2f7-1742-45d6-aa22-8270cfb424a7/manager/1.log" Mar 18 17:55:56.619845 master-0 kubenswrapper[7553]: I0318 17:55:56.619795 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" event={"ID":"56cde2f7-1742-45d6-aa22-8270cfb424a7","Type":"ContainerStarted","Data":"6b0248e5166895bd2fb140a47ceab672700ce8178cfe1b600950282b1a6ab60e"} Mar 18 17:55:56.620149 master-0 kubenswrapper[7553]: I0318 17:55:56.620100 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:55:57.110163 master-0 kubenswrapper[7553]: I0318 17:55:57.110077 7553 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:57.110163 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:57.110163 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:57.110163 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:57.110644 master-0 kubenswrapper[7553]: I0318 17:55:57.110197 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:57.632600 master-0 kubenswrapper[7553]: I0318 17:55:57.632535 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/2.log" Mar 18 17:55:58.102746 master-0 kubenswrapper[7553]: I0318 17:55:58.102647 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:58.102746 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:58.102746 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:58.102746 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:58.102746 master-0 kubenswrapper[7553]: I0318 17:55:58.102731 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:55:59.103091 master-0 kubenswrapper[7553]: 
I0318 17:55:59.102989 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:55:59.103091 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:55:59.103091 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:55:59.103091 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:55:59.103091 master-0 kubenswrapper[7553]: I0318 17:55:59.103077 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:56:00.103309 master-0 kubenswrapper[7553]: I0318 17:56:00.103205 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:56:00.103309 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:56:00.103309 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:56:00.103309 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:56:00.104753 master-0 kubenswrapper[7553]: I0318 17:56:00.104631 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:56:01.101693 master-0 kubenswrapper[7553]: I0318 17:56:01.101580 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 17:56:01.101693 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:56:01.101693 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:56:01.101693 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:56:01.101693 master-0 kubenswrapper[7553]: I0318 17:56:01.101691 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:56:02.102702 master-0 kubenswrapper[7553]: I0318 17:56:02.102619 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:56:02.102702 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:56:02.102702 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:56:02.102702 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:56:02.102702 master-0 kubenswrapper[7553]: I0318 17:56:02.102704 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:56:03.055091 master-0 kubenswrapper[7553]: I0318 17:56:03.054230 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:56:03.055091 master-0 kubenswrapper[7553]: E0318 17:56:03.054790 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy 
pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:56:03.103167 master-0 kubenswrapper[7553]: I0318 17:56:03.103100 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:56:03.103167 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:56:03.103167 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:56:03.103167 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:56:03.103167 master-0 kubenswrapper[7553]: I0318 17:56:03.103174 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:56:04.103002 master-0 kubenswrapper[7553]: I0318 17:56:04.102944 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:56:04.103002 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:56:04.103002 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:56:04.103002 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:56:04.103829 master-0 kubenswrapper[7553]: I0318 17:56:04.103025 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:56:04.211772 master-0 kubenswrapper[7553]: I0318 17:56:04.211697 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 17:56:04.237412 master-0 kubenswrapper[7553]: I0318 17:56:04.237351 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 17:56:05.103177 master-0 kubenswrapper[7553]: I0318 17:56:05.103092 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:56:05.103177 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:56:05.103177 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:56:05.103177 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:56:05.104262 master-0 kubenswrapper[7553]: I0318 17:56:05.103187 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:56:05.979393 master-0 kubenswrapper[7553]: E0318 17:56:05.979269 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:56:05.979393 master-0 kubenswrapper[7553]: E0318 17:56:05.979348 7553 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 17:56:06.104093 
master-0 kubenswrapper[7553]: I0318 17:56:06.104005 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:56:06.104093 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:56:06.104093 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:56:06.104093 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:56:06.105102 master-0 kubenswrapper[7553]: I0318 17:56:06.104100 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:56:06.173539 master-0 kubenswrapper[7553]: E0318 17:56:06.173246 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189e00b8598cf8d5 openshift-kube-controller-manager 9642 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:3b3363934623637fdc1d37ff8b16880a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:48:15 +0000 UTC,LastTimestamp:2026-03-18 17:54:04.848363929 +0000 UTC m=+734.994198612,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:56:07.102500 master-0 kubenswrapper[7553]: I0318 17:56:07.102383 7553 patch_prober.go:28] 
interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:56:07.102500 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:56:07.102500 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:56:07.102500 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:56:07.103036 master-0 kubenswrapper[7553]: I0318 17:56:07.102515 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:56:08.103805 master-0 kubenswrapper[7553]: I0318 17:56:08.103708 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:56:08.103805 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:56:08.103805 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:56:08.103805 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:56:08.105069 master-0 kubenswrapper[7553]: I0318 17:56:08.103810 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:56:09.053858 master-0 kubenswrapper[7553]: I0318 17:56:09.053771 7553 scope.go:117] "RemoveContainer" containerID="9da45c50b62258b35b6fa6e25a88e2e045b13f36511821a9d8c318812731dc4c" Mar 18 17:56:09.054468 master-0 kubenswrapper[7553]: E0318 17:56:09.054413 7553 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-vpjmp_openshift-cluster-storage-operator(7d39d93e-9be3-47e1-a44e-be2d18b55446)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" podUID="7d39d93e-9be3-47e1-a44e-be2d18b55446" Mar 18 17:56:09.103538 master-0 kubenswrapper[7553]: I0318 17:56:09.103457 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:56:09.103538 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:56:09.103538 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:56:09.103538 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:56:09.103851 master-0 kubenswrapper[7553]: I0318 17:56:09.103585 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:56:10.053312 master-0 kubenswrapper[7553]: I0318 17:56:10.053209 7553 scope.go:117] "RemoveContainer" containerID="af159afaa033efb036b878b04bdffa8fd814f7fc2cf559b2f4b190fa136e0905" Mar 18 17:56:10.053699 master-0 kubenswrapper[7553]: E0318 17:56:10.053631 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-qb7n6_openshift-ingress-operator(7e64a377-f497-4416-8f22-d5c7f52e0b65)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" podUID="7e64a377-f497-4416-8f22-d5c7f52e0b65" Mar 18 
17:56:10.102184 master-0 kubenswrapper[7553]: I0318 17:56:10.102121 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:56:10.102184 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:56:10.102184 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:56:10.102184 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:56:10.102563 master-0 kubenswrapper[7553]: I0318 17:56:10.102206 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:56:11.102541 master-0 kubenswrapper[7553]: I0318 17:56:11.102443 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:56:11.102541 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:56:11.102541 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:56:11.102541 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:56:11.103529 master-0 kubenswrapper[7553]: I0318 17:56:11.103428 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:56:11.979940 master-0 kubenswrapper[7553]: E0318 17:56:11.979813 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 17:56:12.103083 master-0 kubenswrapper[7553]: I0318 17:56:12.102985 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:56:12.103083 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:56:12.103083 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:56:12.103083 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:56:12.104113 master-0 kubenswrapper[7553]: I0318 17:56:12.103088 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:56:13.103518 master-0 kubenswrapper[7553]: I0318 17:56:13.103430 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:56:13.103518 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:56:13.103518 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:56:13.103518 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:56:13.104290 master-0 kubenswrapper[7553]: I0318 17:56:13.103541 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Mar 18 17:56:13.450089 master-0 kubenswrapper[7553]: E0318 17:56:13.449950 7553 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 18 17:56:13.801209 master-0 kubenswrapper[7553]: I0318 17:56:13.801109 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"bf1214a2258760165a58c692fdf834c33da4c7a8a15a2275bd354ac819d9c857"}
Mar 18 17:56:13.802013 master-0 kubenswrapper[7553]: I0318 17:56:13.801954 7553 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53"
Mar 18 17:56:13.802013 master-0 kubenswrapper[7553]: I0318 17:56:13.801996 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53"
Mar 18 17:56:14.103788 master-0 kubenswrapper[7553]: I0318 17:56:14.103713 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:56:14.103788 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:56:14.103788 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:56:14.103788 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:56:14.104862 master-0 kubenswrapper[7553]: I0318 17:56:14.103816 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:56:14.814254 master-0 kubenswrapper[7553]: I0318 17:56:14.814149 7553 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="bf1214a2258760165a58c692fdf834c33da4c7a8a15a2275bd354ac819d9c857" exitCode=0
Mar 18 17:56:14.814254 master-0 kubenswrapper[7553]: I0318 17:56:14.814231 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"bf1214a2258760165a58c692fdf834c33da4c7a8a15a2275bd354ac819d9c857"}
Mar 18 17:56:15.054153 master-0 kubenswrapper[7553]: I0318 17:56:15.054099 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61"
Mar 18 17:56:15.055219 master-0 kubenswrapper[7553]: E0318 17:56:15.055140 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3"
Mar 18 17:56:15.101992 master-0 kubenswrapper[7553]: I0318 17:56:15.101846 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:56:15.101992 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:56:15.101992 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:56:15.101992 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:56:15.101992 master-0 kubenswrapper[7553]: I0318 17:56:15.101927 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:56:15.827943 master-0 kubenswrapper[7553]: I0318 17:56:15.827864 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log"
Mar 18 17:56:15.828823 master-0 kubenswrapper[7553]: I0318 17:56:15.827958 7553 generic.go:334] "Generic (PLEG): container finished" podID="3b3363934623637fdc1d37ff8b16880a" containerID="c8289571034ebc6739ae21b3260df385ebf8dcd2b89305874e7d44766e4b4396" exitCode=0
Mar 18 17:56:15.828823 master-0 kubenswrapper[7553]: I0318 17:56:15.828005 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerDied","Data":"c8289571034ebc6739ae21b3260df385ebf8dcd2b89305874e7d44766e4b4396"}
Mar 18 17:56:15.829029 master-0 kubenswrapper[7553]: I0318 17:56:15.828973 7553 scope.go:117] "RemoveContainer" containerID="c8289571034ebc6739ae21b3260df385ebf8dcd2b89305874e7d44766e4b4396"
Mar 18 17:56:16.102349 master-0 kubenswrapper[7553]: I0318 17:56:16.101900 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:56:16.102349 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:56:16.102349 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:56:16.102349 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:56:16.102349 master-0 kubenswrapper[7553]: I0318 17:56:16.101977 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:56:16.842085 master-0 kubenswrapper[7553]: I0318 17:56:16.841987 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log"
Mar 18 17:56:16.842085 master-0 kubenswrapper[7553]: I0318 17:56:16.842060 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"e0805d4cfeb69f415294ea780d12f67a0be7280a0259b3fb8434d02f506e058c"}
Mar 18 17:56:17.101340 master-0 kubenswrapper[7553]: I0318 17:56:17.101115 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:56:17.101340 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:56:17.101340 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:56:17.101340 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:56:17.101340 master-0 kubenswrapper[7553]: I0318 17:56:17.101172 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:56:18.102239 master-0 kubenswrapper[7553]: I0318 17:56:18.102182 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:56:18.102239 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:56:18.102239 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:56:18.102239 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:56:18.103554 master-0 kubenswrapper[7553]: I0318 17:56:18.103459 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:56:19.102879 master-0 kubenswrapper[7553]: I0318 17:56:19.102773 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:56:19.102879 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:56:19.102879 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:56:19.102879 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:56:19.102879 master-0 kubenswrapper[7553]: I0318 17:56:19.102873 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:56:20.103105 master-0 kubenswrapper[7553]: I0318 17:56:20.102998 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:56:20.103105 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:56:20.103105 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:56:20.103105 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:56:20.103105 master-0 kubenswrapper[7553]: I0318 17:56:20.103087 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:56:20.104463 master-0 kubenswrapper[7553]: I0318 17:56:20.103162 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4"
Mar 18 17:56:20.104463 master-0 kubenswrapper[7553]: I0318 17:56:20.104040 7553 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"f00456b24dab05375bbbeac67add4ae933f0340a0db97ddc7192a2436c6be1ec"} pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" containerMessage="Container router failed startup probe, will be restarted"
Mar 18 17:56:20.104463 master-0 kubenswrapper[7553]: I0318 17:56:20.104127 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" containerID="cri-o://f00456b24dab05375bbbeac67add4ae933f0340a0db97ddc7192a2436c6be1ec" gracePeriod=3600
Mar 18 17:56:21.052452 master-0 kubenswrapper[7553]: I0318 17:56:21.052404 7553 scope.go:117] "RemoveContainer" containerID="9da45c50b62258b35b6fa6e25a88e2e045b13f36511821a9d8c318812731dc4c"
Mar 18 17:56:21.887240 master-0 kubenswrapper[7553]: I0318 17:56:21.887193 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/2.log"
Mar 18 17:56:21.887752 master-0 kubenswrapper[7553]: I0318 17:56:21.887370 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" event={"ID":"7d39d93e-9be3-47e1-a44e-be2d18b55446","Type":"ContainerStarted","Data":"1d85680a94931192610dbfb5c97df34c749df00528eff7841b2245f6d30aa63a"}
Mar 18 17:56:21.890361 master-0 kubenswrapper[7553]: I0318 17:56:21.890328 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/2.log"
Mar 18 17:56:21.891086 master-0 kubenswrapper[7553]: I0318 17:56:21.891063 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/1.log"
Mar 18 17:56:21.891765 master-0 kubenswrapper[7553]: I0318 17:56:21.891725 7553 generic.go:334] "Generic (PLEG): container finished" podID="37b3753f-bf4f-4a9e-a4a8-d58296bada79" containerID="80c5f3220064c232d03dadbc88c1a47282c553de28295165f3109e332825aa0f" exitCode=1
Mar 18 17:56:21.891839 master-0 kubenswrapper[7553]: I0318 17:56:21.891790 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" event={"ID":"37b3753f-bf4f-4a9e-a4a8-d58296bada79","Type":"ContainerDied","Data":"80c5f3220064c232d03dadbc88c1a47282c553de28295165f3109e332825aa0f"}
Mar 18 17:56:21.891911 master-0 kubenswrapper[7553]: I0318 17:56:21.891887 7553 scope.go:117] "RemoveContainer" containerID="fd1baed9e081b7d0a16ba577c3675952403bd2f32763aeb842989654f0b5e115"
Mar 18 17:56:21.892758 master-0 kubenswrapper[7553]: I0318 17:56:21.892713 7553 scope.go:117] "RemoveContainer" containerID="80c5f3220064c232d03dadbc88c1a47282c553de28295165f3109e332825aa0f"
Mar 18 17:56:21.893103 master-0 kubenswrapper[7553]: E0318 17:56:21.893060 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-dh5zl_openshift-machine-api(37b3753f-bf4f-4a9e-a4a8-d58296bada79)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" podUID="37b3753f-bf4f-4a9e-a4a8-d58296bada79"
Mar 18 17:56:22.053122 master-0 kubenswrapper[7553]: I0318 17:56:22.053055 7553 scope.go:117] "RemoveContainer" containerID="af159afaa033efb036b878b04bdffa8fd814f7fc2cf559b2f4b190fa136e0905"
Mar 18 17:56:22.053403 master-0 kubenswrapper[7553]: E0318 17:56:22.053319 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-qb7n6_openshift-ingress-operator(7e64a377-f497-4416-8f22-d5c7f52e0b65)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" podUID="7e64a377-f497-4416-8f22-d5c7f52e0b65"
Mar 18 17:56:22.903541 master-0 kubenswrapper[7553]: I0318 17:56:22.903446 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/2.log"
Mar 18 17:56:24.714151 master-0 kubenswrapper[7553]: I0318 17:56:24.714082 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 17:56:24.714761 master-0 kubenswrapper[7553]: I0318 17:56:24.714169 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 17:56:27.715121 master-0 kubenswrapper[7553]: I0318 17:56:27.714984 7553 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 17:56:27.716020 master-0 kubenswrapper[7553]: I0318 17:56:27.715150 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 17:56:28.988101 master-0 kubenswrapper[7553]: E0318 17:56:28.987998 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 17:56:30.053343 master-0 kubenswrapper[7553]: I0318 17:56:30.053245 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61"
Mar 18 17:56:30.054370 master-0 kubenswrapper[7553]: E0318 17:56:30.053583 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3"
Mar 18 17:56:30.966567 master-0 kubenswrapper[7553]: I0318 17:56:30.966361 7553 generic.go:334] "Generic (PLEG): container finished" podID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerID="a1b33ace751148a425bdf00f07398d55bcc2dc83ff83d8278e7851ed219c1db7" exitCode=0
Mar 18 17:56:30.966567 master-0 kubenswrapper[7553]: I0318 17:56:30.966482 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" event={"ID":"1db0a246-ca43-4e7c-b09e-e80218ae99b1","Type":"ContainerDied","Data":"a1b33ace751148a425bdf00f07398d55bcc2dc83ff83d8278e7851ed219c1db7"}
Mar 18 17:56:30.966567 master-0 kubenswrapper[7553]: I0318 17:56:30.966555 7553 scope.go:117] "RemoveContainer" containerID="b3ebfba10cf9d40bcef8b7b1707842cdd5329c0fa6c5118e3bdbf4e1fe51f08d"
Mar 18 17:56:30.967432 master-0 kubenswrapper[7553]: I0318 17:56:30.967386 7553 scope.go:117] "RemoveContainer" containerID="a1b33ace751148a425bdf00f07398d55bcc2dc83ff83d8278e7851ed219c1db7"
Mar 18 17:56:30.970000 master-0 kubenswrapper[7553]: I0318 17:56:30.969937 7553 generic.go:334] "Generic (PLEG): container finished" podID="7b94e08c-7944-445e-bfb7-6c7c14880c65" containerID="94d941e21f1ab13a20fa6356fcedca0030606e420e596dcef8825d0ce5bcf87a" exitCode=0
Mar 18 17:56:30.970069 master-0 kubenswrapper[7553]: I0318 17:56:30.970025 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" event={"ID":"7b94e08c-7944-445e-bfb7-6c7c14880c65","Type":"ContainerDied","Data":"94d941e21f1ab13a20fa6356fcedca0030606e420e596dcef8825d0ce5bcf87a"}
Mar 18 17:56:30.971045 master-0 kubenswrapper[7553]: I0318 17:56:30.971003 7553 scope.go:117] "RemoveContainer" containerID="94d941e21f1ab13a20fa6356fcedca0030606e420e596dcef8825d0ce5bcf87a"
Mar 18 17:56:31.004118 master-0 kubenswrapper[7553]: I0318 17:56:31.004066 7553 scope.go:117] "RemoveContainer" containerID="10ef0540ad110067bbacf0ae0c51fcdf81ed8a0e014b67c2675d03499d28dfab"
Mar 18 17:56:31.047850 master-0 kubenswrapper[7553]: I0318 17:56:31.047743 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl"
Mar 18 17:56:31.047850 master-0 kubenswrapper[7553]: I0318 17:56:31.047825 7553 kubelet.go:2542] "SyncLoop
(probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl"
Mar 18 17:56:32.001150 master-0 kubenswrapper[7553]: I0318 17:56:32.001052 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" event={"ID":"7b94e08c-7944-445e-bfb7-6c7c14880c65","Type":"ContainerStarted","Data":"239f7ea31324e3996de953870c3f93e658d2e459c859a37f86bb765bd17f0310"}
Mar 18 17:56:32.009860 master-0 kubenswrapper[7553]: I0318 17:56:32.009776 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" event={"ID":"1db0a246-ca43-4e7c-b09e-e80218ae99b1","Type":"ContainerStarted","Data":"1612a1070b4cbefcdbe41900f384f9e6a016b7884254d5d1d20e3137f1bfc3ce"}
Mar 18 17:56:32.010344 master-0 kubenswrapper[7553]: I0318 17:56:32.010316 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl"
Mar 18 17:56:32.019616 master-0 kubenswrapper[7553]: I0318 17:56:32.019509 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl"
Mar 18 17:56:35.053541 master-0 kubenswrapper[7553]: I0318 17:56:35.053440 7553 scope.go:117] "RemoveContainer" containerID="af159afaa033efb036b878b04bdffa8fd814f7fc2cf559b2f4b190fa136e0905"
Mar 18 17:56:35.054443 master-0 kubenswrapper[7553]: E0318 17:56:35.053831 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-qb7n6_openshift-ingress-operator(7e64a377-f497-4416-8f22-d5c7f52e0b65)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" podUID="7e64a377-f497-4416-8f22-d5c7f52e0b65"
Mar 18 17:56:36.284485 master-0 kubenswrapper[7553]: E0318 17:56:36.284342 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:56:26Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:56:26Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:56:26Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:56:26Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 17:56:37.054135 master-0 kubenswrapper[7553]: I0318 17:56:37.054042 7553 scope.go:117] "RemoveContainer" containerID="80c5f3220064c232d03dadbc88c1a47282c553de28295165f3109e332825aa0f"
Mar 18 17:56:37.054508 master-0 kubenswrapper[7553]: E0318 17:56:37.054428 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-dh5zl_openshift-machine-api(37b3753f-bf4f-4a9e-a4a8-d58296bada79)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" podUID="37b3753f-bf4f-4a9e-a4a8-d58296bada79"
Mar 18 17:56:37.714466 master-0 kubenswrapper[7553]: I0318 17:56:37.714378 7553 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 17:56:37.715258 master-0 kubenswrapper[7553]: I0318 17:56:37.714467 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 17:56:40.177162 master-0 kubenswrapper[7553]: E0318 17:56:40.176990 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189e00b85a48fb60 openshift-kube-controller-manager 9643 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:3b3363934623637fdc1d37ff8b16880a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:48:15 +0000 UTC,LastTimestamp:2026-03-18 17:54:04.860406468 +0000 UTC m=+735.006241181,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:56:42.053066 master-0 kubenswrapper[7553]: I0318 17:56:42.052985 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61"
Mar 18 17:56:42.053883 master-0 kubenswrapper[7553]: E0318 17:56:42.053325 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3"
Mar 18 17:56:44.946102 master-0 kubenswrapper[7553]: E0318 17:56:44.945953 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[control-plane-machine-set-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" podUID="de189d27-4c60-49f1-9119-d1fde5c37b1e"
Mar 18 17:56:44.946995 master-0 kubenswrapper[7553]: E0318 17:56:44.946212 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[samples-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" podUID="e0e04440-c08b-452d-9be6-9f70a4027c92"
Mar 18 17:56:44.946995 master-0 kubenswrapper[7553]: E0318 17:56:44.946462 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cloud-credential-operator-serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" podUID="04cef0bd-f365-4bf6-864a-1895995015d6"
Mar 18 17:56:44.946995 master-0 kubenswrapper[7553]: E0318 17:56:44.946466 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" podUID="a94f7bff-ad61-4c53-a8eb-000a13f26971"
Mar 18 17:56:45.117543 master-0 kubenswrapper[7553]: I0318 17:56:45.117459 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"
Mar 18 17:56:45.117543 master-0 kubenswrapper[7553]: I0318 17:56:45.117520 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"
Mar 18 17:56:45.117895 master-0 kubenswrapper[7553]: I0318 17:56:45.117575 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"
Mar 18 17:56:45.117895 master-0 kubenswrapper[7553]: I0318 17:56:45.117477 7553 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"
Mar 18 17:56:45.952866 master-0 kubenswrapper[7553]: E0318 17:56:45.952715 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-api-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" podUID="2d21e77e-8b61-4f03-8f17-941b7a1d8b1d"
Mar 18 17:56:45.988767 master-0 kubenswrapper[7553]: E0318 17:56:45.988628 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 17:56:46.123829 master-0 kubenswrapper[7553]: I0318 17:56:46.123741 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p"
Mar 18 17:56:46.284963 master-0 kubenswrapper[7553]: E0318 17:56:46.284755 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 17:56:46.638307 master-0 kubenswrapper[7553]: I0318 17:56:46.638154 7553 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:54334->127.0.0.1:10357: read: connection reset by peer" start-of-body=
Mar 18 17:56:46.638639 master-0 kubenswrapper[7553]: I0318 17:56:46.638305 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:54334->127.0.0.1:10357: read: connection reset by peer"
Mar 18 17:56:46.638639 master-0 kubenswrapper[7553]: I0318 17:56:46.638410 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 17:56:46.640088 master-0 kubenswrapper[7553]: I0318 17:56:46.639793 7553 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"e0805d4cfeb69f415294ea780d12f67a0be7280a0259b3fb8434d02f506e058c"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 18 17:56:46.640088 master-0 kubenswrapper[7553]: I0318 17:56:46.640055 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" containerID="cri-o://e0805d4cfeb69f415294ea780d12f67a0be7280a0259b3fb8434d02f506e058c" gracePeriod=30
Mar 18 17:56:46.700759 master-0 kubenswrapper[7553]: I0318 17:56:46.700711 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"
Mar 18 17:56:46.700995 master-0 kubenswrapper[7553]: E0318 17:56:46.700953 7553 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Mar 18 17:56:46.701090 master-0 kubenswrapper[7553]: E0318 17:56:46.701041 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls podName:e0e04440-c08b-452d-9be6-9f70a4027c92 nodeName:}" failed. No retries permitted until 2026-03-18 17:58:48.7010175 +0000 UTC m=+1018.846852213 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-xnx8x" (UID: "e0e04440-c08b-452d-9be6-9f70a4027c92") : secret "samples-operator-tls" not found
Mar 18 17:56:46.701212 master-0 kubenswrapper[7553]: I0318 17:56:46.700973 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"
Mar 18 17:56:46.701468 master-0 kubenswrapper[7553]: I0318 17:56:46.701437 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"
Mar 18 17:56:46.701676 master-0 kubenswrapper[7553]: I0318 17:56:46.701647 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:56:46.701844 master-0 kubenswrapper[7553]: I0318 17:56:46.701819 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"
Mar 18 17:56:46.702072 master-0 kubenswrapper[7553]: E0318 17:56:46.701325 7553 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found
Mar 18 17:56:46.702167 master-0 kubenswrapper[7553]: E0318 17:56:46.702099 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert podName:04cef0bd-f365-4bf6-864a-1895995015d6 nodeName:}" failed. No retries permitted until 2026-03-18 17:58:48.702081808 +0000 UTC m=+1018.847916511 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-djgn7" (UID: "04cef0bd-f365-4bf6-864a-1895995015d6") : secret "cloud-credential-operator-serving-cert" not found
Mar 18 17:56:46.702167 master-0 kubenswrapper[7553]: E0318 17:56:46.701571 7553 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found
Mar 18 17:56:46.702167 master-0 kubenswrapper[7553]: E0318 17:56:46.702155 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls podName:de189d27-4c60-49f1-9119-d1fde5c37b1e nodeName:}" failed. No retries permitted until 2026-03-18 17:58:48.70214273 +0000 UTC m=+1018.847977433 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-zdqtc" (UID: "de189d27-4c60-49f1-9119-d1fde5c37b1e") : secret "control-plane-machine-set-operator-tls" not found
Mar 18 17:56:46.702434 master-0 kubenswrapper[7553]: E0318 17:56:46.701740 7553 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found
Mar 18 17:56:46.702434 master-0 kubenswrapper[7553]: E0318 17:56:46.701954 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Mar 18 17:56:46.702434 master-0 kubenswrapper[7553]: E0318 17:56:46.702346 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. No retries permitted until 2026-03-18 17:58:48.702264713 +0000 UTC m=+1018.848099446 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : secret "machine-approver-tls" not found
Mar 18 17:56:46.702434 master-0 kubenswrapper[7553]: E0318 17:56:46.702400 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert podName:a94f7bff-ad61-4c53-a8eb-000a13f26971 nodeName:}" failed. No retries permitted until 2026-03-18 17:58:48.702370587 +0000 UTC m=+1018.848205360 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert") pod "cluster-autoscaler-operator-866dc4744-l6hpt" (UID: "a94f7bff-ad61-4c53-a8eb-000a13f26971") : secret "cluster-autoscaler-operator-cert" not found
Mar 18 17:56:46.803912 master-0 kubenswrapper[7553]: I0318 17:56:46.803788 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p"
Mar 18 17:56:46.804366 master-0 kubenswrapper[7553]: E0318 17:56:46.804319 7553 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found
Mar 18 17:56:46.804464 master-0 kubenswrapper[7553]: E0318 17:56:46.804417 7553 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. No retries permitted until 2026-03-18 17:58:48.804391116 +0000 UTC m=+1018.950225829 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : secret "machine-api-operator-tls" not found Mar 18 17:56:47.053562 master-0 kubenswrapper[7553]: I0318 17:56:47.053475 7553 scope.go:117] "RemoveContainer" containerID="af159afaa033efb036b878b04bdffa8fd814f7fc2cf559b2f4b190fa136e0905" Mar 18 17:56:47.054182 master-0 kubenswrapper[7553]: E0318 17:56:47.053732 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-qb7n6_openshift-ingress-operator(7e64a377-f497-4416-8f22-d5c7f52e0b65)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" podUID="7e64a377-f497-4416-8f22-d5c7f52e0b65" Mar 18 17:56:47.133573 master-0 kubenswrapper[7553]: I0318 17:56:47.133490 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/1.log" Mar 18 17:56:47.136491 master-0 kubenswrapper[7553]: I0318 17:56:47.136438 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log" Mar 18 17:56:47.136587 master-0 kubenswrapper[7553]: I0318 17:56:47.136519 7553 generic.go:334] "Generic (PLEG): container finished" podID="3b3363934623637fdc1d37ff8b16880a" 
containerID="e0805d4cfeb69f415294ea780d12f67a0be7280a0259b3fb8434d02f506e058c" exitCode=255 Mar 18 17:56:47.136587 master-0 kubenswrapper[7553]: I0318 17:56:47.136571 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerDied","Data":"e0805d4cfeb69f415294ea780d12f67a0be7280a0259b3fb8434d02f506e058c"} Mar 18 17:56:47.136783 master-0 kubenswrapper[7553]: I0318 17:56:47.136620 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"4f77b12cf038ead0220c33175d86e43cfecac0a70163faf8fcc2bf5b516805b3"} Mar 18 17:56:47.136783 master-0 kubenswrapper[7553]: I0318 17:56:47.136655 7553 scope.go:117] "RemoveContainer" containerID="c8289571034ebc6739ae21b3260df385ebf8dcd2b89305874e7d44766e4b4396" Mar 18 17:56:47.806471 master-0 kubenswrapper[7553]: E0318 17:56:47.806358 7553 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 17:56:48.053442 master-0 kubenswrapper[7553]: I0318 17:56:48.053355 7553 scope.go:117] "RemoveContainer" containerID="80c5f3220064c232d03dadbc88c1a47282c553de28295165f3109e332825aa0f" Mar 18 17:56:48.147458 master-0 kubenswrapper[7553]: I0318 17:56:48.147372 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/1.log" Mar 18 17:56:48.150912 master-0 kubenswrapper[7553]: I0318 17:56:48.150850 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log" Mar 18 17:56:48.151494 
master-0 kubenswrapper[7553]: I0318 17:56:48.151418 7553 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53" Mar 18 17:56:48.151494 master-0 kubenswrapper[7553]: I0318 17:56:48.151454 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53" Mar 18 17:56:49.160815 master-0 kubenswrapper[7553]: I0318 17:56:49.160726 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/2.log" Mar 18 17:56:49.161845 master-0 kubenswrapper[7553]: I0318 17:56:49.161218 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" event={"ID":"37b3753f-bf4f-4a9e-a4a8-d58296bada79","Type":"ContainerStarted","Data":"921ec206afcda3ad2ed54f119faab2d531fbc22d2917452ab79dc39397439722"} Mar 18 17:56:51.788753 master-0 kubenswrapper[7553]: I0318 17:56:51.788562 7553 status_manager.go:851] "Failed to get status for pod" podUID="c9655d59-a594-499f-b474-dfc870239174" pod="openshift-kube-apiserver/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)" Mar 18 17:56:52.189573 master-0 kubenswrapper[7553]: I0318 17:56:52.189502 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/3.log" Mar 18 17:56:52.190224 master-0 kubenswrapper[7553]: I0318 17:56:52.190175 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/2.log" Mar 18 17:56:52.190349 master-0 
kubenswrapper[7553]: I0318 17:56:52.190235 7553 generic.go:334] "Generic (PLEG): container finished" podID="7d39d93e-9be3-47e1-a44e-be2d18b55446" containerID="1d85680a94931192610dbfb5c97df34c749df00528eff7841b2245f6d30aa63a" exitCode=1 Mar 18 17:56:52.190349 master-0 kubenswrapper[7553]: I0318 17:56:52.190281 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" event={"ID":"7d39d93e-9be3-47e1-a44e-be2d18b55446","Type":"ContainerDied","Data":"1d85680a94931192610dbfb5c97df34c749df00528eff7841b2245f6d30aa63a"} Mar 18 17:56:52.190349 master-0 kubenswrapper[7553]: I0318 17:56:52.190317 7553 scope.go:117] "RemoveContainer" containerID="9da45c50b62258b35b6fa6e25a88e2e045b13f36511821a9d8c318812731dc4c" Mar 18 17:56:52.191406 master-0 kubenswrapper[7553]: I0318 17:56:52.191346 7553 scope.go:117] "RemoveContainer" containerID="1d85680a94931192610dbfb5c97df34c749df00528eff7841b2245f6d30aa63a" Mar 18 17:56:52.191836 master-0 kubenswrapper[7553]: E0318 17:56:52.191778 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-vpjmp_openshift-cluster-storage-operator(7d39d93e-9be3-47e1-a44e-be2d18b55446)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" podUID="7d39d93e-9be3-47e1-a44e-be2d18b55446" Mar 18 17:56:53.203638 master-0 kubenswrapper[7553]: I0318 17:56:53.203532 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/3.log" Mar 18 17:56:54.714454 master-0 kubenswrapper[7553]: I0318 17:56:54.714374 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:56:54.718842 master-0 kubenswrapper[7553]: I0318 17:56:54.714477 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:56:55.053751 master-0 kubenswrapper[7553]: I0318 17:56:55.053656 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:56:55.054097 master-0 kubenswrapper[7553]: E0318 17:56:55.054040 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:56:56.285341 master-0 kubenswrapper[7553]: E0318 17:56:56.285263 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:56:57.714841 master-0 kubenswrapper[7553]: I0318 17:56:57.714695 7553 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 17:56:57.714841 master-0 kubenswrapper[7553]: I0318 17:56:57.714837 7553 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:57:01.054162 master-0 kubenswrapper[7553]: I0318 17:57:01.054107 7553 scope.go:117] "RemoveContainer" containerID="af159afaa033efb036b878b04bdffa8fd814f7fc2cf559b2f4b190fa136e0905" Mar 18 17:57:01.054666 master-0 kubenswrapper[7553]: E0318 17:57:01.054576 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-qb7n6_openshift-ingress-operator(7e64a377-f497-4416-8f22-d5c7f52e0b65)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" podUID="7e64a377-f497-4416-8f22-d5c7f52e0b65" Mar 18 17:57:02.094682 master-0 kubenswrapper[7553]: E0318 17:57:02.094562 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" podUID="9c0dbd44-7669-41d6-bf1b-d8c1343c9d98" Mar 18 17:57:02.275505 master-0 kubenswrapper[7553]: I0318 17:57:02.275393 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 17:57:02.989726 master-0 kubenswrapper[7553]: E0318 17:57:02.989618 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Mar 18 17:57:03.282820 master-0 kubenswrapper[7553]: I0318 17:57:03.282635 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 17:57:03.283703 master-0 kubenswrapper[7553]: E0318 17:57:03.282941 7553 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Mar 18 17:57:03.283703 master-0 kubenswrapper[7553]: E0318 17:57:03.283011 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 17:59:05.282987132 +0000 UTC m=+1035.428821835 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : secret "prometheus-operator-tls" not found Mar 18 17:57:04.054412 master-0 kubenswrapper[7553]: I0318 17:57:04.053446 7553 scope.go:117] "RemoveContainer" containerID="1d85680a94931192610dbfb5c97df34c749df00528eff7841b2245f6d30aa63a" Mar 18 17:57:04.054412 master-0 kubenswrapper[7553]: E0318 17:57:04.053901 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-vpjmp_openshift-cluster-storage-operator(7d39d93e-9be3-47e1-a44e-be2d18b55446)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" podUID="7d39d93e-9be3-47e1-a44e-be2d18b55446" Mar 18 17:57:06.286906 master-0 kubenswrapper[7553]: E0318 17:57:06.286699 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:57:06.310474 master-0 kubenswrapper[7553]: I0318 17:57:06.310326 7553 generic.go:334] "Generic (PLEG): container finished" podID="c57f282a-829b-41b2-827a-f4bc598245a2" containerID="f00456b24dab05375bbbeac67add4ae933f0340a0db97ddc7192a2436c6be1ec" exitCode=0 Mar 18 17:57:06.310474 master-0 kubenswrapper[7553]: I0318 17:57:06.310465 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" event={"ID":"c57f282a-829b-41b2-827a-f4bc598245a2","Type":"ContainerDied","Data":"f00456b24dab05375bbbeac67add4ae933f0340a0db97ddc7192a2436c6be1ec"} Mar 18 17:57:06.310676 master-0 
kubenswrapper[7553]: I0318 17:57:06.310515 7553 scope.go:117] "RemoveContainer" containerID="3be88236d1075355721a3a53c0d6a8b5bc0a4bd441e11b9ae0dd32cd30599a9f" Mar 18 17:57:07.054437 master-0 kubenswrapper[7553]: I0318 17:57:07.054334 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:57:07.054781 master-0 kubenswrapper[7553]: E0318 17:57:07.054641 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:57:07.320597 master-0 kubenswrapper[7553]: I0318 17:57:07.320411 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" event={"ID":"c57f282a-829b-41b2-827a-f4bc598245a2","Type":"ContainerStarted","Data":"40665a65803f46b85c5841b161668f9dc53195967c924003dedfb177dd66895a"} Mar 18 17:57:07.714743 master-0 kubenswrapper[7553]: I0318 17:57:07.714523 7553 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 17:57:07.714743 master-0 kubenswrapper[7553]: I0318 17:57:07.714623 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" 
probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:57:08.100463 master-0 kubenswrapper[7553]: I0318 17:57:08.100381 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:57:08.103996 master-0 kubenswrapper[7553]: I0318 17:57:08.103946 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:08.103996 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:08.103996 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:08.103996 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:08.104461 master-0 kubenswrapper[7553]: I0318 17:57:08.104417 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:09.103408 master-0 kubenswrapper[7553]: I0318 17:57:09.103287 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:09.103408 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:09.103408 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:09.103408 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:09.103998 master-0 kubenswrapper[7553]: I0318 17:57:09.103425 7553 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:10.103205 master-0 kubenswrapper[7553]: I0318 17:57:10.103111 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:10.103205 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:10.103205 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:10.103205 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:10.104213 master-0 kubenswrapper[7553]: I0318 17:57:10.103208 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:11.103890 master-0 kubenswrapper[7553]: I0318 17:57:11.103782 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:11.103890 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:11.103890 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:11.103890 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:11.103890 master-0 kubenswrapper[7553]: I0318 17:57:11.103850 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:12.102182 
master-0 kubenswrapper[7553]: I0318 17:57:12.102104 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:12.102182 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:12.102182 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:12.102182 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:12.102937 master-0 kubenswrapper[7553]: I0318 17:57:12.102201 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:13.099828 master-0 kubenswrapper[7553]: I0318 17:57:13.099706 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:57:13.102606 master-0 kubenswrapper[7553]: I0318 17:57:13.102517 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:13.102606 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:13.102606 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:13.102606 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:13.102980 master-0 kubenswrapper[7553]: I0318 17:57:13.102621 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:14.053897 master-0 
kubenswrapper[7553]: I0318 17:57:14.053815 7553 scope.go:117] "RemoveContainer" containerID="af159afaa033efb036b878b04bdffa8fd814f7fc2cf559b2f4b190fa136e0905" Mar 18 17:57:14.103039 master-0 kubenswrapper[7553]: I0318 17:57:14.102953 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:14.103039 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:14.103039 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:14.103039 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:14.110775 master-0 kubenswrapper[7553]: I0318 17:57:14.103043 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:14.181140 master-0 kubenswrapper[7553]: E0318 17:57:14.180932 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e0071e751715d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:43:12.442732893 +0000 UTC m=+82.588567606,LastTimestamp:2026-03-18 17:54:06.656017374 +0000 UTC 
m=+736.801852047,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 17:57:14.378414 master-0 kubenswrapper[7553]: I0318 17:57:14.378247 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/4.log"
Mar 18 17:57:14.378769 master-0 kubenswrapper[7553]: I0318 17:57:14.378724 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" event={"ID":"7e64a377-f497-4416-8f22-d5c7f52e0b65","Type":"ContainerStarted","Data":"029fdec7254f162c629eedb8568b32645f8d7d59c5b8e802c4b2084d177c4d77"}
Mar 18 17:57:15.102687 master-0 kubenswrapper[7553]: I0318 17:57:15.102573 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:15.102687 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:15.102687 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:15.102687 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:15.103062 master-0 kubenswrapper[7553]: I0318 17:57:15.102691 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:16.102120 master-0 kubenswrapper[7553]: I0318 17:57:16.102068 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:16.102120 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:16.102120 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:16.102120 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:16.103081 master-0 kubenswrapper[7553]: I0318 17:57:16.102136 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:16.287553 master-0 kubenswrapper[7553]: E0318 17:57:16.287435 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 17:57:16.287553 master-0 kubenswrapper[7553]: E0318 17:57:16.287508 7553 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 18 17:57:17.102625 master-0 kubenswrapper[7553]: I0318 17:57:17.102560 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:17.102625 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:17.102625 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:17.102625 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:17.103426 master-0 kubenswrapper[7553]: I0318 17:57:17.102647 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:17.275910 master-0 kubenswrapper[7553]: I0318 17:57:17.275811 7553 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:57074->127.0.0.1:10357: read: connection reset by peer" start-of-body=
Mar 18 17:57:17.276137 master-0 kubenswrapper[7553]: I0318 17:57:17.275928 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:57074->127.0.0.1:10357: read: connection reset by peer"
Mar 18 17:57:17.276137 master-0 kubenswrapper[7553]: I0318 17:57:17.276035 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 17:57:17.277624 master-0 kubenswrapper[7553]: I0318 17:57:17.277536 7553 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"4f77b12cf038ead0220c33175d86e43cfecac0a70163faf8fcc2bf5b516805b3"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 18 17:57:17.277913 master-0 kubenswrapper[7553]: I0318 17:57:17.277725 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" containerID="cri-o://4f77b12cf038ead0220c33175d86e43cfecac0a70163faf8fcc2bf5b516805b3" gracePeriod=30
Mar 18 17:57:17.409716 master-0 kubenswrapper[7553]: I0318 17:57:17.409555 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/2.log"
Mar 18 17:57:17.410004 master-0 kubenswrapper[7553]: I0318 17:57:17.409956 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/1.log"
Mar 18 17:57:17.412823 master-0 kubenswrapper[7553]: I0318 17:57:17.412728 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log"
Mar 18 17:57:17.413070 master-0 kubenswrapper[7553]: I0318 17:57:17.412860 7553 generic.go:334] "Generic (PLEG): container finished" podID="3b3363934623637fdc1d37ff8b16880a" containerID="4f77b12cf038ead0220c33175d86e43cfecac0a70163faf8fcc2bf5b516805b3" exitCode=255
Mar 18 17:57:17.413070 master-0 kubenswrapper[7553]: I0318 17:57:17.412920 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerDied","Data":"4f77b12cf038ead0220c33175d86e43cfecac0a70163faf8fcc2bf5b516805b3"}
Mar 18 17:57:17.413070 master-0 kubenswrapper[7553]: I0318 17:57:17.412990 7553 scope.go:117] "RemoveContainer" containerID="e0805d4cfeb69f415294ea780d12f67a0be7280a0259b3fb8434d02f506e058c"
Mar 18 17:57:18.102520 master-0 kubenswrapper[7553]: I0318 17:57:18.102445 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:18.102520 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:18.102520 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:18.102520 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:18.102520 master-0 kubenswrapper[7553]: I0318 17:57:18.102509 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:18.423535 master-0 kubenswrapper[7553]: I0318 17:57:18.423338 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/2.log"
Mar 18 17:57:18.425945 master-0 kubenswrapper[7553]: I0318 17:57:18.425861 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log"
Mar 18 17:57:18.426107 master-0 kubenswrapper[7553]: I0318 17:57:18.425977 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"230861fc75c4cf91f00521990347ee4c6eaab66ee62a9284086ae7fb81bebad6"}
Mar 18 17:57:19.053414 master-0 kubenswrapper[7553]: I0318 17:57:19.053318 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61"
Mar 18 17:57:19.053414 master-0 kubenswrapper[7553]: I0318 17:57:19.053390 7553 scope.go:117] "RemoveContainer" containerID="1d85680a94931192610dbfb5c97df34c749df00528eff7841b2245f6d30aa63a"
Mar 18 17:57:19.053801 master-0 kubenswrapper[7553]: E0318 17:57:19.053584 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3"
Mar 18 17:57:19.053801 master-0 kubenswrapper[7553]: E0318 17:57:19.053769 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-vpjmp_openshift-cluster-storage-operator(7d39d93e-9be3-47e1-a44e-be2d18b55446)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" podUID="7d39d93e-9be3-47e1-a44e-be2d18b55446"
Mar 18 17:57:19.103362 master-0 kubenswrapper[7553]: I0318 17:57:19.103235 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:19.103362 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:19.103362 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:19.103362 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:19.104378 master-0 kubenswrapper[7553]: I0318 17:57:19.103380 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:19.991561 master-0 kubenswrapper[7553]: E0318 17:57:19.991418 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 17:57:20.102710 master-0 kubenswrapper[7553]: I0318 17:57:20.102594 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:20.102710 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:20.102710 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:20.102710 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:20.103151 master-0 kubenswrapper[7553]: I0318 17:57:20.102720 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:21.103001 master-0 kubenswrapper[7553]: I0318 17:57:21.102923 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:21.103001 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:21.103001 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:21.103001 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:21.104045 master-0 kubenswrapper[7553]: I0318 17:57:21.103017 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:22.102699 master-0 kubenswrapper[7553]: I0318 17:57:22.102577 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:22.102699 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:22.102699 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:22.102699 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:22.103884 master-0 kubenswrapper[7553]: I0318 17:57:22.102695 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:22.154732 master-0 kubenswrapper[7553]: E0318 17:57:22.154619 7553 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 18 17:57:22.461227 master-0 kubenswrapper[7553]: I0318 17:57:22.461123 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"2b27f565e17a3ee26335a0bdd98708332824c925381f1ed9987f74ef23fd2f1a"}
Mar 18 17:57:23.102040 master-0 kubenswrapper[7553]: I0318 17:57:23.101943 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:23.102040 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:23.102040 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:23.102040 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:23.102460 master-0 kubenswrapper[7553]: I0318 17:57:23.102059 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:23.473430 master-0 kubenswrapper[7553]: I0318 17:57:23.473353 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"d2d8b53aa63600a513849d49d8afb7d6359ec5cfb72d80c1e09ca1dc600d4650"}
Mar 18 17:57:23.473430 master-0 kubenswrapper[7553]: I0318 17:57:23.473414 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"60b8beabf9bc2cea64f509c80af659d92f7e928ab7b8915a214c69b2dce558c8"}
Mar 18 17:57:23.473430 master-0 kubenswrapper[7553]: I0318 17:57:23.473427 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"25455a2ac49061fc7d9927f513d9b409d2c3568243e18d1a4eb9af39a224b7df"}
Mar 18 17:57:23.473430 master-0 kubenswrapper[7553]: I0318 17:57:23.473437 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"808efd19e16a0549495b7fa4df574bf88e4360937fd74bcc189cd80473a41295"}
Mar 18 17:57:23.474493 master-0 kubenswrapper[7553]: I0318 17:57:23.473806 7553 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53"
Mar 18 17:57:23.474493 master-0 kubenswrapper[7553]: I0318 17:57:23.473826 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53"
Mar 18 17:57:24.103120 master-0 kubenswrapper[7553]: I0318 17:57:24.103057 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:24.103120 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:24.103120 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:24.103120 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:24.104147 master-0 kubenswrapper[7553]: I0318 17:57:24.103133 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:24.713947 master-0 kubenswrapper[7553]: I0318 17:57:24.713857 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 17:57:24.713947 master-0 kubenswrapper[7553]: I0318 17:57:24.713970 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 17:57:25.102816 master-0 kubenswrapper[7553]: I0318 17:57:25.102695 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:25.102816 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:25.102816 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:25.102816 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:25.102816 master-0 kubenswrapper[7553]: I0318 17:57:25.102805 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:26.053690 master-0 kubenswrapper[7553]: E0318 17:57:26.053588 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-approver-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" podUID="92153864-7959-4482-bf24-c8db36435fb5"
Mar 18 17:57:26.102320 master-0 kubenswrapper[7553]: I0318 17:57:26.102180 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:26.102320 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:26.102320 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:26.102320 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:26.102320 master-0 kubenswrapper[7553]: I0318 17:57:26.102318 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:27.103104 master-0 kubenswrapper[7553]: I0318 17:57:27.103025 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:27.103104 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:27.103104 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:27.103104 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:27.104155 master-0 kubenswrapper[7553]: I0318 17:57:27.103173 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:27.714495 master-0 kubenswrapper[7553]: I0318 17:57:27.714375 7553 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 17:57:27.714929 master-0 kubenswrapper[7553]: I0318 17:57:27.714517 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 17:57:28.079322 master-0 kubenswrapper[7553]: I0318 17:57:28.079193 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 18 17:57:28.102990 master-0 kubenswrapper[7553]: I0318 17:57:28.102912 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:28.102990 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:28.102990 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:28.102990 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:28.103738 master-0 kubenswrapper[7553]: I0318 17:57:28.103018 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:29.102535 master-0 kubenswrapper[7553]: I0318 17:57:29.102425 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:29.102535 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:29.102535 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:29.102535 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:29.102970 master-0 kubenswrapper[7553]: I0318 17:57:29.102549 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:30.103516 master-0 kubenswrapper[7553]: I0318 17:57:30.103392 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:30.103516 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:30.103516 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:30.103516 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:30.103516 master-0 kubenswrapper[7553]: I0318 17:57:30.103514 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:31.052990 master-0 kubenswrapper[7553]: I0318 17:57:31.052957 7553 scope.go:117] "RemoveContainer" containerID="1d85680a94931192610dbfb5c97df34c749df00528eff7841b2245f6d30aa63a"
Mar 18 17:57:31.053466 master-0 kubenswrapper[7553]: E0318 17:57:31.053436 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-vpjmp_openshift-cluster-storage-operator(7d39d93e-9be3-47e1-a44e-be2d18b55446)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" podUID="7d39d93e-9be3-47e1-a44e-be2d18b55446"
Mar 18 17:57:31.101620 master-0 kubenswrapper[7553]: I0318 17:57:31.101516 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:31.101620 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:31.101620 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:31.101620 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:31.101967 master-0 kubenswrapper[7553]: I0318 17:57:31.101682 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:32.054438 master-0 kubenswrapper[7553]: I0318 17:57:32.054366 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61"
Mar 18 17:57:32.056013 master-0 kubenswrapper[7553]: E0318 17:57:32.054685 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3"
Mar 18 17:57:32.103161 master-0 kubenswrapper[7553]: I0318 17:57:32.103068 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:32.103161 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:32.103161 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:32.103161 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:32.103161 master-0 kubenswrapper[7553]: I0318 17:57:32.103141 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:33.079066 master-0 kubenswrapper[7553]: I0318 17:57:33.078960 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 18 17:57:33.105554 master-0 kubenswrapper[7553]: I0318 17:57:33.105425 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:33.105554 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:33.105554 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:33.105554 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:33.105554 master-0 kubenswrapper[7553]: I0318 17:57:33.105509 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:33.107217 master-0 kubenswrapper[7553]: I0318 17:57:33.107179 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 18 17:57:34.103038 master-0 kubenswrapper[7553]: I0318 17:57:34.102923 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:34.103038 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:34.103038 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:34.103038 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:34.104073 master-0 kubenswrapper[7553]: I0318 17:57:34.103081 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:35.101978 master-0 kubenswrapper[7553]: I0318 17:57:35.101893 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:35.101978 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:35.101978 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:35.101978 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:35.101978 master-0 kubenswrapper[7553]: I0318 17:57:35.101969 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:36.102837 master-0 kubenswrapper[7553]: I0318 17:57:36.102754 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:36.102837 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:36.102837 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:36.102837 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:36.102837 master-0 kubenswrapper[7553]: I0318 17:57:36.102831 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:36.993966 master-0 kubenswrapper[7553]: E0318 17:57:36.993866 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 17:57:37.053156 master-0 kubenswrapper[7553]: I0318 17:57:37.053081 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:57:37.102924 master-0 kubenswrapper[7553]: I0318 17:57:37.102841 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:37.102924 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:37.102924 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:37.102924 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:37.104369 master-0 kubenswrapper[7553]: I0318 17:57:37.102951 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:37.714699 master-0 kubenswrapper[7553]: I0318 17:57:37.714585 7553 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 17:57:37.714699 master-0 kubenswrapper[7553]: I0318 17:57:37.714690 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 17:57:38.102803 master-0 kubenswrapper[7553]: I0318 17:57:38.102732 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:38.102803 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:38.102803 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:38.102803 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:38.103410 master-0 kubenswrapper[7553]: I0318 17:57:38.102839 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:38.104292 master-0 kubenswrapper[7553]: I0318 17:57:38.104203 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 18 17:57:39.123156 master-0 kubenswrapper[7553]: I0318 17:57:39.123050 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:39.123156 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:39.123156 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:39.123156 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:39.124091 master-0 kubenswrapper[7553]: I0318 17:57:39.123166 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:40.103318 master-0 kubenswrapper[7553]: I0318 17:57:40.103164 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:40.103318 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:40.103318 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:40.103318 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:40.103318 master-0 kubenswrapper[7553]: I0318 17:57:40.103288 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:41.101995 master-0 kubenswrapper[7553]: I0318 17:57:41.101925 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:41.101995 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:41.101995 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:41.101995 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:41.102579 master-0 kubenswrapper[7553]: I0318 17:57:41.102017 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:42.102770 master-0 kubenswrapper[7553]: I0318 17:57:42.102702 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:42.102770 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:42.102770 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:42.102770 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:42.103785 master-0 kubenswrapper[7553]: I0318 17:57:42.102771 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:43.053236 master-0 kubenswrapper[7553]: I0318 17:57:43.053166 7553 scope.go:117] "RemoveContainer" containerID="1d85680a94931192610dbfb5c97df34c749df00528eff7841b2245f6d30aa63a"
Mar 18 17:57:43.103850 master-0 kubenswrapper[7553]: I0318 17:57:43.103582 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:57:43.103850 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:57:43.103850 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:57:43.103850 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:57:43.103850 master-0 kubenswrapper[7553]: I0318 17:57:43.103685 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:57:43.640035 master-0 kubenswrapper[7553]: I0318 17:57:43.639808 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/3.log"
Mar 18 17:57:43.640035 master-0 kubenswrapper[7553]: I0318 17:57:43.639904 7553 kubelet.go:2453] "SyncLoop
(PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" event={"ID":"7d39d93e-9be3-47e1-a44e-be2d18b55446","Type":"ContainerStarted","Data":"2b3a93a1f208538619b8c053a215774b7a5b76ad51695ab4679fe93b9c8aef84"} Mar 18 17:57:44.103832 master-0 kubenswrapper[7553]: I0318 17:57:44.103724 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:44.103832 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:44.103832 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:44.103832 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:44.103832 master-0 kubenswrapper[7553]: I0318 17:57:44.103823 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:45.103468 master-0 kubenswrapper[7553]: I0318 17:57:45.103404 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:45.103468 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:45.103468 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:45.103468 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:45.104416 master-0 kubenswrapper[7553]: I0318 17:57:45.104367 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 17:57:46.054164 master-0 kubenswrapper[7553]: I0318 17:57:46.054100 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:57:46.054757 master-0 kubenswrapper[7553]: E0318 17:57:46.054425 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:57:46.102545 master-0 kubenswrapper[7553]: I0318 17:57:46.102482 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:46.102545 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:46.102545 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:46.102545 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:46.103023 master-0 kubenswrapper[7553]: I0318 17:57:46.102988 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:47.102380 master-0 kubenswrapper[7553]: I0318 17:57:47.102328 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:47.102380 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:47.102380 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:47.102380 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:47.103049 master-0 kubenswrapper[7553]: I0318 17:57:47.103021 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:47.715762 master-0 kubenswrapper[7553]: I0318 17:57:47.715566 7553 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 17:57:47.715762 master-0 kubenswrapper[7553]: I0318 17:57:47.715745 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:57:47.716157 master-0 kubenswrapper[7553]: I0318 17:57:47.715821 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:57:47.716842 master-0 kubenswrapper[7553]: I0318 17:57:47.716795 7553 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" 
containerStatusID={"Type":"cri-o","ID":"230861fc75c4cf91f00521990347ee4c6eaab66ee62a9284086ae7fb81bebad6"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 18 17:57:47.716964 master-0 kubenswrapper[7553]: I0318 17:57:47.716907 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" containerID="cri-o://230861fc75c4cf91f00521990347ee4c6eaab66ee62a9284086ae7fb81bebad6" gracePeriod=30 Mar 18 17:57:47.841182 master-0 kubenswrapper[7553]: E0318 17:57:47.841110 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(3b3363934623637fdc1d37ff8b16880a)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" Mar 18 17:57:48.102897 master-0 kubenswrapper[7553]: I0318 17:57:48.102752 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:48.102897 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:48.102897 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:48.102897 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:48.102897 master-0 kubenswrapper[7553]: I0318 17:57:48.102861 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:48.184518 master-0 kubenswrapper[7553]: E0318 17:57:48.184361 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e007200840f23 kube-system 9185 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:43:12 +0000 UTC,LastTimestamp:2026-03-18 17:54:06.942801706 +0000 UTC m=+737.088636379,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:57:48.691055 master-0 kubenswrapper[7553]: I0318 17:57:48.690961 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/3.log" Mar 18 17:57:48.691893 master-0 kubenswrapper[7553]: I0318 17:57:48.691835 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/2.log" Mar 18 17:57:48.693866 master-0 kubenswrapper[7553]: I0318 17:57:48.693804 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log" Mar 18 17:57:48.693999 master-0 kubenswrapper[7553]: I0318 17:57:48.693895 7553 generic.go:334] "Generic (PLEG): container 
finished" podID="3b3363934623637fdc1d37ff8b16880a" containerID="230861fc75c4cf91f00521990347ee4c6eaab66ee62a9284086ae7fb81bebad6" exitCode=255 Mar 18 17:57:48.694090 master-0 kubenswrapper[7553]: I0318 17:57:48.693984 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerDied","Data":"230861fc75c4cf91f00521990347ee4c6eaab66ee62a9284086ae7fb81bebad6"} Mar 18 17:57:48.694090 master-0 kubenswrapper[7553]: I0318 17:57:48.694055 7553 scope.go:117] "RemoveContainer" containerID="4f77b12cf038ead0220c33175d86e43cfecac0a70163faf8fcc2bf5b516805b3" Mar 18 17:57:48.695254 master-0 kubenswrapper[7553]: I0318 17:57:48.695188 7553 scope.go:117] "RemoveContainer" containerID="230861fc75c4cf91f00521990347ee4c6eaab66ee62a9284086ae7fb81bebad6" Mar 18 17:57:48.695612 master-0 kubenswrapper[7553]: E0318 17:57:48.695550 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(3b3363934623637fdc1d37ff8b16880a)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" Mar 18 17:57:48.696753 master-0 kubenswrapper[7553]: I0318 17:57:48.696711 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/3.log" Mar 18 17:57:48.697612 master-0 kubenswrapper[7553]: I0318 17:57:48.697528 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/2.log" Mar 18 17:57:48.698589 master-0 kubenswrapper[7553]: I0318 
17:57:48.698252 7553 generic.go:334] "Generic (PLEG): container finished" podID="37b3753f-bf4f-4a9e-a4a8-d58296bada79" containerID="921ec206afcda3ad2ed54f119faab2d531fbc22d2917452ab79dc39397439722" exitCode=1 Mar 18 17:57:48.698589 master-0 kubenswrapper[7553]: I0318 17:57:48.698377 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" event={"ID":"37b3753f-bf4f-4a9e-a4a8-d58296bada79","Type":"ContainerDied","Data":"921ec206afcda3ad2ed54f119faab2d531fbc22d2917452ab79dc39397439722"} Mar 18 17:57:48.699143 master-0 kubenswrapper[7553]: I0318 17:57:48.699110 7553 scope.go:117] "RemoveContainer" containerID="921ec206afcda3ad2ed54f119faab2d531fbc22d2917452ab79dc39397439722" Mar 18 17:57:48.699573 master-0 kubenswrapper[7553]: E0318 17:57:48.699536 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-dh5zl_openshift-machine-api(37b3753f-bf4f-4a9e-a4a8-d58296bada79)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" podUID="37b3753f-bf4f-4a9e-a4a8-d58296bada79" Mar 18 17:57:48.721212 master-0 kubenswrapper[7553]: I0318 17:57:48.721067 7553 scope.go:117] "RemoveContainer" containerID="80c5f3220064c232d03dadbc88c1a47282c553de28295165f3109e332825aa0f" Mar 18 17:57:49.103007 master-0 kubenswrapper[7553]: I0318 17:57:49.102908 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:49.103007 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:49.103007 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:49.103007 master-0 
kubenswrapper[7553]: healthz check failed Mar 18 17:57:49.103007 master-0 kubenswrapper[7553]: I0318 17:57:49.103004 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:49.706460 master-0 kubenswrapper[7553]: I0318 17:57:49.706360 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/3.log" Mar 18 17:57:49.709022 master-0 kubenswrapper[7553]: I0318 17:57:49.708893 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/3.log" Mar 18 17:57:49.711151 master-0 kubenswrapper[7553]: I0318 17:57:49.711093 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log" Mar 18 17:57:50.103050 master-0 kubenswrapper[7553]: I0318 17:57:50.102917 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:50.103050 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:50.103050 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:50.103050 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:50.103050 master-0 kubenswrapper[7553]: I0318 17:57:50.103046 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:51.103387 master-0 kubenswrapper[7553]: I0318 17:57:51.102648 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:51.103387 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:51.103387 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:51.103387 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:51.104224 master-0 kubenswrapper[7553]: I0318 17:57:51.103441 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:51.792098 master-0 kubenswrapper[7553]: I0318 17:57:51.791975 7553 status_manager.go:851] "Failed to get status for pod" podUID="da246674-9ad1-4732-9a9e-d86d18fb0c55" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-retry-1-master-0)" Mar 18 17:57:52.459599 master-0 kubenswrapper[7553]: I0318 17:57:52.102538 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:52.459599 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:52.459599 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:52.459599 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:52.459599 master-0 kubenswrapper[7553]: I0318 
17:57:52.102637 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:53.102695 master-0 kubenswrapper[7553]: I0318 17:57:53.102553 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:53.102695 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:53.102695 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:53.102695 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:53.102695 master-0 kubenswrapper[7553]: I0318 17:57:53.102642 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:53.999341 master-0 kubenswrapper[7553]: E0318 17:57:53.995649 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 17:57:54.103550 master-0 kubenswrapper[7553]: I0318 17:57:54.103455 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:54.103550 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:54.103550 master-0 kubenswrapper[7553]: 
[+]process-running ok Mar 18 17:57:54.103550 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:54.104336 master-0 kubenswrapper[7553]: I0318 17:57:54.103574 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:54.713764 master-0 kubenswrapper[7553]: I0318 17:57:54.713673 7553 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:57:54.715419 master-0 kubenswrapper[7553]: I0318 17:57:54.715328 7553 scope.go:117] "RemoveContainer" containerID="230861fc75c4cf91f00521990347ee4c6eaab66ee62a9284086ae7fb81bebad6" Mar 18 17:57:54.715955 master-0 kubenswrapper[7553]: E0318 17:57:54.715866 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(3b3363934623637fdc1d37ff8b16880a)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" Mar 18 17:57:55.104746 master-0 kubenswrapper[7553]: I0318 17:57:55.104660 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:55.104746 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:55.104746 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:55.104746 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:55.105484 master-0 kubenswrapper[7553]: I0318 17:57:55.104778 7553 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:56.102853 master-0 kubenswrapper[7553]: I0318 17:57:56.102752 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:56.102853 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:56.102853 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:56.102853 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:56.102853 master-0 kubenswrapper[7553]: I0318 17:57:56.102850 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:57.103401 master-0 kubenswrapper[7553]: I0318 17:57:57.103262 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:57.103401 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:57.103401 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:57.103401 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:57.103401 master-0 kubenswrapper[7553]: I0318 17:57:57.103390 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 18 17:57:57.476829 master-0 kubenswrapper[7553]: E0318 17:57:57.476697 7553 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 17:57:57.790403 master-0 kubenswrapper[7553]: I0318 17:57:57.790195 7553 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53" Mar 18 17:57:57.790403 master-0 kubenswrapper[7553]: I0318 17:57:57.790322 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53" Mar 18 17:57:58.103650 master-0 kubenswrapper[7553]: I0318 17:57:58.103529 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:58.103650 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:57:58.103650 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:58.103650 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:58.104587 master-0 kubenswrapper[7553]: I0318 17:57:58.103690 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:57:59.102577 master-0 kubenswrapper[7553]: I0318 17:57:59.102484 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:57:59.102577 master-0 kubenswrapper[7553]: [-]has-synced failed: reason 
withheld Mar 18 17:57:59.102577 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:57:59.102577 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:57:59.103058 master-0 kubenswrapper[7553]: I0318 17:57:59.102593 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:00.103388 master-0 kubenswrapper[7553]: I0318 17:58:00.103301 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:00.103388 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:00.103388 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:00.103388 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:00.104187 master-0 kubenswrapper[7553]: I0318 17:58:00.103406 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:01.054492 master-0 kubenswrapper[7553]: I0318 17:58:01.054395 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:58:01.054891 master-0 kubenswrapper[7553]: E0318 17:58:01.054727 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:58:01.102915 master-0 kubenswrapper[7553]: I0318 17:58:01.102834 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:01.102915 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:01.102915 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:01.102915 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:01.103362 master-0 kubenswrapper[7553]: I0318 17:58:01.102919 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:02.102064 master-0 kubenswrapper[7553]: I0318 17:58:02.101978 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:02.102064 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:02.102064 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:02.102064 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:02.102064 master-0 kubenswrapper[7553]: I0318 17:58:02.102066 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:03.102047 master-0 kubenswrapper[7553]: I0318 
17:58:03.101961 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:03.102047 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:03.102047 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:03.102047 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:03.103103 master-0 kubenswrapper[7553]: I0318 17:58:03.102051 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:04.053205 master-0 kubenswrapper[7553]: I0318 17:58:04.053118 7553 scope.go:117] "RemoveContainer" containerID="921ec206afcda3ad2ed54f119faab2d531fbc22d2917452ab79dc39397439722" Mar 18 17:58:04.053597 master-0 kubenswrapper[7553]: E0318 17:58:04.053490 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-dh5zl_openshift-machine-api(37b3753f-bf4f-4a9e-a4a8-d58296bada79)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" podUID="37b3753f-bf4f-4a9e-a4a8-d58296bada79" Mar 18 17:58:04.103227 master-0 kubenswrapper[7553]: I0318 17:58:04.103107 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:04.103227 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:04.103227 master-0 
kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:04.103227 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:04.104321 master-0 kubenswrapper[7553]: I0318 17:58:04.103254 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:05.103089 master-0 kubenswrapper[7553]: I0318 17:58:05.102963 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:05.103089 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:05.103089 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:05.103089 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:05.103089 master-0 kubenswrapper[7553]: I0318 17:58:05.103071 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:06.054480 master-0 kubenswrapper[7553]: I0318 17:58:06.054393 7553 scope.go:117] "RemoveContainer" containerID="230861fc75c4cf91f00521990347ee4c6eaab66ee62a9284086ae7fb81bebad6" Mar 18 17:58:06.054936 master-0 kubenswrapper[7553]: E0318 17:58:06.054887 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(3b3363934623637fdc1d37ff8b16880a)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
podUID="3b3363934623637fdc1d37ff8b16880a" Mar 18 17:58:06.102454 master-0 kubenswrapper[7553]: I0318 17:58:06.102365 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:06.102454 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:06.102454 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:06.102454 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:06.103164 master-0 kubenswrapper[7553]: I0318 17:58:06.103112 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:07.102994 master-0 kubenswrapper[7553]: I0318 17:58:07.102912 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:07.102994 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:07.102994 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:07.102994 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:07.103907 master-0 kubenswrapper[7553]: I0318 17:58:07.103007 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:08.102951 master-0 kubenswrapper[7553]: I0318 17:58:08.102870 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:08.102951 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:08.102951 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:08.102951 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:08.104238 master-0 kubenswrapper[7553]: I0318 17:58:08.104198 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:09.102209 master-0 kubenswrapper[7553]: I0318 17:58:09.102141 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:09.102209 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:09.102209 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:09.102209 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:09.102616 master-0 kubenswrapper[7553]: I0318 17:58:09.102237 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:10.043387 master-0 kubenswrapper[7553]: I0318 17:58:10.043314 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 17:58:10.045223 master-0 kubenswrapper[7553]: I0318 17:58:10.044910 7553 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Mar 18 17:58:10.066645 master-0 
kubenswrapper[7553]: I0318 17:58:10.066553 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 17:58:10.107845 master-0 kubenswrapper[7553]: I0318 17:58:10.107730 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:10.107845 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:10.107845 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:10.107845 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:10.108409 master-0 kubenswrapper[7553]: I0318 17:58:10.107897 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:10.395337 master-0 kubenswrapper[7553]: I0318 17:58:10.395243 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc"] Mar 18 17:58:10.398438 master-0 kubenswrapper[7553]: I0318 17:58:10.398380 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-gr8jc"] Mar 18 17:58:10.888036 master-0 kubenswrapper[7553]: I0318 17:58:10.887967 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 17:58:10.910757 master-0 kubenswrapper[7553]: I0318 17:58:10.910671 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 17:58:10.996889 master-0 kubenswrapper[7553]: E0318 17:58:10.996791 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 17:58:11.102495 master-0 kubenswrapper[7553]: I0318 17:58:11.102384 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:11.102495 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:11.102495 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:11.102495 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:11.103288 master-0 kubenswrapper[7553]: I0318 17:58:11.102591 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:12.061887 master-0 kubenswrapper[7553]: I0318 17:58:12.061805 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e" path="/var/lib/kubelet/pods/a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e/volumes" Mar 18 17:58:12.062553 master-0 kubenswrapper[7553]: I0318 17:58:12.062518 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da246674-9ad1-4732-9a9e-d86d18fb0c55" path="/var/lib/kubelet/pods/da246674-9ad1-4732-9a9e-d86d18fb0c55/volumes" Mar 18 17:58:12.103298 master-0 kubenswrapper[7553]: I0318 17:58:12.103185 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:12.103298 master-0 
kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:12.103298 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:12.103298 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:12.103949 master-0 kubenswrapper[7553]: I0318 17:58:12.103335 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:13.103234 master-0 kubenswrapper[7553]: I0318 17:58:13.103120 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:13.103234 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:13.103234 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:13.103234 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:13.104032 master-0 kubenswrapper[7553]: I0318 17:58:13.103304 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:13.924595 master-0 kubenswrapper[7553]: I0318 17:58:13.924468 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/4.log" Mar 18 17:58:13.925647 master-0 kubenswrapper[7553]: I0318 17:58:13.925589 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/3.log" Mar 18 
17:58:13.925767 master-0 kubenswrapper[7553]: I0318 17:58:13.925666 7553 generic.go:334] "Generic (PLEG): container finished" podID="7d39d93e-9be3-47e1-a44e-be2d18b55446" containerID="2b3a93a1f208538619b8c053a215774b7a5b76ad51695ab4679fe93b9c8aef84" exitCode=1 Mar 18 17:58:13.925767 master-0 kubenswrapper[7553]: I0318 17:58:13.925708 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" event={"ID":"7d39d93e-9be3-47e1-a44e-be2d18b55446","Type":"ContainerDied","Data":"2b3a93a1f208538619b8c053a215774b7a5b76ad51695ab4679fe93b9c8aef84"} Mar 18 17:58:13.925767 master-0 kubenswrapper[7553]: I0318 17:58:13.925756 7553 scope.go:117] "RemoveContainer" containerID="1d85680a94931192610dbfb5c97df34c749df00528eff7841b2245f6d30aa63a" Mar 18 17:58:13.926716 master-0 kubenswrapper[7553]: I0318 17:58:13.926653 7553 scope.go:117] "RemoveContainer" containerID="2b3a93a1f208538619b8c053a215774b7a5b76ad51695ab4679fe93b9c8aef84" Mar 18 17:58:13.927033 master-0 kubenswrapper[7553]: E0318 17:58:13.926990 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-vpjmp_openshift-cluster-storage-operator(7d39d93e-9be3-47e1-a44e-be2d18b55446)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" podUID="7d39d93e-9be3-47e1-a44e-be2d18b55446" Mar 18 17:58:14.053739 master-0 kubenswrapper[7553]: I0318 17:58:14.053670 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:58:14.054082 master-0 kubenswrapper[7553]: E0318 17:58:14.053971 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy 
pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:58:14.103888 master-0 kubenswrapper[7553]: I0318 17:58:14.103802 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:14.103888 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:14.103888 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:14.103888 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:14.104765 master-0 kubenswrapper[7553]: I0318 17:58:14.103897 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:14.934037 master-0 kubenswrapper[7553]: I0318 17:58:14.933812 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/4.log" Mar 18 17:58:15.102262 master-0 kubenswrapper[7553]: I0318 17:58:15.102183 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:15.102262 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:15.102262 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:15.102262 master-0 
kubenswrapper[7553]: healthz check failed Mar 18 17:58:15.102778 master-0 kubenswrapper[7553]: I0318 17:58:15.102268 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:15.943793 master-0 kubenswrapper[7553]: I0318 17:58:15.943726 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/3.log" Mar 18 17:58:15.946124 master-0 kubenswrapper[7553]: I0318 17:58:15.946081 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager-cert-syncer/0.log" Mar 18 17:58:15.946997 master-0 kubenswrapper[7553]: I0318 17:58:15.946957 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log" Mar 18 17:58:15.947368 master-0 kubenswrapper[7553]: I0318 17:58:15.947218 7553 generic.go:334] "Generic (PLEG): container finished" podID="3b3363934623637fdc1d37ff8b16880a" containerID="243a7398c383ba8c402d23dcf0f7c5b93b0d9dae2f29d0c0170f8b972de06495" exitCode=1 Mar 18 17:58:15.947368 master-0 kubenswrapper[7553]: I0318 17:58:15.947289 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerDied","Data":"243a7398c383ba8c402d23dcf0f7c5b93b0d9dae2f29d0c0170f8b972de06495"} Mar 18 17:58:15.948486 master-0 kubenswrapper[7553]: I0318 17:58:15.948434 7553 scope.go:117] "RemoveContainer" containerID="230861fc75c4cf91f00521990347ee4c6eaab66ee62a9284086ae7fb81bebad6" Mar 
18 17:58:15.948486 master-0 kubenswrapper[7553]: I0318 17:58:15.948477 7553 scope.go:117] "RemoveContainer" containerID="243a7398c383ba8c402d23dcf0f7c5b93b0d9dae2f29d0c0170f8b972de06495" Mar 18 17:58:16.053687 master-0 kubenswrapper[7553]: I0318 17:58:16.053624 7553 scope.go:117] "RemoveContainer" containerID="921ec206afcda3ad2ed54f119faab2d531fbc22d2917452ab79dc39397439722" Mar 18 17:58:16.053979 master-0 kubenswrapper[7553]: E0318 17:58:16.053947 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-dh5zl_openshift-machine-api(37b3753f-bf4f-4a9e-a4a8-d58296bada79)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" podUID="37b3753f-bf4f-4a9e-a4a8-d58296bada79" Mar 18 17:58:16.102419 master-0 kubenswrapper[7553]: I0318 17:58:16.102370 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:16.102419 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:16.102419 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:16.102419 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:16.102676 master-0 kubenswrapper[7553]: I0318 17:58:16.102429 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:16.193022 master-0 kubenswrapper[7553]: E0318 17:58:16.192925 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with 
CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(3b3363934623637fdc1d37ff8b16880a)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" Mar 18 17:58:16.958232 master-0 kubenswrapper[7553]: I0318 17:58:16.958137 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/3.log" Mar 18 17:58:16.960428 master-0 kubenswrapper[7553]: I0318 17:58:16.960383 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager-cert-syncer/0.log" Mar 18 17:58:16.961423 master-0 kubenswrapper[7553]: I0318 17:58:16.961357 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log" Mar 18 17:58:16.961561 master-0 kubenswrapper[7553]: I0318 17:58:16.961465 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"5e9de81daca56e7a14e9bb6ed5c647f47dd366c571087c15f6fae5baeebccd1e"} Mar 18 17:58:16.962261 master-0 kubenswrapper[7553]: I0318 17:58:16.962206 7553 scope.go:117] "RemoveContainer" containerID="230861fc75c4cf91f00521990347ee4c6eaab66ee62a9284086ae7fb81bebad6" Mar 18 17:58:16.964455 master-0 kubenswrapper[7553]: E0318 17:58:16.964377 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller 
pod=kube-controller-manager-master-0_openshift-kube-controller-manager(3b3363934623637fdc1d37ff8b16880a)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" Mar 18 17:58:17.102638 master-0 kubenswrapper[7553]: I0318 17:58:17.102529 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:17.102638 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:17.102638 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:17.102638 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:17.103118 master-0 kubenswrapper[7553]: I0318 17:58:17.102654 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:18.102925 master-0 kubenswrapper[7553]: I0318 17:58:18.102835 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:18.102925 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:18.102925 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:18.102925 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:18.104094 master-0 kubenswrapper[7553]: I0318 17:58:18.103052 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 18 17:58:19.102544 master-0 kubenswrapper[7553]: I0318 17:58:19.102458 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:19.102544 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:19.102544 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:19.102544 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:19.103520 master-0 kubenswrapper[7553]: I0318 17:58:19.102571 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:20.102927 master-0 kubenswrapper[7553]: I0318 17:58:20.102810 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:20.102927 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:20.102927 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:20.102927 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:20.102927 master-0 kubenswrapper[7553]: I0318 17:58:20.102921 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:21.101945 master-0 kubenswrapper[7553]: I0318 17:58:21.101802 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:21.101945 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:21.101945 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:21.101945 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:21.101945 master-0 kubenswrapper[7553]: I0318 17:58:21.101914 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:22.101840 master-0 kubenswrapper[7553]: I0318 17:58:22.101745 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:22.101840 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:22.101840 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:22.101840 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:22.102434 master-0 kubenswrapper[7553]: I0318 17:58:22.101859 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:22.187593 master-0 kubenswrapper[7553]: E0318 17:58:22.187386 7553 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e00720177d21f kube-system 9187 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 17:43:12 +0000 UTC,LastTimestamp:2026-03-18 17:54:06.954869606 +0000 UTC m=+737.100704279,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 17:58:23.053031 master-0 kubenswrapper[7553]: E0318 17:58:23.052902 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 17:58:23.103763 master-0 kubenswrapper[7553]: I0318 17:58:23.103669 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:23.103763 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:23.103763 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:23.103763 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:23.105118 master-0 kubenswrapper[7553]: I0318 17:58:23.103779 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:24.020862 master-0 kubenswrapper[7553]: I0318 17:58:24.020754 7553 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53" Mar 18 17:58:24.020862 master-0 
kubenswrapper[7553]: I0318 17:58:24.020838 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="b7082744-e417-4683-8069-858394c5fc53" Mar 18 17:58:24.102974 master-0 kubenswrapper[7553]: I0318 17:58:24.102885 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:24.102974 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:24.102974 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:24.102974 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:24.103504 master-0 kubenswrapper[7553]: I0318 17:58:24.102979 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:25.054175 master-0 kubenswrapper[7553]: I0318 17:58:25.054082 7553 scope.go:117] "RemoveContainer" containerID="2b3a93a1f208538619b8c053a215774b7a5b76ad51695ab4679fe93b9c8aef84" Mar 18 17:58:25.055240 master-0 kubenswrapper[7553]: E0318 17:58:25.054482 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-vpjmp_openshift-cluster-storage-operator(7d39d93e-9be3-47e1-a44e-be2d18b55446)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" podUID="7d39d93e-9be3-47e1-a44e-be2d18b55446" Mar 18 17:58:25.103057 master-0 kubenswrapper[7553]: I0318 17:58:25.102926 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:25.103057 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:25.103057 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:25.103057 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:25.103057 master-0 kubenswrapper[7553]: I0318 17:58:25.103040 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:26.053824 master-0 kubenswrapper[7553]: I0318 17:58:26.053706 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:58:26.054194 master-0 kubenswrapper[7553]: E0318 17:58:26.054035 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:58:26.102549 master-0 kubenswrapper[7553]: I0318 17:58:26.102463 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:26.102549 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:26.102549 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:26.102549 master-0 kubenswrapper[7553]: healthz 
check failed Mar 18 17:58:26.102986 master-0 kubenswrapper[7553]: I0318 17:58:26.102563 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:27.053379 master-0 kubenswrapper[7553]: I0318 17:58:27.053264 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-7qwxn_7c6694a8-ccd0-491b-9f21-215450f6ce67/cluster-node-tuning-operator/1.log" Mar 18 17:58:27.053909 master-0 kubenswrapper[7553]: I0318 17:58:27.053878 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-7qwxn_7c6694a8-ccd0-491b-9f21-215450f6ce67/cluster-node-tuning-operator/0.log" Mar 18 17:58:27.053975 master-0 kubenswrapper[7553]: I0318 17:58:27.053922 7553 generic.go:334] "Generic (PLEG): container finished" podID="7c6694a8-ccd0-491b-9f21-215450f6ce67" containerID="54489b0edcfa24dfcbbb34581a482bdade21886266c2b553e30f0c64c39e011f" exitCode=1 Mar 18 17:58:27.053975 master-0 kubenswrapper[7553]: I0318 17:58:27.053948 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" event={"ID":"7c6694a8-ccd0-491b-9f21-215450f6ce67","Type":"ContainerDied","Data":"54489b0edcfa24dfcbbb34581a482bdade21886266c2b553e30f0c64c39e011f"} Mar 18 17:58:27.054079 master-0 kubenswrapper[7553]: I0318 17:58:27.053978 7553 scope.go:117] "RemoveContainer" containerID="6af98a7327b83a0f9fcfd3425055ee2bbebd96176bf419d80ea4f980729da819" Mar 18 17:58:27.054627 master-0 kubenswrapper[7553]: I0318 17:58:27.054511 7553 scope.go:117] "RemoveContainer" containerID="54489b0edcfa24dfcbbb34581a482bdade21886266c2b553e30f0c64c39e011f" Mar 18 17:58:27.102610 master-0 
kubenswrapper[7553]: I0318 17:58:27.102538 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:27.102610 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:27.102610 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:27.102610 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:27.102897 master-0 kubenswrapper[7553]: I0318 17:58:27.102647 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:27.998894 master-0 kubenswrapper[7553]: E0318 17:58:27.998744 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 17:58:28.063608 master-0 kubenswrapper[7553]: I0318 17:58:28.063548 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-7qwxn_7c6694a8-ccd0-491b-9f21-215450f6ce67/cluster-node-tuning-operator/1.log" Mar 18 17:58:28.064189 master-0 kubenswrapper[7553]: I0318 17:58:28.063679 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" event={"ID":"7c6694a8-ccd0-491b-9f21-215450f6ce67","Type":"ContainerStarted","Data":"8d7d104b9eb6bb99ceff8bf6b623056e935513f672dba7bb8d4a7b18efe2a2b6"} Mar 18 17:58:28.102586 master-0 kubenswrapper[7553]: I0318 17:58:28.102491 7553 patch_prober.go:28] 
interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:28.102586 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:28.102586 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:28.102586 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:28.102901 master-0 kubenswrapper[7553]: I0318 17:58:28.102584 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:29.053615 master-0 kubenswrapper[7553]: I0318 17:58:29.053541 7553 scope.go:117] "RemoveContainer" containerID="230861fc75c4cf91f00521990347ee4c6eaab66ee62a9284086ae7fb81bebad6" Mar 18 17:58:29.053615 master-0 kubenswrapper[7553]: I0318 17:58:29.053616 7553 scope.go:117] "RemoveContainer" containerID="921ec206afcda3ad2ed54f119faab2d531fbc22d2917452ab79dc39397439722" Mar 18 17:58:29.105212 master-0 kubenswrapper[7553]: I0318 17:58:29.105114 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:29.105212 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:29.105212 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:29.105212 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:29.105920 master-0 kubenswrapper[7553]: I0318 17:58:29.105238 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:30.081409 master-0 kubenswrapper[7553]: I0318 17:58:30.081257 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/3.log" Mar 18 17:58:30.083643 master-0 kubenswrapper[7553]: I0318 17:58:30.083585 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager-cert-syncer/0.log" Mar 18 17:58:30.084591 master-0 kubenswrapper[7553]: I0318 17:58:30.084545 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log" Mar 18 17:58:30.084743 master-0 kubenswrapper[7553]: I0318 17:58:30.084673 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"3c51974ba55ce77de4db6060fda42dd205fc3b6d69ff15656f21b3a7b488ddc3"} Mar 18 17:58:30.087538 master-0 kubenswrapper[7553]: I0318 17:58:30.087490 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/3.log" Mar 18 17:58:30.088050 master-0 kubenswrapper[7553]: I0318 17:58:30.087979 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" event={"ID":"37b3753f-bf4f-4a9e-a4a8-d58296bada79","Type":"ContainerStarted","Data":"805eda78d119b52b8d61d27184b36b337491b85a9c6934b07133cc95929fe22a"} Mar 18 17:58:30.103794 master-0 kubenswrapper[7553]: I0318 17:58:30.103718 7553 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:30.103794 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:30.103794 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:30.103794 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:30.103794 master-0 kubenswrapper[7553]: I0318 17:58:30.103798 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:31.100112 master-0 kubenswrapper[7553]: I0318 17:58:31.100069 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-qk279_9b424d6c-7440-4c98-ac19-2d0642c696fd/kube-controller-manager-operator/2.log" Mar 18 17:58:31.100940 master-0 kubenswrapper[7553]: I0318 17:58:31.100855 7553 generic.go:334] "Generic (PLEG): container finished" podID="9b424d6c-7440-4c98-ac19-2d0642c696fd" containerID="733c4831624297f5112d8028d0486f0fad40d94494178f2290df8fe70a7c80e2" exitCode=0 Mar 18 17:58:31.101032 master-0 kubenswrapper[7553]: I0318 17:58:31.100931 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" event={"ID":"9b424d6c-7440-4c98-ac19-2d0642c696fd","Type":"ContainerDied","Data":"733c4831624297f5112d8028d0486f0fad40d94494178f2290df8fe70a7c80e2"} Mar 18 17:58:31.101131 master-0 kubenswrapper[7553]: I0318 17:58:31.101119 7553 scope.go:117] "RemoveContainer" containerID="1fd744dbcfad29e0a4211253fc988f9ef696171ed5032f9e61793918d136f6fa" Mar 18 17:58:31.102670 master-0 kubenswrapper[7553]: I0318 17:58:31.102618 
7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:31.102670 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:31.102670 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:31.102670 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:31.102869 master-0 kubenswrapper[7553]: I0318 17:58:31.102704 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:31.103795 master-0 kubenswrapper[7553]: I0318 17:58:31.103433 7553 scope.go:117] "RemoveContainer" containerID="733c4831624297f5112d8028d0486f0fad40d94494178f2290df8fe70a7c80e2" Mar 18 17:58:32.103729 master-0 kubenswrapper[7553]: I0318 17:58:32.103460 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:32.103729 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:32.103729 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:32.103729 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:32.104856 master-0 kubenswrapper[7553]: I0318 17:58:32.103754 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:32.111246 master-0 kubenswrapper[7553]: I0318 17:58:32.111174 7553 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279" event={"ID":"9b424d6c-7440-4c98-ac19-2d0642c696fd","Type":"ContainerStarted","Data":"5d4c1ff7cef209c476da4126b0f2b7c9b6a816547d3c8aff58a7b704dbb9e503"} Mar 18 17:58:33.102081 master-0 kubenswrapper[7553]: I0318 17:58:33.101997 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:33.102081 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:33.102081 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:33.102081 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:33.102569 master-0 kubenswrapper[7553]: I0318 17:58:33.102089 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:33.127693 master-0 kubenswrapper[7553]: I0318 17:58:33.127537 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-rws9x_0100a259-1358-45e8-8191-4e1f9a14ec89/etcd-operator/2.log" Mar 18 17:58:33.127693 master-0 kubenswrapper[7553]: I0318 17:58:33.127634 7553 generic.go:334] "Generic (PLEG): container finished" podID="0100a259-1358-45e8-8191-4e1f9a14ec89" containerID="1bb2dec1f59aff9832355c134a19ba762af95a3f61ff179296debc28c40ca05c" exitCode=0 Mar 18 17:58:33.127693 master-0 kubenswrapper[7553]: I0318 17:58:33.127685 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" 
event={"ID":"0100a259-1358-45e8-8191-4e1f9a14ec89","Type":"ContainerDied","Data":"1bb2dec1f59aff9832355c134a19ba762af95a3f61ff179296debc28c40ca05c"} Mar 18 17:58:33.128579 master-0 kubenswrapper[7553]: I0318 17:58:33.127748 7553 scope.go:117] "RemoveContainer" containerID="8df0fa7291cab5e340fb319c595e0406033737475f352f9d19dfc2dafb7b328f" Mar 18 17:58:33.128628 master-0 kubenswrapper[7553]: I0318 17:58:33.128563 7553 scope.go:117] "RemoveContainer" containerID="1bb2dec1f59aff9832355c134a19ba762af95a3f61ff179296debc28c40ca05c" Mar 18 17:58:34.102548 master-0 kubenswrapper[7553]: I0318 17:58:34.102478 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:34.102548 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:34.102548 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:34.102548 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:34.102972 master-0 kubenswrapper[7553]: I0318 17:58:34.102564 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:34.140083 master-0 kubenswrapper[7553]: I0318 17:58:34.140006 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" event={"ID":"0100a259-1358-45e8-8191-4e1f9a14ec89","Type":"ContainerStarted","Data":"cbc3b43b23d2fb80df9da2945869badd645ff6d509efbfa4b4ac5132b015bcef"} Mar 18 17:58:34.714594 master-0 kubenswrapper[7553]: I0318 17:58:34.714368 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 
17:58:34.714594 master-0 kubenswrapper[7553]: I0318 17:58:34.714449 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:58:35.102437 master-0 kubenswrapper[7553]: I0318 17:58:35.102387 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:35.102437 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:35.102437 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:35.102437 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:35.102971 master-0 kubenswrapper[7553]: I0318 17:58:35.102457 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:35.152809 master-0 kubenswrapper[7553]: I0318 17:58:35.152740 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-b865698dc-5zj8r_c355c750-ae2f-49fa-9a16-8fb4f688853e/service-ca-operator/2.log" Mar 18 17:58:35.153544 master-0 kubenswrapper[7553]: I0318 17:58:35.152828 7553 generic.go:334] "Generic (PLEG): container finished" podID="c355c750-ae2f-49fa-9a16-8fb4f688853e" containerID="82b3c41b778f6b2cb0358e27e4513c9d6911408756eafe9881b278fd4128f2db" exitCode=0 Mar 18 17:58:35.153544 master-0 kubenswrapper[7553]: I0318 17:58:35.152878 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" event={"ID":"c355c750-ae2f-49fa-9a16-8fb4f688853e","Type":"ContainerDied","Data":"82b3c41b778f6b2cb0358e27e4513c9d6911408756eafe9881b278fd4128f2db"} Mar 18 
17:58:35.153544 master-0 kubenswrapper[7553]: I0318 17:58:35.152927 7553 scope.go:117] "RemoveContainer" containerID="ebe23adafc49efd64f86fbe53ef0b2cf71f92ee87bec64f94def1d1fde4df324" Mar 18 17:58:35.153733 master-0 kubenswrapper[7553]: I0318 17:58:35.153620 7553 scope.go:117] "RemoveContainer" containerID="82b3c41b778f6b2cb0358e27e4513c9d6911408756eafe9881b278fd4128f2db" Mar 18 17:58:36.103314 master-0 kubenswrapper[7553]: I0318 17:58:36.103191 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:36.103314 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:36.103314 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:36.103314 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:36.104029 master-0 kubenswrapper[7553]: I0318 17:58:36.103327 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:36.167363 master-0 kubenswrapper[7553]: I0318 17:58:36.167245 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" event={"ID":"c355c750-ae2f-49fa-9a16-8fb4f688853e","Type":"ContainerStarted","Data":"48e1be582cddfcccb33c7db6ac24b7e2d75db6d2b2ac1d0dd05642d82d40a973"} Mar 18 17:58:37.234322 master-0 kubenswrapper[7553]: I0318 17:58:37.234247 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:58:37.235060 master-0 kubenswrapper[7553]: E0318 17:58:37.234464 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:58:37.235574 master-0 kubenswrapper[7553]: I0318 17:58:37.235384 7553 scope.go:117] "RemoveContainer" containerID="2b3a93a1f208538619b8c053a215774b7a5b76ad51695ab4679fe93b9c8aef84" Mar 18 17:58:37.236339 master-0 kubenswrapper[7553]: E0318 17:58:37.236236 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-vpjmp_openshift-cluster-storage-operator(7d39d93e-9be3-47e1-a44e-be2d18b55446)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" podUID="7d39d93e-9be3-47e1-a44e-be2d18b55446" Mar 18 17:58:37.237833 master-0 kubenswrapper[7553]: I0318 17:58:37.237775 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:37.237833 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:37.237833 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:37.237833 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:37.238091 master-0 kubenswrapper[7553]: I0318 17:58:37.237834 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Mar 18 17:58:37.673727 master-0 kubenswrapper[7553]: E0318 17:58:37.673640 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:58:27Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:58:27Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:58:27Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T17:58:27Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:58:37.714763 master-0 kubenswrapper[7553]: I0318 17:58:37.714709 7553 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 17:58:37.714981 master-0 kubenswrapper[7553]: I0318 17:58:37.714772 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 17:58:38.102486 master-0 kubenswrapper[7553]: I0318 17:58:38.102392 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:38.102486 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:38.102486 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:38.102486 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:38.102918 master-0 kubenswrapper[7553]: I0318 17:58:38.102487 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:39.102583 master-0 kubenswrapper[7553]: I0318 17:58:39.102490 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:39.102583 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:39.102583 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:39.102583 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:39.102583 master-0 kubenswrapper[7553]: I0318 17:58:39.102572 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Mar 18 17:58:40.102509 master-0 kubenswrapper[7553]: I0318 17:58:40.102423 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:40.102509 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:40.102509 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:40.102509 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:40.102509 master-0 kubenswrapper[7553]: I0318 17:58:40.102500 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:40.278530 master-0 kubenswrapper[7553]: I0318 17:58:40.278422 7553 generic.go:334] "Generic (PLEG): container finished" podID="253ec853-f637-4aa4-8e8e-eb655dfccccb" containerID="2db7d7384c8a35f95ff52d00ab25bbedc70ce8cf90c5c6ca6aff91f2c9272cd8" exitCode=0 Mar 18 17:58:40.278530 master-0 kubenswrapper[7553]: I0318 17:58:40.278501 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" event={"ID":"253ec853-f637-4aa4-8e8e-eb655dfccccb","Type":"ContainerDied","Data":"2db7d7384c8a35f95ff52d00ab25bbedc70ce8cf90c5c6ca6aff91f2c9272cd8"} Mar 18 17:58:40.279004 master-0 kubenswrapper[7553]: I0318 17:58:40.278589 7553 scope.go:117] "RemoveContainer" containerID="44bcebab84e3e626740692adfb152c2797db6837bc5427bf84f3ada1de226018" Mar 18 17:58:40.283728 master-0 kubenswrapper[7553]: I0318 17:58:40.280297 7553 scope.go:117] "RemoveContainer" containerID="2db7d7384c8a35f95ff52d00ab25bbedc70ce8cf90c5c6ca6aff91f2c9272cd8" Mar 18 17:58:40.283728 master-0 kubenswrapper[7553]: I0318 17:58:40.281245 7553 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-6qqz4_d26d4515-391e-41a5-8c82-1b2b8a375662/package-server-manager/1.log"
Mar 18 17:58:40.283728 master-0 kubenswrapper[7553]: I0318 17:58:40.282143 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-6qqz4_d26d4515-391e-41a5-8c82-1b2b8a375662/package-server-manager/0.log"
Mar 18 17:58:40.283728 master-0 kubenswrapper[7553]: I0318 17:58:40.282722 7553 generic.go:334] "Generic (PLEG): container finished" podID="d26d4515-391e-41a5-8c82-1b2b8a375662" containerID="2bf18e51a1823185cc3f2ac648f42885a8d2aea94913a831a7d4285f0b01a344" exitCode=1
Mar 18 17:58:40.283728 master-0 kubenswrapper[7553]: I0318 17:58:40.282768 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" event={"ID":"d26d4515-391e-41a5-8c82-1b2b8a375662","Type":"ContainerDied","Data":"2bf18e51a1823185cc3f2ac648f42885a8d2aea94913a831a7d4285f0b01a344"}
Mar 18 17:58:40.283728 master-0 kubenswrapper[7553]: I0318 17:58:40.283423 7553 scope.go:117] "RemoveContainer" containerID="2bf18e51a1823185cc3f2ac648f42885a8d2aea94913a831a7d4285f0b01a344"
Mar 18 17:58:40.315095 master-0 kubenswrapper[7553]: I0318 17:58:40.315022 7553 scope.go:117] "RemoveContainer" containerID="c08cd14fe1ce6dcf04e7916d9d5a8cb80981c4007a423a03755dfeee8e27eeb4"
Mar 18 17:58:41.076063 master-0 kubenswrapper[7553]: I0318 17:58:41.075974 7553 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw"
Mar 18 17:58:41.076063 master-0 kubenswrapper[7553]: I0318 17:58:41.076061 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw"
Mar 18 17:58:41.102628 master-0 kubenswrapper[7553]: I0318 17:58:41.102572 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:58:41.102628 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:58:41.102628 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:58:41.102628 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:58:41.103834 master-0 kubenswrapper[7553]: I0318 17:58:41.103784 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:58:41.294551 master-0 kubenswrapper[7553]: I0318 17:58:41.294487 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" event={"ID":"253ec853-f637-4aa4-8e8e-eb655dfccccb","Type":"ContainerStarted","Data":"3a64cbfee76c34b2d29f80158f22a8a5ce70c126b4ec12a27218b02eb32ec139"}
Mar 18 17:58:41.294903 master-0 kubenswrapper[7553]: I0318 17:58:41.294875 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw"
Mar 18 17:58:41.297787 master-0 kubenswrapper[7553]: I0318 17:58:41.297742 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-6qqz4_d26d4515-391e-41a5-8c82-1b2b8a375662/package-server-manager/1.log"
Mar 18 17:58:41.298692 master-0 kubenswrapper[7553]: I0318 17:58:41.298615 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" event={"ID":"d26d4515-391e-41a5-8c82-1b2b8a375662","Type":"ContainerStarted","Data":"85719150f1fb2b5eb9dba4180bd7ec20106e8c664576e8e171651e80e8baa763"}
Mar 18 17:58:41.299173 master-0 kubenswrapper[7553]: I0318 17:58:41.299128 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4"
Mar 18 17:58:42.102817 master-0 kubenswrapper[7553]: I0318 17:58:42.102710 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:58:42.102817 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:58:42.102817 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:58:42.102817 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:58:42.103982 master-0 kubenswrapper[7553]: I0318 17:58:42.102836 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:58:42.295265 master-0 kubenswrapper[7553]: I0318 17:58:42.295119 7553 patch_prober.go:28] interesting pod/route-controller-manager-57dbfd879f-44tfw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 17:58:42.295589 master-0 kubenswrapper[7553]: I0318 17:58:42.295323 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" podUID="253ec853-f637-4aa4-8e8e-eb655dfccccb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 17:58:43.103192 master-0 kubenswrapper[7553]: I0318 17:58:43.103105 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:58:43.103192 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:58:43.103192 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:58:43.103192 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:58:43.104224 master-0 kubenswrapper[7553]: I0318 17:58:43.103197 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:58:43.309175 master-0 kubenswrapper[7553]: I0318 17:58:43.308216 7553 patch_prober.go:28] interesting pod/route-controller-manager-57dbfd879f-44tfw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 17:58:43.309175 master-0 kubenswrapper[7553]: I0318 17:58:43.308369 7553 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" podUID="253ec853-f637-4aa4-8e8e-eb655dfccccb" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 17:58:43.973625 master-0 kubenswrapper[7553]: E0318 17:58:43.973517 7553 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14a0661b_7bde_4e22_a9a9_5e3fb24df77f.slice/crio-34974a400194e4abf23a570b3bcaf62e9c0cf2c55d12e3ded0eb4a493b533868.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14a0661b_7bde_4e22_a9a9_5e3fb24df77f.slice/crio-conmon-34974a400194e4abf23a570b3bcaf62e9c0cf2c55d12e3ded0eb4a493b533868.scope\": RecentStats: unable to find data in memory cache]"
Mar 18 17:58:44.103552 master-0 kubenswrapper[7553]: I0318 17:58:44.103453 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:58:44.103552 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:58:44.103552 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:58:44.103552 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:58:44.103552 master-0 kubenswrapper[7553]: I0318 17:58:44.103553 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:58:44.339239 master-0 kubenswrapper[7553]: I0318 17:58:44.339174 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-dxxbl_14a0661b-7bde-4e22-a9a9-5e3fb24df77f/network-operator/1.log"
Mar 18 17:58:44.339602 master-0 kubenswrapper[7553]: I0318 17:58:44.339338 7553 generic.go:334] "Generic (PLEG): container finished" podID="14a0661b-7bde-4e22-a9a9-5e3fb24df77f" containerID="34974a400194e4abf23a570b3bcaf62e9c0cf2c55d12e3ded0eb4a493b533868" exitCode=0
Mar 18 17:58:44.339602 master-0 kubenswrapper[7553]: I0318 17:58:44.339407 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" event={"ID":"14a0661b-7bde-4e22-a9a9-5e3fb24df77f","Type":"ContainerDied","Data":"34974a400194e4abf23a570b3bcaf62e9c0cf2c55d12e3ded0eb4a493b533868"}
Mar 18 17:58:44.339602 master-0 kubenswrapper[7553]: I0318 17:58:44.339487 7553 scope.go:117] "RemoveContainer" containerID="a6ebfcc622558a7e545ac685d6d46ff4d61a7219bfcb2c7a5f468d332911df22"
Mar 18 17:58:44.340673 master-0 kubenswrapper[7553]: I0318 17:58:44.340601 7553 scope.go:117] "RemoveContainer" containerID="34974a400194e4abf23a570b3bcaf62e9c0cf2c55d12e3ded0eb4a493b533868"
Mar 18 17:58:45.000384 master-0 kubenswrapper[7553]: E0318 17:58:45.000122 7553 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 17:58:45.102644 master-0 kubenswrapper[7553]: I0318 17:58:45.102554 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:58:45.102644 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:58:45.102644 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:58:45.102644 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:58:45.103183 master-0 kubenswrapper[7553]: I0318 17:58:45.102665 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:58:45.353454 master-0 kubenswrapper[7553]: I0318 17:58:45.353333 7553 generic.go:334] "Generic (PLEG): container finished" podID="f7ff61c7-32d1-4407-a792-8e22bb4d50f9" containerID="26d9bad45253e9ed004980ee45ac455d4c739974d250f32d4e33bfde8ed6ef29" exitCode=0
Mar 18 17:58:45.354538 master-0 kubenswrapper[7553]: I0318 17:58:45.353463 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" event={"ID":"f7ff61c7-32d1-4407-a792-8e22bb4d50f9","Type":"ContainerDied","Data":"26d9bad45253e9ed004980ee45ac455d4c739974d250f32d4e33bfde8ed6ef29"}
Mar 18 17:58:45.354538 master-0 kubenswrapper[7553]: I0318 17:58:45.353588 7553 scope.go:117] "RemoveContainer" containerID="24610a985db5ce85023cf9747ca14df30c98ba89aeb22c58ca49f5ef21707a5f"
Mar 18 17:58:45.354538 master-0 kubenswrapper[7553]: I0318 17:58:45.354423 7553 scope.go:117] "RemoveContainer" containerID="26d9bad45253e9ed004980ee45ac455d4c739974d250f32d4e33bfde8ed6ef29"
Mar 18 17:58:45.358971 master-0 kubenswrapper[7553]: I0318 17:58:45.358906 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" event={"ID":"14a0661b-7bde-4e22-a9a9-5e3fb24df77f","Type":"ContainerStarted","Data":"5179ca64eabfdf5ea99dcb47c967cf2f64d2038e73a893b0a18747a437949e68"}
Mar 18 17:58:46.103003 master-0 kubenswrapper[7553]: I0318 17:58:46.102923 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:58:46.103003 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:58:46.103003 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:58:46.103003 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:58:46.103556 master-0 kubenswrapper[7553]: I0318 17:58:46.103008 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:58:46.372952 master-0 kubenswrapper[7553]: I0318 17:58:46.372794 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" event={"ID":"f7ff61c7-32d1-4407-a792-8e22bb4d50f9","Type":"ContainerStarted","Data":"3317d222c2a4d4790fb3ac50f8e2aea8d3b6f07f3ed6494f4bfe411e22ca91cb"}
Mar 18 17:58:47.102924 master-0 kubenswrapper[7553]: I0318 17:58:47.102821 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:58:47.102924 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:58:47.102924 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:58:47.102924 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:58:47.103244 master-0 kubenswrapper[7553]: I0318 17:58:47.102941 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:58:47.384192 master-0 kubenswrapper[7553]: I0318 17:58:47.383985 7553 generic.go:334] "Generic (PLEG): container finished" podID="89e6c3d6-7bd5-4df6-90db-3a349f644afb" containerID="c82dc79407cc2ebdd830e24e81c06ba7f22e81e0353adc5d05a21365ba7f195f" exitCode=0
Mar 18 17:58:47.384192 master-0 kubenswrapper[7553]: I0318 17:58:47.384124 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" event={"ID":"89e6c3d6-7bd5-4df6-90db-3a349f644afb","Type":"ContainerDied","Data":"c82dc79407cc2ebdd830e24e81c06ba7f22e81e0353adc5d05a21365ba7f195f"}
Mar 18 17:58:47.385424 master-0 kubenswrapper[7553]: I0318 17:58:47.385245 7553 scope.go:117] "RemoveContainer" containerID="c82dc79407cc2ebdd830e24e81c06ba7f22e81e0353adc5d05a21365ba7f195f"
Mar 18 17:58:47.390221 master-0 kubenswrapper[7553]: I0318 17:58:47.389375 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-p72m2_26575d68-0488-4dfa-a5d0-5016e481dba6/kube-apiserver-operator/2.log"
Mar 18 17:58:47.390221 master-0 kubenswrapper[7553]: I0318 17:58:47.389464 7553 generic.go:334] "Generic (PLEG): container finished" podID="26575d68-0488-4dfa-a5d0-5016e481dba6" containerID="2206a7113dacde21996d9057f09cbc9465ab1858bcc433f5c546151c4ea00afa" exitCode=0
Mar 18 17:58:47.390221 master-0 kubenswrapper[7553]: I0318 17:58:47.389547 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" event={"ID":"26575d68-0488-4dfa-a5d0-5016e481dba6","Type":"ContainerDied","Data":"2206a7113dacde21996d9057f09cbc9465ab1858bcc433f5c546151c4ea00afa"}
Mar 18 17:58:47.390221 master-0 kubenswrapper[7553]: I0318 17:58:47.389721 7553 scope.go:117] "RemoveContainer" containerID="6469fb4bb68705329572e917ffd53c8d7d98a360f3801392b01dd10ca152c1c0"
Mar 18 17:58:47.390718 master-0 kubenswrapper[7553]: I0318 17:58:47.390411 7553 scope.go:117] "RemoveContainer" containerID="2206a7113dacde21996d9057f09cbc9465ab1858bcc433f5c546151c4ea00afa"
Mar 18 17:58:47.402852 master-0 kubenswrapper[7553]: I0318 17:58:47.402754 7553 generic.go:334] "Generic (PLEG): container finished" podID="c3267271-e0c5-45d6-980c-d78e4f9eef35" containerID="4af4292c294ed18f4d7a20d7c6af6118981afc3f4dccaa087fc72c0bbc4f6572" exitCode=0
Mar 18 17:58:47.403083 master-0 kubenswrapper[7553]: I0318 17:58:47.402902 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" event={"ID":"c3267271-e0c5-45d6-980c-d78e4f9eef35","Type":"ContainerDied","Data":"4af4292c294ed18f4d7a20d7c6af6118981afc3f4dccaa087fc72c0bbc4f6572"}
Mar 18 17:58:47.403955 master-0 kubenswrapper[7553]: I0318 17:58:47.403868 7553 scope.go:117] "RemoveContainer" containerID="4af4292c294ed18f4d7a20d7c6af6118981afc3f4dccaa087fc72c0bbc4f6572"
Mar 18 17:58:47.408861 master-0 kubenswrapper[7553]: I0318 17:58:47.408512 7553 generic.go:334] "Generic (PLEG): container finished" podID="6f26e239-2988-4faa-bc1d-24b15b95b7f1" containerID="e31032eb3407bce853d0be38a115c77d3679d1c63fdc6c68fe19ac271b5e7c71" exitCode=0
Mar 18 17:58:47.408861 master-0 kubenswrapper[7553]: I0318 17:58:47.408595 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" event={"ID":"6f26e239-2988-4faa-bc1d-24b15b95b7f1","Type":"ContainerDied","Data":"e31032eb3407bce853d0be38a115c77d3679d1c63fdc6c68fe19ac271b5e7c71"}
Mar 18 17:58:47.409176 master-0 kubenswrapper[7553]: I0318 17:58:47.409134 7553 scope.go:117] "RemoveContainer" containerID="e31032eb3407bce853d0be38a115c77d3679d1c63fdc6c68fe19ac271b5e7c71"
Mar 18 17:58:47.423775 master-0 kubenswrapper[7553]: I0318 17:58:47.414678 7553 generic.go:334] "Generic (PLEG): container finished" podID="c38c5f03-a753-49f4-ab06-33e75a03bd45" containerID="a3a77ef6f8f671fb5f80e7a57420cd1c8a6c6e49b81d12a2df38ba7e576274fc" exitCode=0
Mar 18 17:58:47.423775 master-0 kubenswrapper[7553]: I0318 17:58:47.414756 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc" event={"ID":"c38c5f03-a753-49f4-ab06-33e75a03bd45","Type":"ContainerDied","Data":"a3a77ef6f8f671fb5f80e7a57420cd1c8a6c6e49b81d12a2df38ba7e576274fc"}
Mar 18 17:58:47.423775 master-0 kubenswrapper[7553]: I0318 17:58:47.415218 7553 scope.go:117] "RemoveContainer" containerID="a3a77ef6f8f671fb5f80e7a57420cd1c8a6c6e49b81d12a2df38ba7e576274fc"
Mar 18 17:58:47.423775 master-0 kubenswrapper[7553]: I0318 17:58:47.418538 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-t266j_0b9ff55a-73fb-473f-b406-1f8b6cffdb89/openshift-apiserver-operator/1.log"
Mar 18 17:58:47.423775 master-0 kubenswrapper[7553]: I0318 17:58:47.418610 7553 generic.go:334] "Generic (PLEG): container finished" podID="0b9ff55a-73fb-473f-b406-1f8b6cffdb89" containerID="208f151f73d2054e8fc1e7bad5a7840184b6f1a99cd1c642769a09479cee5ec9" exitCode=0
Mar 18 17:58:47.423775 master-0 kubenswrapper[7553]: I0318 17:58:47.418656 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" event={"ID":"0b9ff55a-73fb-473f-b406-1f8b6cffdb89","Type":"ContainerDied","Data":"208f151f73d2054e8fc1e7bad5a7840184b6f1a99cd1c642769a09479cee5ec9"}
Mar 18 17:58:47.423775 master-0 kubenswrapper[7553]: I0318 17:58:47.420978 7553 scope.go:117] "RemoveContainer" containerID="208f151f73d2054e8fc1e7bad5a7840184b6f1a99cd1c642769a09479cee5ec9"
Mar 18 17:58:47.443759 master-0 kubenswrapper[7553]: I0318 17:58:47.442878 7553 scope.go:117] "RemoveContainer" containerID="d4e55edde3b012389f45dd8d1909f3ff7e569bfb5c590f0e8e7e8c080c91f4b0"
Mar 18 17:58:47.536798 master-0 kubenswrapper[7553]: I0318 17:58:47.536657 7553 scope.go:117] "RemoveContainer" containerID="36a5d9d231da98f0f9e0dae16fa8c5d4e171fd401ed1a351ab236e19bff04107"
Mar 18 17:58:47.674647 master-0 kubenswrapper[7553]: E0318 17:58:47.674551 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 17:58:47.715756 master-0 kubenswrapper[7553]: I0318 17:58:47.715688 7553 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 18 17:58:47.715851 master-0 kubenswrapper[7553]: I0318 17:58:47.715781 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 18 17:58:48.053654 master-0 kubenswrapper[7553]: I0318 17:58:48.053608 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61"
Mar 18 17:58:48.053909 master-0 kubenswrapper[7553]: E0318 17:58:48.053811 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3"
Mar 18 17:58:48.103714 master-0 kubenswrapper[7553]: I0318 17:58:48.103628 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:58:48.103714 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:58:48.103714 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:58:48.103714 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:58:48.104056 master-0 kubenswrapper[7553]: I0318 17:58:48.103732 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:58:48.118788 master-0 kubenswrapper[7553]: E0318 17:58:48.118712 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[control-plane-machine-set-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" podUID="de189d27-4c60-49f1-9119-d1fde5c37b1e"
Mar 18 17:58:48.118898 master-0 kubenswrapper[7553]: E0318 17:58:48.118807 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" podUID="a94f7bff-ad61-4c53-a8eb-000a13f26971"
Mar 18 17:58:48.118898 master-0 kubenswrapper[7553]: E0318 17:58:48.118863 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cloud-credential-operator-serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" podUID="04cef0bd-f365-4bf6-864a-1895995015d6"
Mar 18 17:58:48.119134 master-0 kubenswrapper[7553]: E0318 17:58:48.119097 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[samples-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" podUID="e0e04440-c08b-452d-9be6-9f70a4027c92"
Mar 18 17:58:48.427502 master-0 kubenswrapper[7553]: I0318 17:58:48.427341 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" event={"ID":"0b9ff55a-73fb-473f-b406-1f8b6cffdb89","Type":"ContainerStarted","Data":"9ffd9d7d453821aece68d8713c679371af06848e999126737b0425384451d89c"}
Mar 18 17:58:48.430053 master-0 kubenswrapper[7553]: I0318 17:58:48.429972 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" event={"ID":"89e6c3d6-7bd5-4df6-90db-3a349f644afb","Type":"ContainerStarted","Data":"cf552189761711b43b701012467d471aad8152dee3739b0c376e2816b6b64b91"}
Mar 18 17:58:48.432203 master-0 kubenswrapper[7553]: I0318 17:58:48.432142 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" event={"ID":"26575d68-0488-4dfa-a5d0-5016e481dba6","Type":"ContainerStarted","Data":"671367c5073ad6f744a54d607f1da2cb0ed076b02fcae42f0c7e0ad31e24b8f2"}
Mar 18 17:58:48.434682 master-0 kubenswrapper[7553]: I0318 17:58:48.434639 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" event={"ID":"c3267271-e0c5-45d6-980c-d78e4f9eef35","Type":"ContainerStarted","Data":"a0cb800a189ca59f912d76febb998e601a8f14c09524633ec99a83c4ce0f00ab"}
Mar 18 17:58:48.436855 master-0 kubenswrapper[7553]: I0318 17:58:48.436818 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" event={"ID":"6f26e239-2988-4faa-bc1d-24b15b95b7f1","Type":"ContainerStarted","Data":"ebbe1390fc92bac0c8c1e692fd10b7d4bda2116d50c15067dbc54677a553313e"}
Mar 18 17:58:48.439049 master-0 kubenswrapper[7553]: I0318 17:58:48.439012 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"
Mar 18 17:58:48.439136 master-0 kubenswrapper[7553]: I0318 17:58:48.439067 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc" event={"ID":"c38c5f03-a753-49f4-ab06-33e75a03bd45","Type":"ContainerStarted","Data":"bebc620615d8515b61198d36eaf34f71ebabfb72d8742ae380f777df67d1970c"}
Mar 18 17:58:48.439378 master-0 kubenswrapper[7553]: I0318 17:58:48.439140 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"
Mar 18 17:58:48.439495 master-0 kubenswrapper[7553]: I0318 17:58:48.439462 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"
Mar 18 17:58:48.439731 master-0 kubenswrapper[7553]: I0318 17:58:48.439702 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"
Mar 18 17:58:48.737611 master-0 kubenswrapper[7553]: I0318 17:58:48.737395 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"
Mar 18 17:58:48.737611 master-0 kubenswrapper[7553]: I0318 17:58:48.737502 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 17:58:48.737611 master-0 kubenswrapper[7553]: I0318 17:58:48.737530 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"
Mar 18 17:58:48.738002 master-0 kubenswrapper[7553]: E0318 17:58:48.737623 7553 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found
Mar 18 17:58:48.738002 master-0 kubenswrapper[7553]: E0318 17:58:48.737752 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls podName:de189d27-4c60-49f1-9119-d1fde5c37b1e nodeName:}" failed. No retries permitted until 2026-03-18 18:00:50.73771549 +0000 UTC m=+1140.883550203 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-zdqtc" (UID: "de189d27-4c60-49f1-9119-d1fde5c37b1e") : secret "control-plane-machine-set-operator-tls" not found
Mar 18 17:58:48.738002 master-0 kubenswrapper[7553]: E0318 17:58:48.737789 7553 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found
Mar 18 17:58:48.738002 master-0 kubenswrapper[7553]: E0318 17:58:48.737836 7553 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found
Mar 18 17:58:48.738002 master-0 kubenswrapper[7553]: E0318 17:58:48.737897 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert podName:a94f7bff-ad61-4c53-a8eb-000a13f26971 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:50.737878495 +0000 UTC m=+1140.883713168 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert") pod "cluster-autoscaler-operator-866dc4744-l6hpt" (UID: "a94f7bff-ad61-4c53-a8eb-000a13f26971") : secret "cluster-autoscaler-operator-cert" not found
Mar 18 17:58:48.738002 master-0 kubenswrapper[7553]: I0318 17:58:48.737921 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"
Mar 18 17:58:48.738002 master-0 kubenswrapper[7553]: I0318 17:58:48.737956 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"
Mar 18 17:58:48.738251 master-0 kubenswrapper[7553]: E0318 17:58:48.738028 7553 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found
Mar 18 17:58:48.738251 master-0 kubenswrapper[7553]: E0318 17:58:48.738029 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:50.737980577 +0000 UTC m=+1140.883815270 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : secret "machine-approver-tls" not found
Mar 18 17:58:48.738251 master-0 kubenswrapper[7553]: E0318 17:58:48.738070 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert podName:04cef0bd-f365-4bf6-864a-1895995015d6 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:50.738057689 +0000 UTC m=+1140.883892372 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-djgn7" (UID: "04cef0bd-f365-4bf6-864a-1895995015d6") : secret "cloud-credential-operator-serving-cert" not found
Mar 18 17:58:48.738251 master-0 kubenswrapper[7553]: E0318 17:58:48.738078 7553 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found
Mar 18 17:58:48.738251 master-0 kubenswrapper[7553]: E0318 17:58:48.738153 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls podName:e0e04440-c08b-452d-9be6-9f70a4027c92 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:50.738132191 +0000 UTC m=+1140.883966894 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-xnx8x" (UID: "e0e04440-c08b-452d-9be6-9f70a4027c92") : secret "samples-operator-tls" not found
Mar 18 17:58:48.840209 master-0 kubenswrapper[7553]: I0318 17:58:48.840117 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p"
Mar 18 17:58:48.840641 master-0 kubenswrapper[7553]: E0318 17:58:48.840387 7553 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found
Mar 18 17:58:48.840641 master-0 kubenswrapper[7553]: E0318 17:58:48.840495 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. No retries permitted until 2026-03-18 18:00:50.840466092 +0000 UTC m=+1140.986300805 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : secret "machine-api-operator-tls" not found
Mar 18 17:58:49.101400 master-0 kubenswrapper[7553]: I0318 17:58:49.101328 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 17:58:49.101400 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 17:58:49.101400 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 17:58:49.101400 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 17:58:49.101842 master-0 kubenswrapper[7553]: I0318 17:58:49.101437 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 17:58:49.125993 master-0 kubenswrapper[7553]: E0318 17:58:49.125914 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-api-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" podUID="2d21e77e-8b61-4f03-8f17-941b7a1d8b1d"
Mar 18 17:58:49.446873 master-0 kubenswrapper[7553]: I0318 17:58:49.446727 7553 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 17:58:50.059707 master-0 kubenswrapper[7553]: I0318 17:58:50.059260 7553 scope.go:117] "RemoveContainer" containerID="2b3a93a1f208538619b8c053a215774b7a5b76ad51695ab4679fe93b9c8aef84" Mar 18 17:58:50.060225 master-0 kubenswrapper[7553]: E0318 17:58:50.060188 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-vpjmp_openshift-cluster-storage-operator(7d39d93e-9be3-47e1-a44e-be2d18b55446)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" podUID="7d39d93e-9be3-47e1-a44e-be2d18b55446" Mar 18 17:58:50.102036 master-0 kubenswrapper[7553]: I0318 17:58:50.101991 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:50.102036 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:50.102036 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:50.102036 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:50.102433 master-0 kubenswrapper[7553]: I0318 17:58:50.102402 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:50.543365 master-0 kubenswrapper[7553]: I0318 17:58:50.543309 7553 scope.go:117] "RemoveContainer" containerID="5eda9ef28d74f5cd7a10971a5854c8a51a0c32becadb69afd3686ca34d1563e1" Mar 18 17:58:51.085196 master-0 kubenswrapper[7553]: I0318 17:58:51.085112 7553 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 17:58:51.102758 master-0 kubenswrapper[7553]: I0318 17:58:51.102665 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:51.102758 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:51.102758 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:51.102758 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:51.102758 master-0 kubenswrapper[7553]: I0318 17:58:51.102744 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:51.463113 master-0 kubenswrapper[7553]: I0318 17:58:51.462954 7553 generic.go:334] "Generic (PLEG): container finished" podID="fdab27a1-1d7a-4dc5-b828-eba3f57592dd" containerID="b533f593b28cafb60fbcf6432d0aa3477e72d3d1f721e9b883b828b9059da814" exitCode=0 Mar 18 17:58:51.463113 master-0 kubenswrapper[7553]: I0318 17:58:51.463034 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm" event={"ID":"fdab27a1-1d7a-4dc5-b828-eba3f57592dd","Type":"ContainerDied","Data":"b533f593b28cafb60fbcf6432d0aa3477e72d3d1f721e9b883b828b9059da814"} Mar 18 17:58:51.465214 master-0 kubenswrapper[7553]: I0318 17:58:51.464030 7553 scope.go:117] "RemoveContainer" containerID="b533f593b28cafb60fbcf6432d0aa3477e72d3d1f721e9b883b828b9059da814" Mar 18 17:58:52.102847 master-0 kubenswrapper[7553]: I0318 17:58:52.102754 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:52.102847 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:52.102847 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:52.102847 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:52.102847 master-0 kubenswrapper[7553]: I0318 17:58:52.102846 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:52.474444 master-0 kubenswrapper[7553]: I0318 17:58:52.474241 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm" event={"ID":"fdab27a1-1d7a-4dc5-b828-eba3f57592dd","Type":"ContainerStarted","Data":"f4e011e80bd67daeb6ca72a2398ab752dde89a84fb6d0d9223ad7799d83a44fd"} Mar 18 17:58:53.103090 master-0 kubenswrapper[7553]: I0318 17:58:53.102991 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:53.103090 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:53.103090 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:53.103090 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:53.103844 master-0 kubenswrapper[7553]: I0318 17:58:53.103099 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:54.102171 
master-0 kubenswrapper[7553]: I0318 17:58:54.102059 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:54.102171 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:54.102171 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:54.102171 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:54.102171 master-0 kubenswrapper[7553]: I0318 17:58:54.102137 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:54.720263 master-0 kubenswrapper[7553]: I0318 17:58:54.719916 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:58:54.725078 master-0 kubenswrapper[7553]: I0318 17:58:54.725006 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 17:58:55.102242 master-0 kubenswrapper[7553]: I0318 17:58:55.102165 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:55.102242 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:55.102242 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:55.102242 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:55.102732 master-0 kubenswrapper[7553]: I0318 17:58:55.102262 7553 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:56.103011 master-0 kubenswrapper[7553]: I0318 17:58:56.102911 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:56.103011 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:56.103011 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:56.103011 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:56.103974 master-0 kubenswrapper[7553]: I0318 17:58:56.103039 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:57.102928 master-0 kubenswrapper[7553]: I0318 17:58:57.102839 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:57.102928 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:57.102928 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:57.102928 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:57.103721 master-0 kubenswrapper[7553]: I0318 17:58:57.102940 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
18 17:58:57.675370 master-0 kubenswrapper[7553]: E0318 17:58:57.675243 7553 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 17:58:58.103253 master-0 kubenswrapper[7553]: I0318 17:58:58.103160 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:58.103253 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:58.103253 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:58.103253 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:58.105012 master-0 kubenswrapper[7553]: I0318 17:58:58.104961 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:58:59.103740 master-0 kubenswrapper[7553]: I0318 17:58:59.103647 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:58:59.103740 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:58:59.103740 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:58:59.103740 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:58:59.104710 master-0 kubenswrapper[7553]: I0318 17:58:59.104398 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" 
podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:59:00.104427 master-0 kubenswrapper[7553]: I0318 17:59:00.104340 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:59:00.104427 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:59:00.104427 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:59:00.104427 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:59:00.105189 master-0 kubenswrapper[7553]: I0318 17:59:00.104449 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:59:01.053108 master-0 kubenswrapper[7553]: I0318 17:59:01.053056 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:59:01.053377 master-0 kubenswrapper[7553]: E0318 17:59:01.053290 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:59:01.101689 master-0 kubenswrapper[7553]: I0318 17:59:01.101620 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:59:01.101689 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:59:01.101689 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:59:01.101689 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:59:01.102068 master-0 kubenswrapper[7553]: I0318 17:59:01.101729 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:59:02.054121 master-0 kubenswrapper[7553]: I0318 17:59:02.054052 7553 scope.go:117] "RemoveContainer" containerID="2b3a93a1f208538619b8c053a215774b7a5b76ad51695ab4679fe93b9c8aef84" Mar 18 17:59:02.054865 master-0 kubenswrapper[7553]: E0318 17:59:02.054329 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-vpjmp_openshift-cluster-storage-operator(7d39d93e-9be3-47e1-a44e-be2d18b55446)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" podUID="7d39d93e-9be3-47e1-a44e-be2d18b55446" Mar 18 17:59:02.108085 master-0 kubenswrapper[7553]: I0318 17:59:02.107970 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:59:02.108085 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:59:02.108085 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:59:02.108085 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:59:02.108488 
master-0 kubenswrapper[7553]: I0318 17:59:02.108113 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:59:03.101801 master-0 kubenswrapper[7553]: I0318 17:59:03.101719 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:59:03.101801 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:59:03.101801 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:59:03.101801 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:59:03.101801 master-0 kubenswrapper[7553]: I0318 17:59:03.101800 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:59:04.103152 master-0 kubenswrapper[7553]: I0318 17:59:04.103037 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:59:04.103152 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:59:04.103152 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:59:04.103152 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:59:04.104535 master-0 kubenswrapper[7553]: I0318 17:59:04.103185 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:59:05.103481 master-0 kubenswrapper[7553]: I0318 17:59:05.103383 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:59:05.103481 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:59:05.103481 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:59:05.103481 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:59:05.104716 master-0 kubenswrapper[7553]: I0318 17:59:05.103495 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:59:05.277078 master-0 kubenswrapper[7553]: E0318 17:59:05.276971 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" podUID="9c0dbd44-7669-41d6-bf1b-d8c1343c9d98" Mar 18 17:59:05.369400 master-0 kubenswrapper[7553]: I0318 17:59:05.369208 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 17:59:05.369837 master-0 kubenswrapper[7553]: E0318 17:59:05.369645 7553 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found 
Mar 18 17:59:05.370011 master-0 kubenswrapper[7553]: E0318 17:59:05.369958 7553 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 18:01:07.369916617 +0000 UTC m=+1157.515751330 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : secret "prometheus-operator-tls" not found Mar 18 17:59:05.572404 master-0 kubenswrapper[7553]: I0318 17:59:05.572324 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 17:59:06.103931 master-0 kubenswrapper[7553]: I0318 17:59:06.103836 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:59:06.103931 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:59:06.103931 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:59:06.103931 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:59:06.104995 master-0 kubenswrapper[7553]: I0318 17:59:06.103968 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:59:07.102680 master-0 kubenswrapper[7553]: I0318 17:59:07.102573 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:59:07.102680 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:59:07.102680 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:59:07.102680 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:59:07.102680 master-0 kubenswrapper[7553]: I0318 17:59:07.102673 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:59:07.103264 master-0 kubenswrapper[7553]: I0318 17:59:07.102751 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:59:07.103672 master-0 kubenswrapper[7553]: I0318 17:59:07.103548 7553 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"40665a65803f46b85c5841b161668f9dc53195967c924003dedfb177dd66895a"} pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" containerMessage="Container router failed startup probe, will be restarted" Mar 18 17:59:07.103672 master-0 kubenswrapper[7553]: I0318 17:59:07.103603 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" containerID="cri-o://40665a65803f46b85c5841b161668f9dc53195967c924003dedfb177dd66895a" gracePeriod=3600 Mar 18 17:59:07.426494 master-0 kubenswrapper[7553]: I0318 17:59:07.426366 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"] Mar 18 17:59:07.426975 master-0 kubenswrapper[7553]: E0318 17:59:07.426772 7553 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="cd9d8bd7-68a0-458f-9d25-f600932e303c" containerName="installer" Mar 18 17:59:07.426975 master-0 kubenswrapper[7553]: I0318 17:59:07.426793 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9d8bd7-68a0-458f-9d25-f600932e303c" containerName="installer" Mar 18 17:59:07.426975 master-0 kubenswrapper[7553]: E0318 17:59:07.426813 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e" containerName="multus-admission-controller" Mar 18 17:59:07.426975 master-0 kubenswrapper[7553]: I0318 17:59:07.426826 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e" containerName="multus-admission-controller" Mar 18 17:59:07.426975 master-0 kubenswrapper[7553]: E0318 17:59:07.426853 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9655d59-a594-499f-b474-dfc870239174" containerName="installer" Mar 18 17:59:07.426975 master-0 kubenswrapper[7553]: I0318 17:59:07.426867 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9655d59-a594-499f-b474-dfc870239174" containerName="installer" Mar 18 17:59:07.426975 master-0 kubenswrapper[7553]: E0318 17:59:07.426898 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da246674-9ad1-4732-9a9e-d86d18fb0c55" containerName="installer" Mar 18 17:59:07.426975 master-0 kubenswrapper[7553]: I0318 17:59:07.426911 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="da246674-9ad1-4732-9a9e-d86d18fb0c55" containerName="installer" Mar 18 17:59:07.426975 master-0 kubenswrapper[7553]: E0318 17:59:07.426944 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e" containerName="kube-rbac-proxy" Mar 18 17:59:07.426975 master-0 kubenswrapper[7553]: I0318 17:59:07.426956 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e" containerName="kube-rbac-proxy" Mar 18 17:59:07.427300 master-0 
kubenswrapper[7553]: E0318 17:59:07.426981 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c88ce7-94dd-434c-99fc-96d900d544e6" containerName="installer" Mar 18 17:59:07.427300 master-0 kubenswrapper[7553]: I0318 17:59:07.426994 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c88ce7-94dd-434c-99fc-96d900d544e6" containerName="installer" Mar 18 17:59:07.427300 master-0 kubenswrapper[7553]: I0318 17:59:07.427211 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="98c88ce7-94dd-434c-99fc-96d900d544e6" containerName="installer" Mar 18 17:59:07.427300 master-0 kubenswrapper[7553]: I0318 17:59:07.427261 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9655d59-a594-499f-b474-dfc870239174" containerName="installer" Mar 18 17:59:07.427454 master-0 kubenswrapper[7553]: I0318 17:59:07.427331 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e" containerName="kube-rbac-proxy" Mar 18 17:59:07.427454 master-0 kubenswrapper[7553]: I0318 17:59:07.427385 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="da246674-9ad1-4732-9a9e-d86d18fb0c55" containerName="installer" Mar 18 17:59:07.427454 master-0 kubenswrapper[7553]: I0318 17:59:07.427406 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c34df1-ea0d-4dfa-bf4d-5b58dc5bee8e" containerName="multus-admission-controller" Mar 18 17:59:07.427454 master-0 kubenswrapper[7553]: I0318 17:59:07.427445 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9d8bd7-68a0-458f-9d25-f600932e303c" containerName="installer" Mar 18 17:59:07.428373 master-0 kubenswrapper[7553]: I0318 17:59:07.428342 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Mar 18 17:59:07.431924 master-0 kubenswrapper[7553]: I0318 17:59:07.431875 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 17:59:07.432170 master-0 kubenswrapper[7553]: I0318 17:59:07.432089 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-kzvvj" Mar 18 17:59:07.448252 master-0 kubenswrapper[7553]: I0318 17:59:07.448165 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"] Mar 18 17:59:07.607030 master-0 kubenswrapper[7553]: I0318 17:59:07.606960 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff10351e-9378-4e25-87df-90ace60e5d16-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"ff10351e-9378-4e25-87df-90ace60e5d16\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Mar 18 17:59:07.607030 master-0 kubenswrapper[7553]: I0318 17:59:07.607040 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff10351e-9378-4e25-87df-90ace60e5d16-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"ff10351e-9378-4e25-87df-90ace60e5d16\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Mar 18 17:59:07.607591 master-0 kubenswrapper[7553]: I0318 17:59:07.607203 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff10351e-9378-4e25-87df-90ace60e5d16-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"ff10351e-9378-4e25-87df-90ace60e5d16\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Mar 18 17:59:07.709458 master-0 
kubenswrapper[7553]: I0318 17:59:07.709198 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff10351e-9378-4e25-87df-90ace60e5d16-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"ff10351e-9378-4e25-87df-90ace60e5d16\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Mar 18 17:59:07.709805 master-0 kubenswrapper[7553]: I0318 17:59:07.709445 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff10351e-9378-4e25-87df-90ace60e5d16-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"ff10351e-9378-4e25-87df-90ace60e5d16\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Mar 18 17:59:07.709878 master-0 kubenswrapper[7553]: I0318 17:59:07.709760 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff10351e-9378-4e25-87df-90ace60e5d16-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"ff10351e-9378-4e25-87df-90ace60e5d16\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Mar 18 17:59:07.710247 master-0 kubenswrapper[7553]: I0318 17:59:07.710190 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff10351e-9378-4e25-87df-90ace60e5d16-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"ff10351e-9378-4e25-87df-90ace60e5d16\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Mar 18 17:59:07.710499 master-0 kubenswrapper[7553]: I0318 17:59:07.710430 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff10351e-9378-4e25-87df-90ace60e5d16-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"ff10351e-9378-4e25-87df-90ace60e5d16\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" 
Mar 18 17:59:07.731169 master-0 kubenswrapper[7553]: I0318 17:59:07.731096 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff10351e-9378-4e25-87df-90ace60e5d16-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"ff10351e-9378-4e25-87df-90ace60e5d16\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Mar 18 17:59:07.764855 master-0 kubenswrapper[7553]: I0318 17:59:07.764768 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Mar 18 17:59:08.248204 master-0 kubenswrapper[7553]: I0318 17:59:08.246413 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"] Mar 18 17:59:08.248204 master-0 kubenswrapper[7553]: W0318 17:59:08.247686 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podff10351e_9378_4e25_87df_90ace60e5d16.slice/crio-5b5768e7c3863053d5f0700738ab22492634ebadad91f6b6c8f2086da29b8dd5 WatchSource:0}: Error finding container 5b5768e7c3863053d5f0700738ab22492634ebadad91f6b6c8f2086da29b8dd5: Status 404 returned error can't find the container with id 5b5768e7c3863053d5f0700738ab22492634ebadad91f6b6c8f2086da29b8dd5 Mar 18 17:59:08.439607 master-0 kubenswrapper[7553]: I0318 17:59:08.439515 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 18 17:59:08.440714 master-0 kubenswrapper[7553]: I0318 17:59:08.440662 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 17:59:08.448723 master-0 kubenswrapper[7553]: I0318 17:59:08.448613 7553 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 17:59:08.448723 master-0 kubenswrapper[7553]: I0318 17:59:08.448722 7553 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-7rpkg" Mar 18 17:59:08.452686 master-0 kubenswrapper[7553]: I0318 17:59:08.452626 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 18 17:59:08.593949 master-0 kubenswrapper[7553]: I0318 17:59:08.593847 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" event={"ID":"ff10351e-9378-4e25-87df-90ace60e5d16","Type":"ContainerStarted","Data":"21e0077e6fdd9559bfd3ede636836e5d1c3c79c17e113399eead6f42be9f3307"} Mar 18 17:59:08.593949 master-0 kubenswrapper[7553]: I0318 17:59:08.593921 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" event={"ID":"ff10351e-9378-4e25-87df-90ace60e5d16","Type":"ContainerStarted","Data":"5b5768e7c3863053d5f0700738ab22492634ebadad91f6b6c8f2086da29b8dd5"} Mar 18 17:59:08.621473 master-0 kubenswrapper[7553]: I0318 17:59:08.621359 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" podStartSLOduration=1.621337575 podStartE2EDuration="1.621337575s" podCreationTimestamp="2026-03-18 17:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:59:08.612677128 +0000 UTC m=+1038.758511911" watchObservedRunningTime="2026-03-18 17:59:08.621337575 +0000 UTC m=+1038.767172258" Mar 18 17:59:08.636763 master-0 kubenswrapper[7553]: I0318 
17:59:08.636659 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5e216493-e343-4c59-a3c1-5aad5edd67e2-var-lock\") pod \"installer-4-master-0\" (UID: \"5e216493-e343-4c59-a3c1-5aad5edd67e2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 17:59:08.636916 master-0 kubenswrapper[7553]: I0318 17:59:08.636828 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e216493-e343-4c59-a3c1-5aad5edd67e2-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5e216493-e343-4c59-a3c1-5aad5edd67e2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 17:59:08.637339 master-0 kubenswrapper[7553]: I0318 17:59:08.637216 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e216493-e343-4c59-a3c1-5aad5edd67e2-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5e216493-e343-4c59-a3c1-5aad5edd67e2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 17:59:08.738415 master-0 kubenswrapper[7553]: I0318 17:59:08.738316 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e216493-e343-4c59-a3c1-5aad5edd67e2-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5e216493-e343-4c59-a3c1-5aad5edd67e2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 17:59:08.738726 master-0 kubenswrapper[7553]: I0318 17:59:08.738657 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e216493-e343-4c59-a3c1-5aad5edd67e2-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5e216493-e343-4c59-a3c1-5aad5edd67e2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 
17:59:08.738816 master-0 kubenswrapper[7553]: I0318 17:59:08.738750 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5e216493-e343-4c59-a3c1-5aad5edd67e2-var-lock\") pod \"installer-4-master-0\" (UID: \"5e216493-e343-4c59-a3c1-5aad5edd67e2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 17:59:08.738915 master-0 kubenswrapper[7553]: I0318 17:59:08.738799 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e216493-e343-4c59-a3c1-5aad5edd67e2-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5e216493-e343-4c59-a3c1-5aad5edd67e2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 17:59:08.738915 master-0 kubenswrapper[7553]: I0318 17:59:08.738889 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5e216493-e343-4c59-a3c1-5aad5edd67e2-var-lock\") pod \"installer-4-master-0\" (UID: \"5e216493-e343-4c59-a3c1-5aad5edd67e2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 17:59:08.763130 master-0 kubenswrapper[7553]: I0318 17:59:08.763076 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e216493-e343-4c59-a3c1-5aad5edd67e2-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5e216493-e343-4c59-a3c1-5aad5edd67e2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 17:59:08.802164 master-0 kubenswrapper[7553]: I0318 17:59:08.802098 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 17:59:09.302780 master-0 kubenswrapper[7553]: I0318 17:59:09.302725 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 18 17:59:09.316345 master-0 kubenswrapper[7553]: W0318 17:59:09.316254 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5e216493_e343_4c59_a3c1_5aad5edd67e2.slice/crio-62b095928168d47377d353f0ce39eebc777747ee26de9ec57d0eb0a49ec53d3a WatchSource:0}: Error finding container 62b095928168d47377d353f0ce39eebc777747ee26de9ec57d0eb0a49ec53d3a: Status 404 returned error can't find the container with id 62b095928168d47377d353f0ce39eebc777747ee26de9ec57d0eb0a49ec53d3a Mar 18 17:59:09.603194 master-0 kubenswrapper[7553]: I0318 17:59:09.603038 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5e216493-e343-4c59-a3c1-5aad5edd67e2","Type":"ContainerStarted","Data":"62b095928168d47377d353f0ce39eebc777747ee26de9ec57d0eb0a49ec53d3a"} Mar 18 17:59:10.615076 master-0 kubenswrapper[7553]: I0318 17:59:10.614984 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5e216493-e343-4c59-a3c1-5aad5edd67e2","Type":"ContainerStarted","Data":"b80c144acadff41c49bf3614230955b846d46e4c70083852e45c512d06842840"} Mar 18 17:59:10.654379 master-0 kubenswrapper[7553]: I0318 17:59:10.654236 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=2.654202909 podStartE2EDuration="2.654202909s" podCreationTimestamp="2026-03-18 17:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:59:10.645053459 +0000 UTC m=+1040.790888132" watchObservedRunningTime="2026-03-18 
17:59:10.654202909 +0000 UTC m=+1040.800037622" Mar 18 17:59:13.118417 master-0 kubenswrapper[7553]: I0318 17:59:13.118339 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 17:59:15.054087 master-0 kubenswrapper[7553]: I0318 17:59:15.053985 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:59:15.055196 master-0 kubenswrapper[7553]: E0318 17:59:15.054351 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:59:17.053660 master-0 kubenswrapper[7553]: I0318 17:59:17.053580 7553 scope.go:117] "RemoveContainer" containerID="2b3a93a1f208538619b8c053a215774b7a5b76ad51695ab4679fe93b9c8aef84" Mar 18 17:59:17.054502 master-0 kubenswrapper[7553]: E0318 17:59:17.053800 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-vpjmp_openshift-cluster-storage-operator(7d39d93e-9be3-47e1-a44e-be2d18b55446)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" podUID="7d39d93e-9be3-47e1-a44e-be2d18b55446" Mar 18 17:59:29.038459 master-0 kubenswrapper[7553]: I0318 17:59:29.037886 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"] Mar 18 17:59:29.038459 master-0 
kubenswrapper[7553]: I0318 17:59:29.038356 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" podUID="ff10351e-9378-4e25-87df-90ace60e5d16" containerName="installer" containerID="cri-o://21e0077e6fdd9559bfd3ede636836e5d1c3c79c17e113399eead6f42be9f3307" gracePeriod=30 Mar 18 17:59:29.053523 master-0 kubenswrapper[7553]: I0318 17:59:29.053460 7553 scope.go:117] "RemoveContainer" containerID="2b3a93a1f208538619b8c053a215774b7a5b76ad51695ab4679fe93b9c8aef84" Mar 18 17:59:29.053803 master-0 kubenswrapper[7553]: E0318 17:59:29.053760 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-vpjmp_openshift-cluster-storage-operator(7d39d93e-9be3-47e1-a44e-be2d18b55446)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" podUID="7d39d93e-9be3-47e1-a44e-be2d18b55446" Mar 18 17:59:29.053993 master-0 kubenswrapper[7553]: I0318 17:59:29.053951 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:59:29.054396 master-0 kubenswrapper[7553]: E0318 17:59:29.054334 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:59:32.025256 master-0 kubenswrapper[7553]: I0318 17:59:32.025182 7553 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 18 17:59:32.026541 master-0 kubenswrapper[7553]: I0318 17:59:32.026504 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 17:59:32.070351 master-0 kubenswrapper[7553]: I0318 17:59:32.070295 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-var-lock\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 17:59:32.070351 master-0 kubenswrapper[7553]: I0318 17:59:32.070348 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 17:59:32.070695 master-0 kubenswrapper[7553]: I0318 17:59:32.070386 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 17:59:32.076309 master-0 kubenswrapper[7553]: I0318 17:59:32.074636 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 18 17:59:32.171587 master-0 kubenswrapper[7553]: I0318 17:59:32.171520 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"installer-3-master-0\" (UID: 
\"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 17:59:32.171900 master-0 kubenswrapper[7553]: I0318 17:59:32.171871 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-var-lock\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 17:59:32.171950 master-0 kubenswrapper[7553]: I0318 17:59:32.171907 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 17:59:32.171950 master-0 kubenswrapper[7553]: I0318 17:59:32.171933 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-var-lock\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 17:59:32.172021 master-0 kubenswrapper[7553]: I0318 17:59:32.171973 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 17:59:32.188609 master-0 kubenswrapper[7553]: I0318 17:59:32.188554 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " 
pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 17:59:32.348439 master-0 kubenswrapper[7553]: I0318 17:59:32.348346 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 17:59:32.814321 master-0 kubenswrapper[7553]: W0318 17:59:32.814222 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4285e80c_1ff9_42b3_9692_9f2ab6b61916.slice/crio-f34d77ba93703fe1437d1652719d678aefcfb27c7b8ba0e8d8cf97f2d8fb7718 WatchSource:0}: Error finding container f34d77ba93703fe1437d1652719d678aefcfb27c7b8ba0e8d8cf97f2d8fb7718: Status 404 returned error can't find the container with id f34d77ba93703fe1437d1652719d678aefcfb27c7b8ba0e8d8cf97f2d8fb7718 Mar 18 17:59:32.820064 master-0 kubenswrapper[7553]: I0318 17:59:32.820025 7553 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 18 17:59:33.826158 master-0 kubenswrapper[7553]: I0318 17:59:33.826073 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"4285e80c-1ff9-42b3-9692-9f2ab6b61916","Type":"ContainerStarted","Data":"7af43e761f47509ec1402b4287569aac08cd400280ac0f2b280a0b47c6c678f0"} Mar 18 17:59:33.826158 master-0 kubenswrapper[7553]: I0318 17:59:33.826160 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"4285e80c-1ff9-42b3-9692-9f2ab6b61916","Type":"ContainerStarted","Data":"f34d77ba93703fe1437d1652719d678aefcfb27c7b8ba0e8d8cf97f2d8fb7718"} Mar 18 17:59:33.859797 master-0 kubenswrapper[7553]: I0318 17:59:33.859634 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=1.8595984269999999 podStartE2EDuration="1.859598427s" podCreationTimestamp="2026-03-18 17:59:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:59:33.850670474 +0000 UTC m=+1063.996505187" watchObservedRunningTime="2026-03-18 17:59:33.859598427 +0000 UTC m=+1064.005433100" Mar 18 17:59:39.877494 master-0 kubenswrapper[7553]: I0318 17:59:39.877356 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-retry-1-master-0_ff10351e-9378-4e25-87df-90ace60e5d16/installer/0.log" Mar 18 17:59:39.877494 master-0 kubenswrapper[7553]: I0318 17:59:39.877454 7553 generic.go:334] "Generic (PLEG): container finished" podID="ff10351e-9378-4e25-87df-90ace60e5d16" containerID="21e0077e6fdd9559bfd3ede636836e5d1c3c79c17e113399eead6f42be9f3307" exitCode=1 Mar 18 17:59:39.878117 master-0 kubenswrapper[7553]: I0318 17:59:39.877493 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" event={"ID":"ff10351e-9378-4e25-87df-90ace60e5d16","Type":"ContainerDied","Data":"21e0077e6fdd9559bfd3ede636836e5d1c3c79c17e113399eead6f42be9f3307"} Mar 18 17:59:40.009284 master-0 kubenswrapper[7553]: I0318 17:59:40.009044 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-retry-1-master-0_ff10351e-9378-4e25-87df-90ace60e5d16/installer/0.log" Mar 18 17:59:40.009486 master-0 kubenswrapper[7553]: I0318 17:59:40.009328 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Mar 18 17:59:40.055011 master-0 kubenswrapper[7553]: E0318 17:59:40.054917 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-approver-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" podUID="92153864-7959-4482-bf24-c8db36435fb5" Mar 18 17:59:40.058839 master-0 kubenswrapper[7553]: I0318 17:59:40.058785 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:59:40.059149 master-0 kubenswrapper[7553]: E0318 17:59:40.059112 7553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:59:40.110117 master-0 kubenswrapper[7553]: I0318 17:59:40.109892 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff10351e-9378-4e25-87df-90ace60e5d16-var-lock\") pod \"ff10351e-9378-4e25-87df-90ace60e5d16\" (UID: \"ff10351e-9378-4e25-87df-90ace60e5d16\") " Mar 18 17:59:40.110117 master-0 kubenswrapper[7553]: I0318 17:59:40.109971 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff10351e-9378-4e25-87df-90ace60e5d16-kubelet-dir\") pod \"ff10351e-9378-4e25-87df-90ace60e5d16\" (UID: \"ff10351e-9378-4e25-87df-90ace60e5d16\") " Mar 18 17:59:40.110117 master-0 kubenswrapper[7553]: 
I0318 17:59:40.109965 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff10351e-9378-4e25-87df-90ace60e5d16-var-lock" (OuterVolumeSpecName: "var-lock") pod "ff10351e-9378-4e25-87df-90ace60e5d16" (UID: "ff10351e-9378-4e25-87df-90ace60e5d16"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:59:40.110117 master-0 kubenswrapper[7553]: I0318 17:59:40.110016 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff10351e-9378-4e25-87df-90ace60e5d16-kube-api-access\") pod \"ff10351e-9378-4e25-87df-90ace60e5d16\" (UID: \"ff10351e-9378-4e25-87df-90ace60e5d16\") " Mar 18 17:59:40.110117 master-0 kubenswrapper[7553]: I0318 17:59:40.110068 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff10351e-9378-4e25-87df-90ace60e5d16-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ff10351e-9378-4e25-87df-90ace60e5d16" (UID: "ff10351e-9378-4e25-87df-90ace60e5d16"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:59:40.110794 master-0 kubenswrapper[7553]: I0318 17:59:40.110492 7553 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff10351e-9378-4e25-87df-90ace60e5d16-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 17:59:40.110794 master-0 kubenswrapper[7553]: I0318 17:59:40.110541 7553 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff10351e-9378-4e25-87df-90ace60e5d16-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:59:40.113348 master-0 kubenswrapper[7553]: I0318 17:59:40.113318 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff10351e-9378-4e25-87df-90ace60e5d16-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ff10351e-9378-4e25-87df-90ace60e5d16" (UID: "ff10351e-9378-4e25-87df-90ace60e5d16"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:59:40.212177 master-0 kubenswrapper[7553]: I0318 17:59:40.212114 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff10351e-9378-4e25-87df-90ace60e5d16-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 17:59:40.885142 master-0 kubenswrapper[7553]: I0318 17:59:40.885073 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-retry-1-master-0_ff10351e-9378-4e25-87df-90ace60e5d16/installer/0.log" Mar 18 17:59:40.885142 master-0 kubenswrapper[7553]: I0318 17:59:40.885135 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" event={"ID":"ff10351e-9378-4e25-87df-90ace60e5d16","Type":"ContainerDied","Data":"5b5768e7c3863053d5f0700738ab22492634ebadad91f6b6c8f2086da29b8dd5"} Mar 18 17:59:40.885753 master-0 kubenswrapper[7553]: I0318 17:59:40.885178 7553 scope.go:117] "RemoveContainer" containerID="21e0077e6fdd9559bfd3ede636836e5d1c3c79c17e113399eead6f42be9f3307" Mar 18 17:59:40.885753 master-0 kubenswrapper[7553]: I0318 17:59:40.885318 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Mar 18 17:59:40.927325 master-0 kubenswrapper[7553]: I0318 17:59:40.926819 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"] Mar 18 17:59:40.963448 master-0 kubenswrapper[7553]: I0318 17:59:40.963364 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"] Mar 18 17:59:41.247589 master-0 kubenswrapper[7553]: I0318 17:59:41.247420 7553 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 17:59:41.247880 master-0 kubenswrapper[7553]: I0318 17:59:41.247830 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" containerID="cri-o://c0003daaaf5a355b3cb392bb03905611a5e11defed3a5bf40942d6e99ba55bcb" gracePeriod=30 Mar 18 17:59:41.249580 master-0 kubenswrapper[7553]: I0318 17:59:41.249549 7553 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 17:59:41.250101 master-0 kubenswrapper[7553]: E0318 17:59:41.250081 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 17:59:41.250198 master-0 kubenswrapper[7553]: I0318 17:59:41.250187 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 17:59:41.250290 master-0 kubenswrapper[7553]: E0318 17:59:41.250259 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 17:59:41.250357 master-0 kubenswrapper[7553]: I0318 17:59:41.250347 7553 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 17:59:41.250423 master-0 kubenswrapper[7553]: E0318 17:59:41.250413 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 17:59:41.250494 master-0 kubenswrapper[7553]: I0318 17:59:41.250482 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 17:59:41.250575 master-0 kubenswrapper[7553]: E0318 17:59:41.250562 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff10351e-9378-4e25-87df-90ace60e5d16" containerName="installer" Mar 18 17:59:41.250648 master-0 kubenswrapper[7553]: I0318 17:59:41.250636 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff10351e-9378-4e25-87df-90ace60e5d16" containerName="installer" Mar 18 17:59:41.250875 master-0 kubenswrapper[7553]: I0318 17:59:41.250857 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 17:59:41.250973 master-0 kubenswrapper[7553]: I0318 17:59:41.250959 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 17:59:41.251055 master-0 kubenswrapper[7553]: I0318 17:59:41.251045 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff10351e-9378-4e25-87df-90ace60e5d16" containerName="installer" Mar 18 17:59:41.251406 master-0 kubenswrapper[7553]: I0318 17:59:41.251391 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 17:59:41.252442 master-0 kubenswrapper[7553]: I0318 17:59:41.252422 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 17:59:41.333442 master-0 kubenswrapper[7553]: I0318 17:59:41.333372 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 17:59:41.333662 master-0 kubenswrapper[7553]: I0318 17:59:41.333557 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 17:59:41.383854 master-0 kubenswrapper[7553]: I0318 17:59:41.383784 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 17:59:41.434705 master-0 kubenswrapper[7553]: I0318 17:59:41.434669 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 17:59:41.434877 master-0 kubenswrapper[7553]: I0318 17:59:41.434787 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 17:59:41.434877 master-0 
kubenswrapper[7553]: I0318 17:59:41.434852 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 17:59:41.434969 master-0 kubenswrapper[7553]: I0318 17:59:41.434914 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 17:59:41.441219 master-0 kubenswrapper[7553]: I0318 17:59:41.437665 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 17:59:41.471363 master-0 kubenswrapper[7553]: I0318 17:59:41.470641 7553 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="d95a26b3-70d6-4f47-b342-b1b1b1c9b7db" Mar 18 17:59:41.536330 master-0 kubenswrapper[7553]: I0318 17:59:41.536169 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 18 17:59:41.536560 master-0 kubenswrapper[7553]: I0318 17:59:41.536310 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets" (OuterVolumeSpecName: "secrets") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:59:41.536560 master-0 kubenswrapper[7553]: I0318 17:59:41.536429 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 18 17:59:41.536560 master-0 kubenswrapper[7553]: I0318 17:59:41.536479 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs" (OuterVolumeSpecName: "logs") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:59:41.536875 master-0 kubenswrapper[7553]: I0318 17:59:41.536846 7553 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 17:59:41.536978 master-0 kubenswrapper[7553]: I0318 17:59:41.536881 7553 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") on node \"master-0\" DevicePath \"\"" Mar 18 17:59:41.676023 master-0 kubenswrapper[7553]: I0318 17:59:41.675940 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 17:59:41.708637 master-0 kubenswrapper[7553]: W0318 17:59:41.708566 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8413125cf444e5c95f023c5dd9c6151e.slice/crio-89705fd182a90dfe140ac5efc8c14b16140f0a05f824bdb1f27db7295abcee76 WatchSource:0}: Error finding container 89705fd182a90dfe140ac5efc8c14b16140f0a05f824bdb1f27db7295abcee76: Status 404 returned error can't find the container with id 89705fd182a90dfe140ac5efc8c14b16140f0a05f824bdb1f27db7295abcee76 Mar 18 17:59:41.896112 master-0 kubenswrapper[7553]: I0318 17:59:41.896051 7553 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="c0003daaaf5a355b3cb392bb03905611a5e11defed3a5bf40942d6e99ba55bcb" exitCode=0 Mar 18 17:59:41.904662 master-0 kubenswrapper[7553]: I0318 17:59:41.896139 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95d9ed6b43dde926cc6bff2e2109470565941e7ec534301d19b34350a3fd9914" Mar 18 17:59:41.904662 master-0 kubenswrapper[7553]: I0318 17:59:41.896160 7553 scope.go:117] "RemoveContainer" containerID="39e81d7022f76aa50f44926362dbcc435bd580e0e562220512ebed69c23461e5" Mar 18 17:59:41.904662 master-0 kubenswrapper[7553]: I0318 17:59:41.896261 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 17:59:41.904662 master-0 kubenswrapper[7553]: I0318 17:59:41.902684 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"89705fd182a90dfe140ac5efc8c14b16140f0a05f824bdb1f27db7295abcee76"} Mar 18 17:59:41.904662 master-0 kubenswrapper[7553]: I0318 17:59:41.904102 7553 generic.go:334] "Generic (PLEG): container finished" podID="5e216493-e343-4c59-a3c1-5aad5edd67e2" containerID="b80c144acadff41c49bf3614230955b846d46e4c70083852e45c512d06842840" exitCode=0 Mar 18 17:59:41.904662 master-0 kubenswrapper[7553]: I0318 17:59:41.904124 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5e216493-e343-4c59-a3c1-5aad5edd67e2","Type":"ContainerDied","Data":"b80c144acadff41c49bf3614230955b846d46e4c70083852e45c512d06842840"} Mar 18 17:59:42.061297 master-0 kubenswrapper[7553]: I0318 17:59:42.061243 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83737980b9ee109184b1d78e942cf36" path="/var/lib/kubelet/pods/c83737980b9ee109184b1d78e942cf36/volumes" Mar 18 17:59:42.061917 master-0 kubenswrapper[7553]: I0318 17:59:42.061887 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff10351e-9378-4e25-87df-90ace60e5d16" path="/var/lib/kubelet/pods/ff10351e-9378-4e25-87df-90ace60e5d16/volumes" Mar 18 17:59:42.062324 master-0 kubenswrapper[7553]: I0318 17:59:42.062294 7553 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Mar 18 17:59:42.077720 master-0 kubenswrapper[7553]: I0318 17:59:42.077662 7553 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 17:59:42.077892 master-0 kubenswrapper[7553]: I0318 17:59:42.077722 7553 
kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="d95a26b3-70d6-4f47-b342-b1b1b1c9b7db" Mar 18 17:59:42.082137 master-0 kubenswrapper[7553]: I0318 17:59:42.082070 7553 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 17:59:42.082202 master-0 kubenswrapper[7553]: I0318 17:59:42.082136 7553 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="d95a26b3-70d6-4f47-b342-b1b1b1c9b7db" Mar 18 17:59:42.921571 master-0 kubenswrapper[7553]: I0318 17:59:42.921495 7553 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="54007053bb13b45932056c940a92e3590d13348f18ea18bf943b7365ae07e843" exitCode=0 Mar 18 17:59:42.922623 master-0 kubenswrapper[7553]: I0318 17:59:42.921900 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerDied","Data":"54007053bb13b45932056c940a92e3590d13348f18ea18bf943b7365ae07e843"} Mar 18 17:59:43.054077 master-0 kubenswrapper[7553]: I0318 17:59:43.054003 7553 scope.go:117] "RemoveContainer" containerID="2b3a93a1f208538619b8c053a215774b7a5b76ad51695ab4679fe93b9c8aef84" Mar 18 17:59:43.379344 master-0 kubenswrapper[7553]: I0318 17:59:43.379305 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 17:59:43.478423 master-0 kubenswrapper[7553]: I0318 17:59:43.478377 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e216493-e343-4c59-a3c1-5aad5edd67e2-kube-api-access\") pod \"5e216493-e343-4c59-a3c1-5aad5edd67e2\" (UID: \"5e216493-e343-4c59-a3c1-5aad5edd67e2\") " Mar 18 17:59:43.478601 master-0 kubenswrapper[7553]: I0318 17:59:43.478509 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e216493-e343-4c59-a3c1-5aad5edd67e2-kubelet-dir\") pod \"5e216493-e343-4c59-a3c1-5aad5edd67e2\" (UID: \"5e216493-e343-4c59-a3c1-5aad5edd67e2\") " Mar 18 17:59:43.478656 master-0 kubenswrapper[7553]: I0318 17:59:43.478624 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5e216493-e343-4c59-a3c1-5aad5edd67e2-var-lock\") pod \"5e216493-e343-4c59-a3c1-5aad5edd67e2\" (UID: \"5e216493-e343-4c59-a3c1-5aad5edd67e2\") " Mar 18 17:59:43.479117 master-0 kubenswrapper[7553]: I0318 17:59:43.479090 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e216493-e343-4c59-a3c1-5aad5edd67e2-var-lock" (OuterVolumeSpecName: "var-lock") pod "5e216493-e343-4c59-a3c1-5aad5edd67e2" (UID: "5e216493-e343-4c59-a3c1-5aad5edd67e2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:59:43.479579 master-0 kubenswrapper[7553]: I0318 17:59:43.479551 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e216493-e343-4c59-a3c1-5aad5edd67e2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5e216493-e343-4c59-a3c1-5aad5edd67e2" (UID: "5e216493-e343-4c59-a3c1-5aad5edd67e2"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 17:59:43.483680 master-0 kubenswrapper[7553]: I0318 17:59:43.483648 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e216493-e343-4c59-a3c1-5aad5edd67e2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5e216493-e343-4c59-a3c1-5aad5edd67e2" (UID: "5e216493-e343-4c59-a3c1-5aad5edd67e2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 17:59:43.580952 master-0 kubenswrapper[7553]: I0318 17:59:43.580908 7553 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5e216493-e343-4c59-a3c1-5aad5edd67e2-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 17:59:43.580952 master-0 kubenswrapper[7553]: I0318 17:59:43.580950 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5e216493-e343-4c59-a3c1-5aad5edd67e2-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 17:59:43.581075 master-0 kubenswrapper[7553]: I0318 17:59:43.580968 7553 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e216493-e343-4c59-a3c1-5aad5edd67e2-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 17:59:43.935167 master-0 kubenswrapper[7553]: I0318 17:59:43.935102 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"8dcb28e72b5e3d607cb0442eacc9389954c39aee0b6eacf8e715a788f8bfb9f4"} Mar 18 17:59:43.935167 master-0 kubenswrapper[7553]: I0318 17:59:43.935149 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"29879c8abe23bc57a7aa348868d9ac01b7adc18d9c27f2fd1e733adaceab54a9"} Mar 18 17:59:43.935167 master-0 kubenswrapper[7553]: I0318 17:59:43.935162 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"043429a2f809c60d137c59f31d4e052f1930753c2d8c68039661e422f3f8def6"} Mar 18 17:59:43.936200 master-0 kubenswrapper[7553]: I0318 17:59:43.936168 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 17:59:43.938080 master-0 kubenswrapper[7553]: I0318 17:59:43.938053 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5e216493-e343-4c59-a3c1-5aad5edd67e2","Type":"ContainerDied","Data":"62b095928168d47377d353f0ce39eebc777747ee26de9ec57d0eb0a49ec53d3a"} Mar 18 17:59:43.938080 master-0 kubenswrapper[7553]: I0318 17:59:43.938076 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62b095928168d47377d353f0ce39eebc777747ee26de9ec57d0eb0a49ec53d3a" Mar 18 17:59:43.938184 master-0 kubenswrapper[7553]: I0318 17:59:43.938115 7553 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 17:59:43.945098 master-0 kubenswrapper[7553]: I0318 17:59:43.943905 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/4.log" Mar 18 17:59:43.945098 master-0 kubenswrapper[7553]: I0318 17:59:43.943962 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" event={"ID":"7d39d93e-9be3-47e1-a44e-be2d18b55446","Type":"ContainerStarted","Data":"3fd3e8266c47086b9b59fc8ae98a1dda02eef8263c89ab559a0ca53656ccb64a"} Mar 18 17:59:43.974575 master-0 kubenswrapper[7553]: I0318 17:59:43.973898 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.9738710619999997 podStartE2EDuration="2.973871062s" podCreationTimestamp="2026-03-18 17:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:59:43.970433938 +0000 UTC m=+1074.116268651" watchObservedRunningTime="2026-03-18 17:59:43.973871062 +0000 UTC m=+1074.119705735" Mar 18 17:59:50.577250 master-0 kubenswrapper[7553]: I0318 17:59:50.577159 7553 scope.go:117] "RemoveContainer" containerID="fba66f2362f417736e585bd1e5c757b3e12cdb7f292f9ad5781307faed635e6f" Mar 18 17:59:52.081952 master-0 kubenswrapper[7553]: I0318 17:59:52.081836 7553 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 17:59:53.053129 master-0 kubenswrapper[7553]: I0318 17:59:53.053036 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 17:59:53.053511 master-0 kubenswrapper[7553]: E0318 17:59:53.053258 7553 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-7dff898856-kfzkl_openshift-cloud-controller-manager-operator(0751c002-fe0e-4f13-bb9c-9accd8ca0df3)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" podUID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" Mar 18 17:59:54.023373 master-0 kubenswrapper[7553]: I0318 17:59:54.023316 7553 generic.go:334] "Generic (PLEG): container finished" podID="c57f282a-829b-41b2-827a-f4bc598245a2" containerID="40665a65803f46b85c5841b161668f9dc53195967c924003dedfb177dd66895a" exitCode=0 Mar 18 17:59:54.023373 master-0 kubenswrapper[7553]: I0318 17:59:54.023367 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" event={"ID":"c57f282a-829b-41b2-827a-f4bc598245a2","Type":"ContainerDied","Data":"40665a65803f46b85c5841b161668f9dc53195967c924003dedfb177dd66895a"} Mar 18 17:59:54.024449 master-0 kubenswrapper[7553]: I0318 17:59:54.023396 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" event={"ID":"c57f282a-829b-41b2-827a-f4bc598245a2","Type":"ContainerStarted","Data":"ca7ffe50c570056829fed06297cd9d3056bcc20924619ac09d0f25bf08da641a"} Mar 18 17:59:54.024449 master-0 kubenswrapper[7553]: I0318 17:59:54.023414 7553 scope.go:117] "RemoveContainer" containerID="f00456b24dab05375bbbeac67add4ae933f0340a0db97ddc7192a2436c6be1ec" Mar 18 17:59:54.053931 master-0 kubenswrapper[7553]: I0318 17:59:54.053664 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 17:59:54.099706 master-0 kubenswrapper[7553]: I0318 17:59:54.099600 7553 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 17:59:54.103088 master-0 kubenswrapper[7553]: I0318 17:59:54.103019 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:59:54.103088 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:59:54.103088 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:59:54.103088 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:59:54.103324 master-0 kubenswrapper[7553]: I0318 17:59:54.103096 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:59:54.107875 master-0 kubenswrapper[7553]: I0318 17:59:54.107405 7553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=2.107380563 podStartE2EDuration="2.107380563s" podCreationTimestamp="2026-03-18 17:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 17:59:54.105920103 +0000 UTC m=+1084.251754826" watchObservedRunningTime="2026-03-18 17:59:54.107380563 +0000 UTC m=+1084.253215246" Mar 18 17:59:55.102905 master-0 kubenswrapper[7553]: I0318 17:59:55.102837 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:59:55.102905 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:59:55.102905 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:59:55.102905 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:59:55.103588 master-0 kubenswrapper[7553]: I0318 17:59:55.102925 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:59:56.102791 master-0 kubenswrapper[7553]: I0318 17:59:56.102723 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:59:56.102791 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:59:56.102791 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:59:56.102791 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:59:56.103506 master-0 kubenswrapper[7553]: I0318 17:59:56.102814 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:59:57.103620 master-0 kubenswrapper[7553]: I0318 17:59:57.103534 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:59:57.103620 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:59:57.103620 
master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:59:57.103620 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:59:57.104453 master-0 kubenswrapper[7553]: I0318 17:59:57.103636 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:59:58.102551 master-0 kubenswrapper[7553]: I0318 17:59:58.102486 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:59:58.102551 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:59:58.102551 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:59:58.102551 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:59:58.103025 master-0 kubenswrapper[7553]: I0318 17:59:58.102577 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 17:59:59.102361 master-0 kubenswrapper[7553]: I0318 17:59:59.102236 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 17:59:59.102361 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 17:59:59.102361 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 17:59:59.102361 master-0 kubenswrapper[7553]: healthz check failed Mar 18 17:59:59.103464 master-0 kubenswrapper[7553]: I0318 17:59:59.102388 7553 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:00.106433 master-0 kubenswrapper[7553]: I0318 18:00:00.106268 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:00.106433 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:00.106433 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:00.106433 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:00.107665 master-0 kubenswrapper[7553]: I0318 18:00:00.106433 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:01.105361 master-0 kubenswrapper[7553]: I0318 18:00:01.105242 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:01.105361 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:01.105361 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:01.105361 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:01.105361 master-0 kubenswrapper[7553]: I0318 18:00:01.105355 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 18 18:00:02.102822 master-0 kubenswrapper[7553]: I0318 18:00:02.102714 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:02.102822 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:02.102822 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:02.102822 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:02.104056 master-0 kubenswrapper[7553]: I0318 18:00:02.102831 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:03.100235 master-0 kubenswrapper[7553]: I0318 18:00:03.100155 7553 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 18:00:03.102968 master-0 kubenswrapper[7553]: I0318 18:00:03.102923 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:03.102968 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:03.102968 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:03.102968 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:03.103585 master-0 kubenswrapper[7553]: I0318 18:00:03.103000 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 18 18:00:04.053482 master-0 kubenswrapper[7553]: I0318 18:00:04.053388 7553 scope.go:117] "RemoveContainer" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" Mar 18 18:00:04.102115 master-0 kubenswrapper[7553]: I0318 18:00:04.101838 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:04.102115 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:04.102115 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:04.102115 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:04.102115 master-0 kubenswrapper[7553]: I0318 18:00:04.101912 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:05.102730 master-0 kubenswrapper[7553]: I0318 18:00:05.102651 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:05.102730 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:05.102730 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:05.102730 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:05.103784 master-0 kubenswrapper[7553]: I0318 18:00:05.102750 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 
18:00:05.116424 master-0 kubenswrapper[7553]: I0318 18:00:05.116365 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/6.log" Mar 18 18:00:05.117211 master-0 kubenswrapper[7553]: I0318 18:00:05.117176 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/config-sync-controllers/0.log" Mar 18 18:00:05.117963 master-0 kubenswrapper[7553]: I0318 18:00:05.117925 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/cluster-cloud-controller-manager/0.log" Mar 18 18:00:05.118191 master-0 kubenswrapper[7553]: I0318 18:00:05.118156 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" event={"ID":"0751c002-fe0e-4f13-bb9c-9accd8ca0df3","Type":"ContainerStarted","Data":"1b08ac70c38429bbff29986b374d973cde314329d2bb9b7699834ee6b93a1d85"} Mar 18 18:00:06.102770 master-0 kubenswrapper[7553]: I0318 18:00:06.102703 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:06.102770 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:06.102770 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:06.102770 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:06.104074 master-0 kubenswrapper[7553]: I0318 18:00:06.104011 7553 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:07.103310 master-0 kubenswrapper[7553]: I0318 18:00:07.103146 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:07.103310 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:07.103310 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:07.103310 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:07.105190 master-0 kubenswrapper[7553]: I0318 18:00:07.103373 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:08.102637 master-0 kubenswrapper[7553]: I0318 18:00:08.102537 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:08.102637 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:08.102637 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:08.102637 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:08.102637 master-0 kubenswrapper[7553]: I0318 18:00:08.102634 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
18 18:00:09.102448 master-0 kubenswrapper[7553]: I0318 18:00:09.102327 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:09.102448 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:09.102448 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:09.102448 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:09.103811 master-0 kubenswrapper[7553]: I0318 18:00:09.102475 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:10.102510 master-0 kubenswrapper[7553]: I0318 18:00:10.102398 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:10.102510 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:10.102510 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:10.102510 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:10.103409 master-0 kubenswrapper[7553]: I0318 18:00:10.102551 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:11.102523 master-0 kubenswrapper[7553]: I0318 18:00:11.102419 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:11.102523 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:11.102523 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:11.102523 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:11.102523 master-0 kubenswrapper[7553]: I0318 18:00:11.102503 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:12.103798 master-0 kubenswrapper[7553]: I0318 18:00:12.103669 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:12.103798 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:12.103798 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:12.103798 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:12.103798 master-0 kubenswrapper[7553]: I0318 18:00:12.103793 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:13.103392 master-0 kubenswrapper[7553]: I0318 18:00:13.102590 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:13.103392 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:13.103392 
master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:13.103392 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:13.103392 master-0 kubenswrapper[7553]: I0318 18:00:13.102730 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:14.102357 master-0 kubenswrapper[7553]: I0318 18:00:14.102259 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:14.102357 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:14.102357 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:14.102357 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:14.102357 master-0 kubenswrapper[7553]: I0318 18:00:14.102352 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:15.103129 master-0 kubenswrapper[7553]: I0318 18:00:15.103047 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:15.103129 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:15.103129 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:15.103129 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:15.104259 master-0 kubenswrapper[7553]: I0318 18:00:15.103158 7553 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:16.103043 master-0 kubenswrapper[7553]: I0318 18:00:16.102935 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:16.103043 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:16.103043 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:16.103043 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:16.104257 master-0 kubenswrapper[7553]: I0318 18:00:16.103040 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:17.102313 master-0 kubenswrapper[7553]: I0318 18:00:17.102205 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:17.102313 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:17.102313 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:17.102313 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:17.102885 master-0 kubenswrapper[7553]: I0318 18:00:17.102335 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 18 18:00:18.101715 master-0 kubenswrapper[7553]: I0318 18:00:18.101648 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:18.101715 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:18.101715 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:18.101715 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:18.102857 master-0 kubenswrapper[7553]: I0318 18:00:18.102526 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:19.102121 master-0 kubenswrapper[7553]: I0318 18:00:19.102033 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:19.102121 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:19.102121 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:19.102121 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:19.102845 master-0 kubenswrapper[7553]: I0318 18:00:19.102126 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:20.103493 master-0 kubenswrapper[7553]: I0318 18:00:20.103383 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:20.103493 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:20.103493 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:20.103493 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:20.103493 master-0 kubenswrapper[7553]: I0318 18:00:20.103470 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:21.101980 master-0 kubenswrapper[7553]: I0318 18:00:21.101909 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 18:00:21.101980 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld Mar 18 18:00:21.101980 master-0 kubenswrapper[7553]: [+]process-running ok Mar 18 18:00:21.101980 master-0 kubenswrapper[7553]: healthz check failed Mar 18 18:00:21.102453 master-0 kubenswrapper[7553]: I0318 18:00:21.101979 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:00:21.461828 master-0 kubenswrapper[7553]: I0318 18:00:21.461639 7553 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 18:00:21.462738 master-0 kubenswrapper[7553]: E0318 18:00:21.462068 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e216493-e343-4c59-a3c1-5aad5edd67e2" 
containerName="installer" Mar 18 18:00:21.462738 master-0 kubenswrapper[7553]: I0318 18:00:21.462092 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e216493-e343-4c59-a3c1-5aad5edd67e2" containerName="installer" Mar 18 18:00:21.462738 master-0 kubenswrapper[7553]: I0318 18:00:21.462360 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e216493-e343-4c59-a3c1-5aad5edd67e2" containerName="installer" Mar 18 18:00:21.463146 master-0 kubenswrapper[7553]: I0318 18:00:21.463102 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.464181 master-0 kubenswrapper[7553]: I0318 18:00:21.464080 7553 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 18 18:00:21.464982 master-0 kubenswrapper[7553]: I0318 18:00:21.464900 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" containerID="cri-o://3c6f642b736991fd20242697f9273f8f6a126bc6027f7c5ddd27e70569fd9054" gracePeriod=15 Mar 18 18:00:21.464982 master-0 kubenswrapper[7553]: I0318 18:00:21.464932 7553 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b07f4eb106a117d2a3aedb26bb538e640c6545e341eb4a44bae581e10c947c17" gracePeriod=15 Mar 18 18:00:21.466637 master-0 kubenswrapper[7553]: I0318 18:00:21.466200 7553 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 18:00:21.468535 master-0 kubenswrapper[7553]: E0318 18:00:21.468456 7553 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 18:00:21.468535 master-0 kubenswrapper[7553]: I0318 18:00:21.468520 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 18:00:21.468815 master-0 kubenswrapper[7553]: E0318 18:00:21.468581 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 18:00:21.468815 master-0 kubenswrapper[7553]: I0318 18:00:21.468604 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 18:00:21.468815 master-0 kubenswrapper[7553]: E0318 18:00:21.468647 7553 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 18:00:21.468815 master-0 kubenswrapper[7553]: I0318 18:00:21.468666 7553 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 18:00:21.469169 master-0 kubenswrapper[7553]: I0318 18:00:21.468937 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 18:00:21.469169 master-0 kubenswrapper[7553]: I0318 18:00:21.469004 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 18:00:21.469169 master-0 kubenswrapper[7553]: I0318 18:00:21.469036 7553 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 18:00:21.473114 master-0 kubenswrapper[7553]: I0318 18:00:21.473051 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:21.536323 master-0 kubenswrapper[7553]: E0318 18:00:21.536204 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.560314 master-0 kubenswrapper[7553]: I0318 18:00:21.559541 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.560314 master-0 kubenswrapper[7553]: I0318 18:00:21.559695 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.560314 master-0 kubenswrapper[7553]: I0318 18:00:21.559989 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.560314 master-0 kubenswrapper[7553]: I0318 18:00:21.560053 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.561032 master-0 kubenswrapper[7553]: I0318 18:00:21.560862 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.569513 master-0 kubenswrapper[7553]: E0318 18:00:21.569468 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:21.662619 master-0 kubenswrapper[7553]: I0318 18:00:21.662517 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.662827 master-0 kubenswrapper[7553]: I0318 18:00:21.662637 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:21.662827 master-0 kubenswrapper[7553]: I0318 18:00:21.662702 7553 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.662827 master-0 kubenswrapper[7553]: I0318 18:00:21.662740 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:21.662827 master-0 kubenswrapper[7553]: I0318 18:00:21.662801 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.663177 master-0 kubenswrapper[7553]: I0318 18:00:21.662871 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.663177 master-0 kubenswrapper[7553]: I0318 18:00:21.662941 7553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:21.663177 
master-0 kubenswrapper[7553]: I0318 18:00:21.663023 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.663177 master-0 kubenswrapper[7553]: I0318 18:00:21.663158 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.663626 master-0 kubenswrapper[7553]: I0318 18:00:21.663222 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.663626 master-0 kubenswrapper[7553]: I0318 18:00:21.663311 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.663626 master-0 kubenswrapper[7553]: I0318 18:00:21.663365 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.663626 master-0 kubenswrapper[7553]: I0318 18:00:21.663410 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.764697 master-0 kubenswrapper[7553]: I0318 18:00:21.764541 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:21.764896 master-0 kubenswrapper[7553]: I0318 18:00:21.764738 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:21.764896 master-0 kubenswrapper[7553]: I0318 18:00:21.764817 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:21.765112 master-0 kubenswrapper[7553]: I0318 18:00:21.764925 7553 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:21.765112 master-0 kubenswrapper[7553]: I0318 18:00:21.765018 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:21.765112 master-0 kubenswrapper[7553]: I0318 18:00:21.765088 7553 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:21.837966 master-0 kubenswrapper[7553]: I0318 18:00:21.837839 7553 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:21.870550 master-0 kubenswrapper[7553]: I0318 18:00:21.870463 7553 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:21.875884 master-0 kubenswrapper[7553]: W0318 18:00:21.875809 7553 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e7a82869988463543d3d8dd1f0b5fe3.slice/crio-1ea3dbfa5dfb13332a0f1977477497e5220b4bba3727358399c90d2b8664c6d7 WatchSource:0}: Error finding container 1ea3dbfa5dfb13332a0f1977477497e5220b4bba3727358399c90d2b8664c6d7: Status 404 returned error can't find the container with id 1ea3dbfa5dfb13332a0f1977477497e5220b4bba3727358399c90d2b8664c6d7 Mar 18 18:00:21.899628 master-0 kubenswrapper[7553]: E0318 18:00:21.899438 7553 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189e0161979e0cc5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:8e7a82869988463543d3d8dd1f0b5fe3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 18:00:21.897727173 +0000 UTC m=+1112.043561886,LastTimestamp:2026-03-18 18:00:21.897727173 +0000 UTC m=+1112.043561886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 18:00:21.915502 master-0 kubenswrapper[7553]: W0318 18:00:21.915146 7553 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb45ea2ef1cf2bc9d1d994d6538ae0a64.slice/crio-796303c5f2a585dc8ab37c0a21b453aa0dd8797dea11dc3eee7c72e5dad9b158 WatchSource:0}: Error finding container 796303c5f2a585dc8ab37c0a21b453aa0dd8797dea11dc3eee7c72e5dad9b158: Status 404 returned error can't find the container with id 796303c5f2a585dc8ab37c0a21b453aa0dd8797dea11dc3eee7c72e5dad9b158
Mar 18 18:00:22.102236 master-0 kubenswrapper[7553]: I0318 18:00:22.102178 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 18:00:22.102236 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 18:00:22.102236 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 18:00:22.102236 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 18:00:22.102580 master-0 kubenswrapper[7553]: I0318 18:00:22.102245 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 18:00:22.281910 master-0 kubenswrapper[7553]: I0318 18:00:22.281835 7553 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="b07f4eb106a117d2a3aedb26bb538e640c6545e341eb4a44bae581e10c947c17" exitCode=0
Mar 18 18:00:22.286542 master-0 kubenswrapper[7553]: I0318 18:00:22.286477 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"25f0059cb7f28e57d54587af9a075f46b53e453c6a901d45bc7aae8b1f8557d8"}
Mar 18 18:00:22.286542 master-0 kubenswrapper[7553]: I0318 18:00:22.286534 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"1ea3dbfa5dfb13332a0f1977477497e5220b4bba3727358399c90d2b8664c6d7"}
Mar 18 18:00:22.289423 master-0 kubenswrapper[7553]: E0318 18:00:22.289228 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:00:22.290512 master-0 kubenswrapper[7553]: I0318 18:00:22.290352 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"4d1607d2ed493b59fcd5008ae3ef413f5adc41abf3185470578dd70106590036"}
Mar 18 18:00:22.290512 master-0 kubenswrapper[7553]: I0318 18:00:22.290381 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"796303c5f2a585dc8ab37c0a21b453aa0dd8797dea11dc3eee7c72e5dad9b158"}
Mar 18 18:00:22.291300 master-0 kubenswrapper[7553]: E0318 18:00:22.291231 7553 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 18:00:22.292504 master-0 kubenswrapper[7553]: I0318 18:00:22.292434 7553 generic.go:334] "Generic (PLEG): container finished" podID="4285e80c-1ff9-42b3-9692-9f2ab6b61916" containerID="7af43e761f47509ec1402b4287569aac08cd400280ac0f2b280a0b47c6c678f0" exitCode=0
Mar 18 18:00:22.292634 master-0 kubenswrapper[7553]: I0318 18:00:22.292516 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"4285e80c-1ff9-42b3-9692-9f2ab6b61916","Type":"ContainerDied","Data":"7af43e761f47509ec1402b4287569aac08cd400280ac0f2b280a0b47c6c678f0"}
Mar 18 18:00:22.293927 master-0 kubenswrapper[7553]: I0318 18:00:22.293875 7553 status_manager.go:851] "Failed to get status for pod" podUID="4285e80c-1ff9-42b3-9692-9f2ab6b61916" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:00:23.105456 master-0 kubenswrapper[7553]: I0318 18:00:23.105197 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 18:00:23.105456 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 18:00:23.105456 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 18:00:23.105456 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 18:00:23.105456 master-0 kubenswrapper[7553]: I0318 18:00:23.105351 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 18:00:23.305892 master-0 kubenswrapper[7553]: I0318 18:00:23.305802 7553 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="4d1607d2ed493b59fcd5008ae3ef413f5adc41abf3185470578dd70106590036" exitCode=0
Mar 18 18:00:23.306155 master-0 kubenswrapper[7553]: I0318 18:00:23.306029 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"4d1607d2ed493b59fcd5008ae3ef413f5adc41abf3185470578dd70106590036"}
Mar 18 18:00:23.306155 master-0 kubenswrapper[7553]: I0318 18:00:23.306071 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"532613e49ea9e30ac1511410a4da92cfd72901d45720ba547c93524371db0317"}
Mar 18 18:00:23.306155 master-0 kubenswrapper[7553]: I0318 18:00:23.306088 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"f5d16ced4f31a4b0a79823e1a1297685f8a42b35e0b0e91f27b749a47d590dcc"}
Mar 18 18:00:23.306155 master-0 kubenswrapper[7553]: I0318 18:00:23.306100 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"f46251b34f5dd065effa27bf2880569964ed0fdb5993d9d07458a0798550963d"}
Mar 18 18:00:23.827916 master-0 kubenswrapper[7553]: I0318 18:00:23.827875 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 18:00:23.869667 master-0 kubenswrapper[7553]: I0318 18:00:23.869602 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 18:00:23.923059 master-0 kubenswrapper[7553]: I0318 18:00:23.922995 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") "
Mar 18 18:00:23.923059 master-0 kubenswrapper[7553]: I0318 18:00:23.923050 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-var-lock\") pod \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") "
Mar 18 18:00:23.923699 master-0 kubenswrapper[7553]: I0318 18:00:23.923102 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") "
Mar 18 18:00:23.923699 master-0 kubenswrapper[7553]: I0318 18:00:23.923169 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") "
Mar 18 18:00:23.923699 master-0 kubenswrapper[7553]: I0318 18:00:23.923204 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") "
Mar 18 18:00:23.923699 master-0 kubenswrapper[7553]: I0318 18:00:23.923243 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kubelet-dir\") pod \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") "
Mar 18 18:00:23.923699 master-0 kubenswrapper[7553]: I0318 18:00:23.923261 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") "
Mar 18 18:00:23.923699 master-0 kubenswrapper[7553]: I0318 18:00:23.923369 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-var-lock" (OuterVolumeSpecName: "var-lock") pod "4285e80c-1ff9-42b3-9692-9f2ab6b61916" (UID: "4285e80c-1ff9-42b3-9692-9f2ab6b61916"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:00:23.923895 master-0 kubenswrapper[7553]: I0318 18:00:23.923711 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets" (OuterVolumeSpecName: "secrets") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:00:23.923895 master-0 kubenswrapper[7553]: I0318 18:00:23.923749 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:00:23.923895 master-0 kubenswrapper[7553]: I0318 18:00:23.923711 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:00:23.924018 master-0 kubenswrapper[7553]: I0318 18:00:23.923992 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") "
Mar 18 18:00:23.924059 master-0 kubenswrapper[7553]: I0318 18:00:23.924044 7553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") "
Mar 18 18:00:23.924406 master-0 kubenswrapper[7553]: I0318 18:00:23.924380 7553 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Mar 18 18:00:23.924406 master-0 kubenswrapper[7553]: I0318 18:00:23.924405 7553 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 18:00:23.924500 master-0 kubenswrapper[7553]: I0318 18:00:23.924416 7553 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Mar 18 18:00:23.924500 master-0 kubenswrapper[7553]: I0318 18:00:23.924428 7553 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") on node \"master-0\" DevicePath \"\""
Mar 18 18:00:23.932400 master-0 kubenswrapper[7553]: I0318 18:00:23.932358 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4285e80c-1ff9-42b3-9692-9f2ab6b61916" (UID: "4285e80c-1ff9-42b3-9692-9f2ab6b61916"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:00:23.932474 master-0 kubenswrapper[7553]: I0318 18:00:23.932385 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs" (OuterVolumeSpecName: "logs") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:00:23.932474 master-0 kubenswrapper[7553]: I0318 18:00:23.932386 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:00:23.933357 master-0 kubenswrapper[7553]: I0318 18:00:23.933266 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config" (OuterVolumeSpecName: "config") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:00:23.935369 master-0 kubenswrapper[7553]: I0318 18:00:23.935328 7553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4285e80c-1ff9-42b3-9692-9f2ab6b61916" (UID: "4285e80c-1ff9-42b3-9692-9f2ab6b61916"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:00:24.026505 master-0 kubenswrapper[7553]: I0318 18:00:24.026347 7553 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 18:00:24.026505 master-0 kubenswrapper[7553]: I0318 18:00:24.026396 7553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 18:00:24.026505 master-0 kubenswrapper[7553]: I0318 18:00:24.026411 7553 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 18:00:24.026505 master-0 kubenswrapper[7553]: I0318 18:00:24.026423 7553 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:00:24.026505 master-0 kubenswrapper[7553]: I0318 18:00:24.026437 7553 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 18:00:24.062178 master-0 kubenswrapper[7553]: I0318 18:00:24.062116 7553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fac1b46a11e49501805e891baae4a9" path="/var/lib/kubelet/pods/49fac1b46a11e49501805e891baae4a9/volumes"
Mar 18 18:00:24.062918 master-0 kubenswrapper[7553]: I0318 18:00:24.062882 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 18 18:00:24.102894 master-0 kubenswrapper[7553]: I0318 18:00:24.102814 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 18:00:24.102894 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 18:00:24.102894 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 18:00:24.102894 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 18:00:24.103181 master-0 kubenswrapper[7553]: I0318 18:00:24.102910 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 18:00:24.329315 master-0 kubenswrapper[7553]: I0318 18:00:24.328741 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 18:00:24.349626 master-0 kubenswrapper[7553]: I0318 18:00:24.349585 7553 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="3c6f642b736991fd20242697f9273f8f6a126bc6027f7c5ddd27e70569fd9054" exitCode=0
Mar 18 18:00:24.349832 master-0 kubenswrapper[7553]: I0318 18:00:24.349812 7553 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 18:00:25.101385 master-0 kubenswrapper[7553]: I0318 18:00:25.101213 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 18:00:25.101385 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 18:00:25.101385 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 18:00:25.101385 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 18:00:25.101926 master-0 kubenswrapper[7553]: I0318 18:00:25.101882 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 18:00:26.103936 master-0 kubenswrapper[7553]: I0318 18:00:26.103849 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 18:00:26.103936 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 18:00:26.103936 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 18:00:26.103936 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 18:00:26.105170 master-0 kubenswrapper[7553]: I0318 18:00:26.103944 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 18:00:27.103210 master-0 kubenswrapper[7553]: I0318 18:00:27.103135 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 18:00:27.103210 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 18:00:27.103210 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 18:00:27.103210 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 18:00:27.103685 master-0 kubenswrapper[7553]: I0318 18:00:27.103234 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 18:00:28.103548 master-0 kubenswrapper[7553]: I0318 18:00:28.103254 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 18:00:28.103548 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 18:00:28.103548 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 18:00:28.103548 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 18:00:28.103548 master-0 kubenswrapper[7553]: I0318 18:00:28.103377 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 18:00:29.103580 master-0 kubenswrapper[7553]: I0318 18:00:29.103490 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 18:00:29.103580 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 18:00:29.103580 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 18:00:29.103580 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 18:00:29.104631 master-0 kubenswrapper[7553]: I0318 18:00:29.103589 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 18:00:29.321184 master-0 kubenswrapper[7553]: E0318 18:00:29.321099 7553 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.268s"
Mar 18 18:00:29.321517 master-0 kubenswrapper[7553]: I0318 18:00:29.321189 7553 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"4285e80c-1ff9-42b3-9692-9f2ab6b61916","Type":"ContainerDied","Data":"f34d77ba93703fe1437d1652719d678aefcfb27c7b8ba0e8d8cf97f2d8fb7718"}
Mar 18 18:00:29.321517 master-0 kubenswrapper[7553]: I0318 18:00:29.321234 7553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f34d77ba93703fe1437d1652719d678aefcfb27c7b8ba0e8d8cf97f2d8fb7718"
Mar 18 18:00:29.334757 master-0 kubenswrapper[7553]: I0318 18:00:29.334662 7553 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 18 18:00:30.102645 master-0 kubenswrapper[7553]: I0318 18:00:30.102597 7553 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-m5dh4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 18:00:30.102645 master-0 kubenswrapper[7553]: [-]has-synced failed: reason withheld
Mar 18 18:00:30.102645 master-0 kubenswrapper[7553]: [+]process-running ok
Mar 18 18:00:30.102645 master-0 kubenswrapper[7553]: healthz check failed
Mar 18 18:00:30.102913 master-0 kubenswrapper[7553]: I0318 18:00:30.102671 7553 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" podUID="c57f282a-829b-41b2-827a-f4bc598245a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 18:00:30.316790 master-0 kubenswrapper[7553]: I0318 18:00:30.316730 7553 request.go:700] Waited for 1.003451658s, retries: 1, retry-after: 5s - retry-reason: 503 - request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=12045&timeout=40m0s&timeoutSeconds=2400&watch=true
Mar 18 18:00:30.454237 master-0 kubenswrapper[7553]: I0318 18:00:30.454138 7553 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log"
Mar 18 18:00:30.456553 master-0 kubenswrapper[7553]: I0318 18:00:30.456504 7553 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="f887def1d9b97d72f25ddb564fd0ecbae06aba6b64de1338a239aa08a40c032f" exitCode=255
Mar 18 18:00:30.717968 master-0 kubenswrapper[7553]: I0318 18:00:30.717898 7553 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 18:00:30.718014 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Mar 18 18:00:30.743106 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Mar 18 18:00:30.743435 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Mar 18 18:00:30.744525 master-0 systemd[1]: kubelet.service: Consumed 2min 50.329s CPU time.
Mar 18 18:00:30.765091 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 18 18:00:30.882840 master-0 kubenswrapper[30278]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 18:00:30.883342 master-0 kubenswrapper[30278]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 18 18:00:30.883395 master-0 kubenswrapper[30278]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 18:00:30.883463 master-0 kubenswrapper[30278]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 18:00:30.883510 master-0 kubenswrapper[30278]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 18 18:00:30.883553 master-0 kubenswrapper[30278]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 18:00:30.883750 master-0 kubenswrapper[30278]: I0318 18:00:30.883697 30278 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 18 18:00:30.892892 master-0 kubenswrapper[30278]: W0318 18:00:30.892856 30278 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 18:00:30.893116 master-0 kubenswrapper[30278]: W0318 18:00:30.893107 30278 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 18:00:30.893168 master-0 kubenswrapper[30278]: W0318 18:00:30.893161 30278 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 18:00:30.893216 master-0 kubenswrapper[30278]: W0318 18:00:30.893209 30278 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 18:00:30.893263 master-0 kubenswrapper[30278]: W0318 18:00:30.893256 30278 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 18:00:30.893344 master-0 kubenswrapper[30278]: W0318 18:00:30.893335 30278 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 18:00:30.893403 master-0 kubenswrapper[30278]: W0318 18:00:30.893395 30278 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 18:00:30.893451 master-0 kubenswrapper[30278]: W0318 18:00:30.893444 30278 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 18:00:30.893498 master-0 kubenswrapper[30278]: W0318 18:00:30.893490 30278 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 18:00:30.893544 master-0 kubenswrapper[30278]: W0318 18:00:30.893537 30278 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 18:00:30.893590 master-0 kubenswrapper[30278]: W0318 18:00:30.893583 30278 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 18:00:30.893635 master-0 kubenswrapper[30278]: W0318 18:00:30.893628 30278 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 18:00:30.893685 master-0 kubenswrapper[30278]: W0318 18:00:30.893677 30278 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 18:00:30.893734 master-0 kubenswrapper[30278]: W0318 18:00:30.893726 30278 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 18:00:30.893782 master-0 kubenswrapper[30278]: W0318 18:00:30.893775 30278 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 18:00:30.893830 master-0 kubenswrapper[30278]: W0318 18:00:30.893823 30278 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 18:00:30.893877 master-0 kubenswrapper[30278]: W0318 18:00:30.893870 30278 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 18:00:30.893927 master-0 kubenswrapper[30278]: W0318 18:00:30.893919 30278 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 18:00:30.893975 master-0 kubenswrapper[30278]: W0318 18:00:30.893967 30278 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 18:00:30.894029 master-0 kubenswrapper[30278]: W0318 18:00:30.894021 30278 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 18:00:30.894077 master-0 kubenswrapper[30278]: W0318 18:00:30.894069 30278 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 18:00:30.894125 master-0 kubenswrapper[30278]: W0318 18:00:30.894117 30278 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 18:00:30.894173 master-0 kubenswrapper[30278]: W0318 18:00:30.894165 30278 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 18:00:30.894220 master-0 kubenswrapper[30278]: W0318 18:00:30.894212 30278 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 18:00:30.894285 master-0 kubenswrapper[30278]: W0318 18:00:30.894261 30278 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 18:00:30.894336 master-0 kubenswrapper[30278]: W0318 18:00:30.894327 30278 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 18:00:30.894397 master-0 kubenswrapper[30278]: W0318 18:00:30.894389 30278 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 18:00:30.894446 master-0 kubenswrapper[30278]: W0318 18:00:30.894438 30278 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 18:00:30.894494 master-0 kubenswrapper[30278]: W0318 18:00:30.894486 30278 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 18:00:30.894564 master-0 kubenswrapper[30278]: W0318 18:00:30.894556 30278 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 18:00:30.894614 master-0 kubenswrapper[30278]: W0318 18:00:30.894606 30278 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 18:00:30.894664 master-0 kubenswrapper[30278]: W0318 18:00:30.894656 30278 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 18:00:30.894712 master-0 kubenswrapper[30278]: W0318 18:00:30.894704 30278 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 18:00:30.894763 master-0 kubenswrapper[30278]: W0318 18:00:30.894755 30278 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 18:00:30.894819 master-0 kubenswrapper[30278]: W0318 18:00:30.894799 30278 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 18:00:30.894869 master-0 kubenswrapper[30278]: W0318 18:00:30.894861 30278 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 18:00:30.894919 master-0 kubenswrapper[30278]: W0318 18:00:30.894912 30278 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 18:00:30.894976 master-0 kubenswrapper[30278]: W0318 18:00:30.894968 30278 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 18:00:30.895023 master-0 kubenswrapper[30278]: W0318 18:00:30.895015 30278 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 18:00:30.895071 master-0 kubenswrapper[30278]: W0318 18:00:30.895063 30278 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 18:00:30.895118 master-0 kubenswrapper[30278]: W0318 18:00:30.895111 30278 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 18:00:30.895166 master-0 kubenswrapper[30278]: W0318 18:00:30.895159 30278 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 18:00:30.895213 master-0 kubenswrapper[30278]: W0318 18:00:30.895206 30278 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 18:00:30.895263 master-0 kubenswrapper[30278]: W0318 18:00:30.895255 30278 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 18:00:30.895330 master-0 kubenswrapper[30278]: W0318 18:00:30.895321 30278 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 18:00:30.895378 master-0 kubenswrapper[30278]: W0318 18:00:30.895371 30278 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 18:00:30.895440 master-0 kubenswrapper[30278]: W0318 18:00:30.895432 30278 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 18:00:30.895490 master-0 kubenswrapper[30278]: W0318 18:00:30.895482 30278 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 18:00:30.895537 master-0 kubenswrapper[30278]: W0318 18:00:30.895529 30278 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 18:00:30.895588 master-0 kubenswrapper[30278]: W0318 18:00:30.895581 30278 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 18:00:30.895640 master-0 kubenswrapper[30278]: W0318 18:00:30.895632 30278 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 18:00:30.895687 master-0 kubenswrapper[30278]: W0318 18:00:30.895680 30278 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 18:00:30.895735 master-0 kubenswrapper[30278]: W0318 18:00:30.895726 30278 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 18:00:30.895797 master-0 kubenswrapper[30278]: W0318 18:00:30.895789 30278 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 18:00:30.895846 master-0 kubenswrapper[30278]: W0318 18:00:30.895838 30278 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 18:00:30.895892 master-0 kubenswrapper[30278]: W0318 18:00:30.895885 30278 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 18:00:30.895939 master-0 kubenswrapper[30278]: W0318 18:00:30.895931 30278 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 18:00:30.895989 master-0 kubenswrapper[30278]: W0318 18:00:30.895981 30278 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 18:00:30.896039 master-0 kubenswrapper[30278]: W0318 18:00:30.896031 30278 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 18:00:30.896090 master-0 kubenswrapper[30278]: W0318 18:00:30.896083 30278 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 18:00:30.896138 master-0 kubenswrapper[30278]: W0318 18:00:30.896131 30278 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 18:00:30.896185 master-0 kubenswrapper[30278]: W0318 18:00:30.896178 30278 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 18:00:30.896233 master-0 kubenswrapper[30278]: W0318 18:00:30.896225 30278 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 18:00:30.896686 master-0 kubenswrapper[30278]: W0318 18:00:30.896676 30278 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 18:00:30.896748 master-0 kubenswrapper[30278]: W0318 18:00:30.896740 30278 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 18:00:30.896797 master-0 kubenswrapper[30278]: W0318 18:00:30.896789 30278 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 18:00:30.896844 master-0 kubenswrapper[30278]: W0318 18:00:30.896837 30278 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 18:00:30.896894 master-0 kubenswrapper[30278]: W0318 18:00:30.896886 30278 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 18:00:30.896943 master-0 kubenswrapper[30278]: W0318 18:00:30.896935 30278 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 18:00:30.896989 master-0 kubenswrapper[30278]: W0318 18:00:30.896982 30278 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 18:00:30.897038 master-0 kubenswrapper[30278]: W0318 18:00:30.897030 30278 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 18:00:30.897092 master-0 kubenswrapper[30278]: W0318 18:00:30.897084 30278 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 18:00:30.897310 master-0 kubenswrapper[30278]: I0318 18:00:30.897267 30278 flags.go:64] FLAG: --address="0.0.0.0" Mar 18 18:00:30.897380 master-0 kubenswrapper[30278]: I0318 18:00:30.897364 30278 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 18 18:00:30.897442 master-0 kubenswrapper[30278]: I0318 18:00:30.897432 30278 flags.go:64] FLAG: --anonymous-auth="true" Mar 18 18:00:30.897493 master-0 kubenswrapper[30278]: I0318 18:00:30.897483 30278 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 18 18:00:30.897542 master-0 kubenswrapper[30278]: I0318 18:00:30.897533 30278 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 18 18:00:30.897596 master-0 kubenswrapper[30278]: I0318 18:00:30.897586 30278 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 18 18:00:30.897649 master-0 kubenswrapper[30278]: I0318 18:00:30.897639 30278 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 18 18:00:30.897695 master-0 kubenswrapper[30278]: I0318 18:00:30.897687 30278 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 18 18:00:30.897803 master-0 kubenswrapper[30278]: I0318 18:00:30.897794 30278 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 18 18:00:30.897867 master-0 kubenswrapper[30278]: I0318 18:00:30.897857 30278 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 18 18:00:30.897921 master-0 kubenswrapper[30278]: I0318 18:00:30.897912 30278 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 18 18:00:30.897970 master-0 kubenswrapper[30278]: I0318 18:00:30.897962 30278 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 18 18:00:30.898023 master-0 kubenswrapper[30278]: I0318 18:00:30.898015 30278 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 18 18:00:30.898073 master-0 
kubenswrapper[30278]: I0318 18:00:30.898065 30278 flags.go:64] FLAG: --cgroup-root="" Mar 18 18:00:30.898118 master-0 kubenswrapper[30278]: I0318 18:00:30.898110 30278 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 18 18:00:30.898167 master-0 kubenswrapper[30278]: I0318 18:00:30.898158 30278 flags.go:64] FLAG: --client-ca-file="" Mar 18 18:00:30.898229 master-0 kubenswrapper[30278]: I0318 18:00:30.898221 30278 flags.go:64] FLAG: --cloud-config="" Mar 18 18:00:30.898299 master-0 kubenswrapper[30278]: I0318 18:00:30.898290 30278 flags.go:64] FLAG: --cloud-provider="" Mar 18 18:00:30.898350 master-0 kubenswrapper[30278]: I0318 18:00:30.898340 30278 flags.go:64] FLAG: --cluster-dns="[]" Mar 18 18:00:30.898405 master-0 kubenswrapper[30278]: I0318 18:00:30.898396 30278 flags.go:64] FLAG: --cluster-domain="" Mar 18 18:00:30.898462 master-0 kubenswrapper[30278]: I0318 18:00:30.898454 30278 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 18 18:00:30.898527 master-0 kubenswrapper[30278]: I0318 18:00:30.898518 30278 flags.go:64] FLAG: --config-dir="" Mar 18 18:00:30.898578 master-0 kubenswrapper[30278]: I0318 18:00:30.898570 30278 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 18 18:00:30.898629 master-0 kubenswrapper[30278]: I0318 18:00:30.898619 30278 flags.go:64] FLAG: --container-log-max-files="5" Mar 18 18:00:30.898678 master-0 kubenswrapper[30278]: I0318 18:00:30.898669 30278 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 18 18:00:30.898729 master-0 kubenswrapper[30278]: I0318 18:00:30.898721 30278 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 18 18:00:30.898774 master-0 kubenswrapper[30278]: I0318 18:00:30.898766 30278 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 18 18:00:30.898827 master-0 kubenswrapper[30278]: I0318 18:00:30.898819 30278 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 18 18:00:30.898876 master-0 kubenswrapper[30278]: I0318 
18:00:30.898867 30278 flags.go:64] FLAG: --contention-profiling="false" Mar 18 18:00:30.898920 master-0 kubenswrapper[30278]: I0318 18:00:30.898912 30278 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 18 18:00:30.898968 master-0 kubenswrapper[30278]: I0318 18:00:30.898960 30278 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 18 18:00:30.899018 master-0 kubenswrapper[30278]: I0318 18:00:30.899010 30278 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 18 18:00:30.899066 master-0 kubenswrapper[30278]: I0318 18:00:30.899056 30278 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 18 18:00:30.899110 master-0 kubenswrapper[30278]: I0318 18:00:30.899102 30278 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 18 18:00:30.899160 master-0 kubenswrapper[30278]: I0318 18:00:30.899152 30278 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 18 18:00:30.899209 master-0 kubenswrapper[30278]: I0318 18:00:30.899201 30278 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 18 18:00:30.899259 master-0 kubenswrapper[30278]: I0318 18:00:30.899251 30278 flags.go:64] FLAG: --enable-load-reader="false" Mar 18 18:00:30.899329 master-0 kubenswrapper[30278]: I0318 18:00:30.899320 30278 flags.go:64] FLAG: --enable-server="true" Mar 18 18:00:30.899386 master-0 kubenswrapper[30278]: I0318 18:00:30.899374 30278 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 18 18:00:30.899497 master-0 kubenswrapper[30278]: I0318 18:00:30.899487 30278 flags.go:64] FLAG: --event-burst="100" Mar 18 18:00:30.899549 master-0 kubenswrapper[30278]: I0318 18:00:30.899540 30278 flags.go:64] FLAG: --event-qps="50" Mar 18 18:00:30.899599 master-0 kubenswrapper[30278]: I0318 18:00:30.899590 30278 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 18 18:00:30.899647 master-0 kubenswrapper[30278]: I0318 18:00:30.899639 30278 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 18 18:00:30.899694 master-0 kubenswrapper[30278]: I0318 18:00:30.899684 30278 
flags.go:64] FLAG: --eviction-hard="" Mar 18 18:00:30.899749 master-0 kubenswrapper[30278]: I0318 18:00:30.899740 30278 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 18 18:00:30.899798 master-0 kubenswrapper[30278]: I0318 18:00:30.899790 30278 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 18 18:00:30.899850 master-0 kubenswrapper[30278]: I0318 18:00:30.899841 30278 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 18 18:00:30.899899 master-0 kubenswrapper[30278]: I0318 18:00:30.899890 30278 flags.go:64] FLAG: --eviction-soft="" Mar 18 18:00:30.899947 master-0 kubenswrapper[30278]: I0318 18:00:30.899939 30278 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 18 18:00:30.899998 master-0 kubenswrapper[30278]: I0318 18:00:30.899990 30278 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 18 18:00:30.900047 master-0 kubenswrapper[30278]: I0318 18:00:30.900039 30278 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 18 18:00:30.900100 master-0 kubenswrapper[30278]: I0318 18:00:30.900092 30278 flags.go:64] FLAG: --experimental-mounter-path="" Mar 18 18:00:30.900150 master-0 kubenswrapper[30278]: I0318 18:00:30.900141 30278 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 18 18:00:30.900201 master-0 kubenswrapper[30278]: I0318 18:00:30.900193 30278 flags.go:64] FLAG: --fail-swap-on="true" Mar 18 18:00:30.900251 master-0 kubenswrapper[30278]: I0318 18:00:30.900241 30278 flags.go:64] FLAG: --feature-gates="" Mar 18 18:00:30.900312 master-0 kubenswrapper[30278]: I0318 18:00:30.900302 30278 flags.go:64] FLAG: --file-check-frequency="20s" Mar 18 18:00:30.900367 master-0 kubenswrapper[30278]: I0318 18:00:30.900358 30278 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 18 18:00:30.900414 master-0 kubenswrapper[30278]: I0318 18:00:30.900406 30278 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 18 18:00:30.900463 master-0 kubenswrapper[30278]: I0318 18:00:30.900455 30278 flags.go:64] FLAG: 
--healthz-bind-address="127.0.0.1" Mar 18 18:00:30.900512 master-0 kubenswrapper[30278]: I0318 18:00:30.900503 30278 flags.go:64] FLAG: --healthz-port="10248" Mar 18 18:00:30.900562 master-0 kubenswrapper[30278]: I0318 18:00:30.900553 30278 flags.go:64] FLAG: --help="false" Mar 18 18:00:30.900614 master-0 kubenswrapper[30278]: I0318 18:00:30.900606 30278 flags.go:64] FLAG: --hostname-override="" Mar 18 18:00:30.900662 master-0 kubenswrapper[30278]: I0318 18:00:30.900654 30278 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 18 18:00:30.900713 master-0 kubenswrapper[30278]: I0318 18:00:30.900705 30278 flags.go:64] FLAG: --http-check-frequency="20s" Mar 18 18:00:30.900763 master-0 kubenswrapper[30278]: I0318 18:00:30.900754 30278 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 18 18:00:30.900807 master-0 kubenswrapper[30278]: I0318 18:00:30.900799 30278 flags.go:64] FLAG: --image-credential-provider-config="" Mar 18 18:00:30.900857 master-0 kubenswrapper[30278]: I0318 18:00:30.900848 30278 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 18 18:00:30.900906 master-0 kubenswrapper[30278]: I0318 18:00:30.900898 30278 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 18 18:00:30.900960 master-0 kubenswrapper[30278]: I0318 18:00:30.900952 30278 flags.go:64] FLAG: --image-service-endpoint="" Mar 18 18:00:30.901005 master-0 kubenswrapper[30278]: I0318 18:00:30.900997 30278 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 18 18:00:30.901054 master-0 kubenswrapper[30278]: I0318 18:00:30.901046 30278 flags.go:64] FLAG: --kube-api-burst="100" Mar 18 18:00:30.901104 master-0 kubenswrapper[30278]: I0318 18:00:30.901095 30278 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 18 18:00:30.901153 master-0 kubenswrapper[30278]: I0318 18:00:30.901144 30278 flags.go:64] FLAG: --kube-api-qps="50" Mar 18 18:00:30.901198 master-0 kubenswrapper[30278]: I0318 18:00:30.901190 30278 flags.go:64] FLAG: --kube-reserved="" Mar 
18 18:00:30.901246 master-0 kubenswrapper[30278]: I0318 18:00:30.901238 30278 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 18 18:00:30.901322 master-0 kubenswrapper[30278]: I0318 18:00:30.901312 30278 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 18 18:00:30.901376 master-0 kubenswrapper[30278]: I0318 18:00:30.901367 30278 flags.go:64] FLAG: --kubelet-cgroups="" Mar 18 18:00:30.901425 master-0 kubenswrapper[30278]: I0318 18:00:30.901417 30278 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 18 18:00:30.901473 master-0 kubenswrapper[30278]: I0318 18:00:30.901465 30278 flags.go:64] FLAG: --lock-file="" Mar 18 18:00:30.901525 master-0 kubenswrapper[30278]: I0318 18:00:30.901517 30278 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 18 18:00:30.901573 master-0 kubenswrapper[30278]: I0318 18:00:30.901565 30278 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 18 18:00:30.901625 master-0 kubenswrapper[30278]: I0318 18:00:30.901614 30278 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 18 18:00:30.901678 master-0 kubenswrapper[30278]: I0318 18:00:30.901669 30278 flags.go:64] FLAG: --log-json-split-stream="false" Mar 18 18:00:30.901731 master-0 kubenswrapper[30278]: I0318 18:00:30.901720 30278 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 18 18:00:30.901780 master-0 kubenswrapper[30278]: I0318 18:00:30.901772 30278 flags.go:64] FLAG: --log-text-split-stream="false" Mar 18 18:00:30.901842 master-0 kubenswrapper[30278]: I0318 18:00:30.901833 30278 flags.go:64] FLAG: --logging-format="text" Mar 18 18:00:30.901898 master-0 kubenswrapper[30278]: I0318 18:00:30.901889 30278 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 18 18:00:30.901952 master-0 kubenswrapper[30278]: I0318 18:00:30.901943 30278 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 18 18:00:30.902003 master-0 kubenswrapper[30278]: I0318 18:00:30.901995 30278 flags.go:64] FLAG: --manifest-url="" Mar 18 
18:00:30.902058 master-0 kubenswrapper[30278]: I0318 18:00:30.902047 30278 flags.go:64] FLAG: --manifest-url-header="" Mar 18 18:00:30.902107 master-0 kubenswrapper[30278]: I0318 18:00:30.902098 30278 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 18 18:00:30.902156 master-0 kubenswrapper[30278]: I0318 18:00:30.902146 30278 flags.go:64] FLAG: --max-open-files="1000000" Mar 18 18:00:30.902200 master-0 kubenswrapper[30278]: I0318 18:00:30.902192 30278 flags.go:64] FLAG: --max-pods="110" Mar 18 18:00:30.902248 master-0 kubenswrapper[30278]: I0318 18:00:30.902240 30278 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 18 18:00:30.902308 master-0 kubenswrapper[30278]: I0318 18:00:30.902299 30278 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 18 18:00:30.902364 master-0 kubenswrapper[30278]: I0318 18:00:30.902356 30278 flags.go:64] FLAG: --memory-manager-policy="None" Mar 18 18:00:30.902417 master-0 kubenswrapper[30278]: I0318 18:00:30.902409 30278 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 18 18:00:30.902467 master-0 kubenswrapper[30278]: I0318 18:00:30.902457 30278 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 18 18:00:30.902544 master-0 kubenswrapper[30278]: I0318 18:00:30.902532 30278 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 18 18:00:30.902620 master-0 kubenswrapper[30278]: I0318 18:00:30.902599 30278 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 18 18:00:30.902686 master-0 kubenswrapper[30278]: I0318 18:00:30.902676 30278 flags.go:64] FLAG: --node-status-max-images="50" Mar 18 18:00:30.902745 master-0 kubenswrapper[30278]: I0318 18:00:30.902736 30278 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 18 18:00:30.902796 master-0 kubenswrapper[30278]: I0318 18:00:30.902788 30278 flags.go:64] FLAG: --oom-score-adj="-999" Mar 18 18:00:30.902930 master-0 kubenswrapper[30278]: I0318 
18:00:30.902920 30278 flags.go:64] FLAG: --pod-cidr="" Mar 18 18:00:30.903001 master-0 kubenswrapper[30278]: I0318 18:00:30.902986 30278 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422" Mar 18 18:00:30.903055 master-0 kubenswrapper[30278]: I0318 18:00:30.903047 30278 flags.go:64] FLAG: --pod-manifest-path="" Mar 18 18:00:30.903105 master-0 kubenswrapper[30278]: I0318 18:00:30.903096 30278 flags.go:64] FLAG: --pod-max-pids="-1" Mar 18 18:00:30.903150 master-0 kubenswrapper[30278]: I0318 18:00:30.903142 30278 flags.go:64] FLAG: --pods-per-core="0" Mar 18 18:00:30.903197 master-0 kubenswrapper[30278]: I0318 18:00:30.903189 30278 flags.go:64] FLAG: --port="10250" Mar 18 18:00:30.903251 master-0 kubenswrapper[30278]: I0318 18:00:30.903243 30278 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 18 18:00:30.903341 master-0 kubenswrapper[30278]: I0318 18:00:30.903332 30278 flags.go:64] FLAG: --provider-id="" Mar 18 18:00:30.903422 master-0 kubenswrapper[30278]: I0318 18:00:30.903412 30278 flags.go:64] FLAG: --qos-reserved="" Mar 18 18:00:30.903473 master-0 kubenswrapper[30278]: I0318 18:00:30.903465 30278 flags.go:64] FLAG: --read-only-port="10255" Mar 18 18:00:30.903522 master-0 kubenswrapper[30278]: I0318 18:00:30.903513 30278 flags.go:64] FLAG: --register-node="true" Mar 18 18:00:30.903566 master-0 kubenswrapper[30278]: I0318 18:00:30.903558 30278 flags.go:64] FLAG: --register-schedulable="true" Mar 18 18:00:30.903619 master-0 kubenswrapper[30278]: I0318 18:00:30.903606 30278 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 18 18:00:30.903667 master-0 kubenswrapper[30278]: I0318 18:00:30.903659 30278 flags.go:64] FLAG: --registry-burst="10" Mar 18 18:00:30.903720 master-0 kubenswrapper[30278]: I0318 18:00:30.903711 30278 flags.go:64] FLAG: --registry-qps="5" Mar 18 18:00:30.903771 master-0 
kubenswrapper[30278]: I0318 18:00:30.903763 30278 flags.go:64] FLAG: --reserved-cpus="" Mar 18 18:00:30.903818 master-0 kubenswrapper[30278]: I0318 18:00:30.903808 30278 flags.go:64] FLAG: --reserved-memory="" Mar 18 18:00:30.903867 master-0 kubenswrapper[30278]: I0318 18:00:30.903859 30278 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 18 18:00:30.903925 master-0 kubenswrapper[30278]: I0318 18:00:30.903916 30278 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 18 18:00:30.903974 master-0 kubenswrapper[30278]: I0318 18:00:30.903965 30278 flags.go:64] FLAG: --rotate-certificates="false" Mar 18 18:00:30.904022 master-0 kubenswrapper[30278]: I0318 18:00:30.904014 30278 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 18 18:00:30.904067 master-0 kubenswrapper[30278]: I0318 18:00:30.904059 30278 flags.go:64] FLAG: --runonce="false" Mar 18 18:00:30.904125 master-0 kubenswrapper[30278]: I0318 18:00:30.904116 30278 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 18 18:00:30.904175 master-0 kubenswrapper[30278]: I0318 18:00:30.904166 30278 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 18 18:00:30.904223 master-0 kubenswrapper[30278]: I0318 18:00:30.904215 30278 flags.go:64] FLAG: --seccomp-default="false" Mar 18 18:00:30.904285 master-0 kubenswrapper[30278]: I0318 18:00:30.904264 30278 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 18 18:00:30.904342 master-0 kubenswrapper[30278]: I0318 18:00:30.904333 30278 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 18 18:00:30.904389 master-0 kubenswrapper[30278]: I0318 18:00:30.904380 30278 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 18 18:00:30.904435 master-0 kubenswrapper[30278]: I0318 18:00:30.904427 30278 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 18 18:00:30.904489 master-0 kubenswrapper[30278]: I0318 18:00:30.904480 30278 flags.go:64] FLAG: --storage-driver-password="root" Mar 18 18:00:30.904540 master-0 
kubenswrapper[30278]: I0318 18:00:30.904532 30278 flags.go:64] FLAG: --storage-driver-secure="false" Mar 18 18:00:30.904592 master-0 kubenswrapper[30278]: I0318 18:00:30.904583 30278 flags.go:64] FLAG: --storage-driver-table="stats" Mar 18 18:00:30.904640 master-0 kubenswrapper[30278]: I0318 18:00:30.904632 30278 flags.go:64] FLAG: --storage-driver-user="root" Mar 18 18:00:30.904689 master-0 kubenswrapper[30278]: I0318 18:00:30.904680 30278 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 18 18:00:30.904735 master-0 kubenswrapper[30278]: I0318 18:00:30.904727 30278 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 18 18:00:30.904784 master-0 kubenswrapper[30278]: I0318 18:00:30.904776 30278 flags.go:64] FLAG: --system-cgroups="" Mar 18 18:00:30.904844 master-0 kubenswrapper[30278]: I0318 18:00:30.904830 30278 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 18 18:00:30.904895 master-0 kubenswrapper[30278]: I0318 18:00:30.904887 30278 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 18 18:00:30.904945 master-0 kubenswrapper[30278]: I0318 18:00:30.904936 30278 flags.go:64] FLAG: --tls-cert-file="" Mar 18 18:00:30.904996 master-0 kubenswrapper[30278]: I0318 18:00:30.904986 30278 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 18 18:00:30.905042 master-0 kubenswrapper[30278]: I0318 18:00:30.905033 30278 flags.go:64] FLAG: --tls-min-version="" Mar 18 18:00:30.905090 master-0 kubenswrapper[30278]: I0318 18:00:30.905082 30278 flags.go:64] FLAG: --tls-private-key-file="" Mar 18 18:00:30.905140 master-0 kubenswrapper[30278]: I0318 18:00:30.905131 30278 flags.go:64] FLAG: --topology-manager-policy="none" Mar 18 18:00:30.905188 master-0 kubenswrapper[30278]: I0318 18:00:30.905180 30278 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 18 18:00:30.905237 master-0 kubenswrapper[30278]: I0318 18:00:30.905229 30278 flags.go:64] FLAG: --topology-manager-scope="container" Mar 18 18:00:30.905312 master-0 
kubenswrapper[30278]: I0318 18:00:30.905292 30278 flags.go:64] FLAG: --v="2" Mar 18 18:00:30.905363 master-0 kubenswrapper[30278]: I0318 18:00:30.905352 30278 flags.go:64] FLAG: --version="false" Mar 18 18:00:30.905417 master-0 kubenswrapper[30278]: I0318 18:00:30.905407 30278 flags.go:64] FLAG: --vmodule="" Mar 18 18:00:30.905463 master-0 kubenswrapper[30278]: I0318 18:00:30.905454 30278 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 18 18:00:30.905512 master-0 kubenswrapper[30278]: I0318 18:00:30.905503 30278 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 18 18:00:30.905758 master-0 kubenswrapper[30278]: W0318 18:00:30.905748 30278 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 18:00:30.905817 master-0 kubenswrapper[30278]: W0318 18:00:30.905809 30278 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 18:00:30.905873 master-0 kubenswrapper[30278]: W0318 18:00:30.905865 30278 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 18:00:30.905923 master-0 kubenswrapper[30278]: W0318 18:00:30.905915 30278 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 18:00:30.905971 master-0 kubenswrapper[30278]: W0318 18:00:30.905963 30278 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 18:00:30.906018 master-0 kubenswrapper[30278]: W0318 18:00:30.906011 30278 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 18:00:30.906066 master-0 kubenswrapper[30278]: W0318 18:00:30.906058 30278 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 18:00:30.906116 master-0 kubenswrapper[30278]: W0318 18:00:30.906108 30278 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 18:00:30.906172 master-0 kubenswrapper[30278]: W0318 18:00:30.906164 30278 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 
18:00:30.906219 master-0 kubenswrapper[30278]: W0318 18:00:30.906211 30278 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 18:00:30.906285 master-0 kubenswrapper[30278]: W0318 18:00:30.906260 30278 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 18:00:30.906341 master-0 kubenswrapper[30278]: W0318 18:00:30.906333 30278 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 18:00:30.906390 master-0 kubenswrapper[30278]: W0318 18:00:30.906382 30278 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 18:00:30.906439 master-0 kubenswrapper[30278]: W0318 18:00:30.906431 30278 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 18:00:30.906499 master-0 kubenswrapper[30278]: W0318 18:00:30.906479 30278 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 18:00:30.906642 master-0 kubenswrapper[30278]: W0318 18:00:30.906632 30278 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 18:00:30.906694 master-0 kubenswrapper[30278]: W0318 18:00:30.906687 30278 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 18:00:30.906743 master-0 kubenswrapper[30278]: W0318 18:00:30.906735 30278 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 18:00:30.906791 master-0 kubenswrapper[30278]: W0318 18:00:30.906783 30278 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 18:00:30.906836 master-0 kubenswrapper[30278]: W0318 18:00:30.906828 30278 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 18:00:30.906883 master-0 kubenswrapper[30278]: W0318 18:00:30.906875 30278 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 18:00:30.906936 master-0 kubenswrapper[30278]: W0318 18:00:30.906928 30278 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 18:00:30.906985 master-0 
kubenswrapper[30278]: W0318 18:00:30.906977 30278 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 18:00:30.907032 master-0 kubenswrapper[30278]: W0318 18:00:30.907024 30278 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 18:00:30.907077 master-0 kubenswrapper[30278]: W0318 18:00:30.907070 30278 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 18:00:30.907120 master-0 kubenswrapper[30278]: W0318 18:00:30.907112 30278 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 18:00:30.907162 master-0 kubenswrapper[30278]: W0318 18:00:30.907155 30278 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 18:00:30.907205 master-0 kubenswrapper[30278]: W0318 18:00:30.907198 30278 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 18:00:30.907257 master-0 kubenswrapper[30278]: W0318 18:00:30.907249 30278 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 18:00:30.907331 master-0 kubenswrapper[30278]: W0318 18:00:30.907322 30278 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 18:00:30.907380 master-0 kubenswrapper[30278]: W0318 18:00:30.907373 30278 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 18:00:30.907428 master-0 kubenswrapper[30278]: W0318 18:00:30.907421 30278 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 18:00:30.907472 master-0 kubenswrapper[30278]: W0318 18:00:30.907465 30278 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 18:00:30.907521 master-0 kubenswrapper[30278]: W0318 18:00:30.907514 30278 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 18:00:30.907569 master-0 kubenswrapper[30278]: W0318 18:00:30.907561 30278 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 18:00:30.907623 master-0 kubenswrapper[30278]: W0318 
18:00:30.907615 30278 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 18:00:30.907683 master-0 kubenswrapper[30278]: W0318 18:00:30.907675 30278 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 18:00:30.907732 master-0 kubenswrapper[30278]: W0318 18:00:30.907725 30278 feature_gate.go:330] unrecognized feature gate: Example Mar 18 18:00:30.907780 master-0 kubenswrapper[30278]: W0318 18:00:30.907772 30278 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907816 30278 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907823 30278 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907827 30278 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907831 30278 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907834 30278 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907840 30278 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907845 30278 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907851 30278 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907855 30278 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907859 30278 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907863 30278 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907866 30278 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907870 30278 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907874 30278 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907878 30278 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907882 30278 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907886 30278 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907891 30278 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 18 18:00:30.908352 master-0 kubenswrapper[30278]: W0318 18:00:30.907895 30278 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 18:00:30.908853 master-0 kubenswrapper[30278]: W0318 18:00:30.907901 30278 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 18:00:30.908853 master-0 kubenswrapper[30278]: W0318 18:00:30.907905 30278 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 18:00:30.908853 master-0 kubenswrapper[30278]: W0318 18:00:30.907909 30278 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 18:00:30.908853 master-0 kubenswrapper[30278]: W0318 18:00:30.907914 30278 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 18 18:00:30.908853 master-0 kubenswrapper[30278]: W0318 18:00:30.907919 30278 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 18:00:30.908853 master-0 kubenswrapper[30278]: W0318 18:00:30.907923 30278 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 18:00:30.908853 master-0 kubenswrapper[30278]: W0318 18:00:30.907927 30278 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 18:00:30.908853 master-0 kubenswrapper[30278]: W0318 18:00:30.907931 30278 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 18:00:30.908853 master-0 kubenswrapper[30278]: W0318 18:00:30.907936 30278 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 18:00:30.908853 master-0 kubenswrapper[30278]: W0318 18:00:30.907939 30278 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 18:00:30.908853 master-0 kubenswrapper[30278]: W0318 18:00:30.907945 30278 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 18 18:00:30.908853 master-0 kubenswrapper[30278]: W0318 18:00:30.907950 30278 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 18:00:30.908853 master-0 kubenswrapper[30278]: W0318 18:00:30.907954 30278 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 18:00:30.908853 master-0 kubenswrapper[30278]: W0318 18:00:30.907958 30278 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 18:00:30.909186 master-0 kubenswrapper[30278]: I0318 18:00:30.907966 30278 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 18 18:00:30.913195 master-0 kubenswrapper[30278]: I0318 18:00:30.913161 30278 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 18 18:00:30.913195 master-0 kubenswrapper[30278]: I0318 18:00:30.913189 30278 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 18 18:00:30.913322 master-0 kubenswrapper[30278]: W0318 18:00:30.913247 30278 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 18:00:30.913322 master-0 kubenswrapper[30278]: W0318 18:00:30.913252 30278 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 18:00:30.913322 master-0 kubenswrapper[30278]: W0318 18:00:30.913257 30278 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 18:00:30.913322 master-0 kubenswrapper[30278]: W0318 18:00:30.913261 30278 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 18:00:30.913322 master-0 
kubenswrapper[30278]: W0318 18:00:30.913265 30278 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 18:00:30.913322 master-0 kubenswrapper[30278]: W0318 18:00:30.913292 30278 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 18:00:30.913322 master-0 kubenswrapper[30278]: W0318 18:00:30.913296 30278 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 18:00:30.913322 master-0 kubenswrapper[30278]: W0318 18:00:30.913299 30278 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 18:00:30.913322 master-0 kubenswrapper[30278]: W0318 18:00:30.913303 30278 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 18:00:30.913322 master-0 kubenswrapper[30278]: W0318 18:00:30.913308 30278 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 18 18:00:30.913322 master-0 kubenswrapper[30278]: W0318 18:00:30.913313 30278 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 18:00:30.913322 master-0 kubenswrapper[30278]: W0318 18:00:30.913317 30278 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 18:00:30.913322 master-0 kubenswrapper[30278]: W0318 18:00:30.913321 30278 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 18:00:30.913322 master-0 kubenswrapper[30278]: W0318 18:00:30.913326 30278 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913331 30278 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913335 30278 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913340 30278 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913345 30278 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913349 30278 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913353 30278 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913357 30278 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913361 30278 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913365 30278 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913369 30278 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913373 30278 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913376 30278 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913380 30278 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913384 30278 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913388 30278 feature_gate.go:330] unrecognized feature gate: 
MultiArchInstallAzure Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913392 30278 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913396 30278 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913400 30278 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 18:00:30.913671 master-0 kubenswrapper[30278]: W0318 18:00:30.913403 30278 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913407 30278 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913411 30278 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913415 30278 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913418 30278 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913422 30278 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913425 30278 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913429 30278 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913432 30278 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913436 30278 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 18:00:30.914145 master-0 
kubenswrapper[30278]: W0318 18:00:30.913440 30278 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913443 30278 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913447 30278 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913450 30278 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913454 30278 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913458 30278 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913461 30278 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913465 30278 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913468 30278 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913472 30278 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 18:00:30.914145 master-0 kubenswrapper[30278]: W0318 18:00:30.913476 30278 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913479 30278 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913483 30278 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 
18:00:30.913487 30278 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913490 30278 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913494 30278 feature_gate.go:330] unrecognized feature gate: Example Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913497 30278 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913501 30278 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913504 30278 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913508 30278 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913512 30278 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913515 30278 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913519 30278 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913522 30278 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913526 30278 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913529 30278 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913533 30278 
feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913536 30278 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913540 30278 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 18:00:30.914807 master-0 kubenswrapper[30278]: W0318 18:00:30.913544 30278 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 18:00:30.915379 master-0 kubenswrapper[30278]: I0318 18:00:30.913550 30278 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 18 18:00:30.915379 master-0 kubenswrapper[30278]: W0318 18:00:30.913655 30278 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 18:00:30.915379 master-0 kubenswrapper[30278]: W0318 18:00:30.913664 30278 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 18:00:30.915379 master-0 kubenswrapper[30278]: W0318 18:00:30.913668 30278 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 18:00:30.915379 master-0 kubenswrapper[30278]: W0318 18:00:30.913673 30278 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 18:00:30.915379 master-0 kubenswrapper[30278]: W0318 18:00:30.913679 30278 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 18:00:30.915379 master-0 kubenswrapper[30278]: W0318 18:00:30.913683 30278 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 18:00:30.915379 master-0 kubenswrapper[30278]: W0318 18:00:30.913688 30278 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 18:00:30.915379 master-0 kubenswrapper[30278]: W0318 18:00:30.913692 30278 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 18:00:30.915379 master-0 kubenswrapper[30278]: W0318 18:00:30.913696 30278 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 18:00:30.915379 master-0 kubenswrapper[30278]: W0318 18:00:30.913700 30278 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 18:00:30.915379 master-0 kubenswrapper[30278]: W0318 18:00:30.913704 30278 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 18:00:30.915379 master-0 kubenswrapper[30278]: W0318 18:00:30.913708 30278 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 18:00:30.915379 master-0 kubenswrapper[30278]: W0318 18:00:30.913711 30278 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 18:00:30.915379 master-0 kubenswrapper[30278]: W0318 18:00:30.913715 30278 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913718 30278 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913723 30278 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913726 30278 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 
18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913730 30278 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913733 30278 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913738 30278 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913742 30278 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913747 30278 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913751 30278 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913755 30278 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913759 30278 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913762 30278 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913766 30278 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913770 30278 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913773 30278 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913778 30278 feature_gate.go:353] Setting GA feature gate 
ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913784 30278 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913789 30278 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 18:00:30.915753 master-0 kubenswrapper[30278]: W0318 18:00:30.913793 30278 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913804 30278 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913811 30278 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913817 30278 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913823 30278 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913829 30278 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913833 30278 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913838 30278 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913842 30278 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913847 30278 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913853 30278 
feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913857 30278 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913861 30278 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913866 30278 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913870 30278 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913874 30278 feature_gate.go:330] unrecognized feature gate: Example Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913878 30278 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913882 30278 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913887 30278 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913891 30278 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 18:00:30.916252 master-0 kubenswrapper[30278]: W0318 18:00:30.913896 30278 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913900 30278 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913904 30278 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913908 30278 feature_gate.go:330] unrecognized 
feature gate: InsightsConfigAPI Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913913 30278 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913917 30278 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913921 30278 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913924 30278 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913928 30278 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913931 30278 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913935 30278 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913938 30278 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913942 30278 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913945 30278 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913949 30278 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913953 30278 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913956 30278 
feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913960 30278 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913964 30278 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 18:00:30.916780 master-0 kubenswrapper[30278]: W0318 18:00:30.913967 30278 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 18:00:30.917252 master-0 kubenswrapper[30278]: I0318 18:00:30.913974 30278 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 18 18:00:30.917252 master-0 kubenswrapper[30278]: I0318 18:00:30.914125 30278 server.go:940] "Client rotation is on, will bootstrap in background" Mar 18 18:00:30.917252 master-0 kubenswrapper[30278]: I0318 18:00:30.915527 30278 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Mar 18 18:00:30.917252 master-0 kubenswrapper[30278]: I0318 18:00:30.915596 30278 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 18 18:00:30.917252 master-0 kubenswrapper[30278]: I0318 18:00:30.915780 30278 server.go:997] "Starting client certificate rotation" Mar 18 18:00:30.917252 master-0 kubenswrapper[30278]: I0318 18:00:30.915789 30278 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 18 18:00:30.917252 master-0 kubenswrapper[30278]: I0318 18:00:30.916009 30278 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 17:31:47 +0000 UTC, rotation deadline is 2026-03-19 14:56:10.992985511 +0000 UTC Mar 18 18:00:30.917252 master-0 kubenswrapper[30278]: I0318 18:00:30.916091 30278 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h55m40.076896854s for next certificate rotation Mar 18 18:00:30.917252 master-0 kubenswrapper[30278]: I0318 18:00:30.916334 30278 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 18:00:30.918070 master-0 kubenswrapper[30278]: I0318 18:00:30.917532 30278 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 18:00:30.919876 master-0 kubenswrapper[30278]: I0318 18:00:30.919822 30278 log.go:25] "Validated CRI v1 runtime API" Mar 18 18:00:30.929445 master-0 kubenswrapper[30278]: I0318 18:00:30.928620 30278 log.go:25] "Validated CRI v1 image API" Mar 18 18:00:30.932601 master-0 kubenswrapper[30278]: I0318 18:00:30.929820 30278 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 18 18:00:30.940414 master-0 kubenswrapper[30278]: I0318 18:00:30.940346 30278 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 fad39e74-417f-48de-99cb-6a377eb68dd8:/dev/vda3] Mar 18 18:00:30.942787 master-0 kubenswrapper[30278]: I0318 18:00:30.940452 30278 fs.go:136] Filesystem partitions: 
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/016383ed2ea822809808dec1c74c3db939646679d52a777698739d705adae757/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/016383ed2ea822809808dec1c74c3db939646679d52a777698739d705adae757/userdata/shm major:0 minor:427 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/01d8f1f738d166015accb45a5a875b9da0577b0908a968320b9793f9dbe962a2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/01d8f1f738d166015accb45a5a875b9da0577b0908a968320b9793f9dbe962a2/userdata/shm major:0 minor:948 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/07ab0c66a64f7bf6d68ef0555d877888ab4c67aaec1ac0fea7f62d1ed0bed612/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/07ab0c66a64f7bf6d68ef0555d877888ab4c67aaec1ac0fea7f62d1ed0bed612/userdata/shm major:0 minor:564 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/14298257e1956a282ef61298797ea8ea8e4d9b9c2a924ea5f21c88394abce76c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/14298257e1956a282ef61298797ea8ea8e4d9b9c2a924ea5f21c88394abce76c/userdata/shm major:0 minor:796 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1add5afbf418952e0016f7866a470207154a949d28966174c8a7f5fa79ba0e1f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1add5afbf418952e0016f7866a470207154a949d28966174c8a7f5fa79ba0e1f/userdata/shm major:0 minor:134 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1ea3dbfa5dfb13332a0f1977477497e5220b4bba3727358399c90d2b8664c6d7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1ea3dbfa5dfb13332a0f1977477497e5220b4bba3727358399c90d2b8664c6d7/userdata/shm major:0 minor:89 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1efe23c09252f4c82f118ceb82a14b9f9f470b6a2eb0f4b9f30449b0d185550a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1efe23c09252f4c82f118ceb82a14b9f9f470b6a2eb0f4b9f30449b0d185550a/userdata/shm major:0 minor:559 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/22b260c86b95c080bc9989f63b5311a346d5ef3d9e462e33577fe76c4fe05c6d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/22b260c86b95c080bc9989f63b5311a346d5ef3d9e462e33577fe76c4fe05c6d/userdata/shm major:0 minor:355 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/230edebdcb314d25cf4af81ff75a06a2701ace4abbe260261cb0347a76dc2bd1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/230edebdcb314d25cf4af81ff75a06a2701ace4abbe260261cb0347a76dc2bd1/userdata/shm major:0 minor:812 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2753215bec4df07a683a29fd9db1d0ae5aeba0e6f73fa6fbc662ede34576fdd9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2753215bec4df07a683a29fd9db1d0ae5aeba0e6f73fa6fbc662ede34576fdd9/userdata/shm major:0 minor:563 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2939a6d3195afe0f356d31ab56455f8d084b2077c497baf972062cb08363566d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2939a6d3195afe0f356d31ab56455f8d084b2077c497baf972062cb08363566d/userdata/shm major:0 minor:498 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/36a9c5c55aaa067ac7414f9662835335c782889c32307de35102428e52f590c8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/36a9c5c55aaa067ac7414f9662835335c782889c32307de35102428e52f590c8/userdata/shm major:0 minor:993 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3850c530da1325c13b135240c71869228656f1ceff63510ab0a98443cee54a55/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3850c530da1325c13b135240c71869228656f1ceff63510ab0a98443cee54a55/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/39f34c1f903429d7c69072e5211db003fe4dc2847c946a6e7e2b74d4bd2e8ac8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/39f34c1f903429d7c69072e5211db003fe4dc2847c946a6e7e2b74d4bd2e8ac8/userdata/shm major:0 minor:216 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3b679ceb3c60d8555810f42293ecb4e72f346293b26bbcc64d5cc427efca2bcd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3b679ceb3c60d8555810f42293ecb4e72f346293b26bbcc64d5cc427efca2bcd/userdata/shm major:0 minor:340 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3d376cebaacd129d547a2b1f5d7c73be7200c80d9de53fd252db3ff4f06f931e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3d376cebaacd129d547a2b1f5d7c73be7200c80d9de53fd252db3ff4f06f931e/userdata/shm major:0 minor:139 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/41da80af31fef99194cfa8b9345b104ba93283b541371be7f518ffdcd5945af7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/41da80af31fef99194cfa8b9345b104ba93283b541371be7f518ffdcd5945af7/userdata/shm major:0 minor:413 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/506ad6c80a1b7c5f38a2826db8ee7d4115a8417001018ebd69e1309cf067fc6a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/506ad6c80a1b7c5f38a2826db8ee7d4115a8417001018ebd69e1309cf067fc6a/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/594d4a59acf0a0da5be4aa4bcad6deb49fd2749cf6065ab7e5a5a39d60f17265/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/594d4a59acf0a0da5be4aa4bcad6deb49fd2749cf6065ab7e5a5a39d60f17265/userdata/shm major:0 minor:829 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5d787dbd681a8506a724fcd5492ed061edac8e14293b27fcf81a68f92d0df82e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5d787dbd681a8506a724fcd5492ed061edac8e14293b27fcf81a68f92d0df82e/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/62f87c779c80aac58d08d6114e2c8cc2c2974d823d9538d2de8360d3c4243057/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/62f87c779c80aac58d08d6114e2c8cc2c2974d823d9538d2de8360d3c4243057/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6607dcf54fd176dc56698130f9297b2ab4381953d03d40abc0b2240c71f3820b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6607dcf54fd176dc56698130f9297b2ab4381953d03d40abc0b2240c71f3820b/userdata/shm major:0 minor:494 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/681e9cfa9d99b6787480ff89127df11d81327ab93296d6efacd157b94bbfa393/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/681e9cfa9d99b6787480ff89127df11d81327ab93296d6efacd157b94bbfa393/userdata/shm major:0 minor:269 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/6855c26bf134f973aca5b753cd9252cc1f86b218f035870b1dab49845cbadb56/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6855c26bf134f973aca5b753cd9252cc1f86b218f035870b1dab49845cbadb56/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/697aebe4c54170282f2900b6eb7950a2671c76c6eb51ac74def7ef20f0b63370/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/697aebe4c54170282f2900b6eb7950a2671c76c6eb51ac74def7ef20f0b63370/userdata/shm major:0 minor:807 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6bd8b74e410d81f6dbc5c2f014e72715199a5fa6c057d771fdb8890689635805/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6bd8b74e410d81f6dbc5c2f014e72715199a5fa6c057d771fdb8890689635805/userdata/shm major:0 minor:262 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/726dac522b338193798e05019afcc3525452535e3149d4a25e33142fc811a586/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/726dac522b338193798e05019afcc3525452535e3149d4a25e33142fc811a586/userdata/shm major:0 minor:809 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/796303c5f2a585dc8ab37c0a21b453aa0dd8797dea11dc3eee7c72e5dad9b158/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/796303c5f2a585dc8ab37c0a21b453aa0dd8797dea11dc3eee7c72e5dad9b158/userdata/shm major:0 minor:963 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7b2841761444793b373ed80c5f092794f38989726bcf53c2a969f325f8459b75/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7b2841761444793b373ed80c5f092794f38989726bcf53c2a969f325f8459b75/userdata/shm major:0 minor:95 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/7e0345d8f514108b800a0c4627bc3a13dd0326586f06b4e1904eb81090cc64aa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7e0345d8f514108b800a0c4627bc3a13dd0326586f06b4e1904eb81090cc64aa/userdata/shm major:0 minor:816 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/819894978d4b63b70f3c5ba05beeaf66b4fdd7279c891272a2e358b0b8143717/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/819894978d4b63b70f3c5ba05beeaf66b4fdd7279c891272a2e358b0b8143717/userdata/shm major:0 minor:916 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8381cd7a6e5c885500f3bdd0849aefb2b5f39ab2f05f498f742ce3eacc790c78/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8381cd7a6e5c885500f3bdd0849aefb2b5f39ab2f05f498f742ce3eacc790c78/userdata/shm major:0 minor:560 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/86bb0fefbe9a7075d6c0212cf27e6d83a749aa0d66749340ff4d2f7ce29488f0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/86bb0fefbe9a7075d6c0212cf27e6d83a749aa0d66749340ff4d2f7ce29488f0/userdata/shm major:0 minor:558 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/89705fd182a90dfe140ac5efc8c14b16140f0a05f824bdb1f27db7295abcee76/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/89705fd182a90dfe140ac5efc8c14b16140f0a05f824bdb1f27db7295abcee76/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8a589501a96ed1e6f8752cc00ece99aa42162ad128546ec6cfe89722a04ec5b1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8a589501a96ed1e6f8752cc00ece99aa42162ad128546ec6cfe89722a04ec5b1/userdata/shm major:0 minor:412 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8d76a48b181c0cd15d1de5c39a3bc3d9f330bf1dff375bce677cfee095393ae6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8d76a48b181c0cd15d1de5c39a3bc3d9f330bf1dff375bce677cfee095393ae6/userdata/shm major:0 minor:247 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/90cc2b02445555cd2d532e865fff8c504dc1d3510b60d980449ac43b37071918/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/90cc2b02445555cd2d532e865fff8c504dc1d3510b60d980449ac43b37071918/userdata/shm major:0 minor:919 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/93f149a1ecb7aaccb9bdce489447440893c003702d0a6409833391c55955f7eb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/93f149a1ecb7aaccb9bdce489447440893c003702d0a6409833391c55955f7eb/userdata/shm major:0 minor:362 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/989b2fdc7ef152f9acfe916ef0d60955c7235426f6fe1dd2fc891166e785e105/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/989b2fdc7ef152f9acfe916ef0d60955c7235426f6fe1dd2fc891166e785e105/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/98a2fd574e075391d5a514f212989330aab4c8ffe303103d815d81e2f13e5d87/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/98a2fd574e075391d5a514f212989330aab4c8ffe303103d815d81e2f13e5d87/userdata/shm major:0 minor:874 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9c6ba19a43312e7426d156208bc0c31b36ee526eb8006e7186d5ea94923d4e9f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9c6ba19a43312e7426d156208bc0c31b36ee526eb8006e7186d5ea94923d4e9f/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a06a3f0fb54d1869684741c01721cbf6af520d75473205b84e908f306a368b3a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a06a3f0fb54d1869684741c01721cbf6af520d75473205b84e908f306a368b3a/userdata/shm major:0 minor:257 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a0f14825defb92b50c4747c20631ca30f9e30632027bb38a918f6a6a14b5c095/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a0f14825defb92b50c4747c20631ca30f9e30632027bb38a918f6a6a14b5c095/userdata/shm major:0 minor:309 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a94ad7630ca05ccfa8a345aad202de39848166c994875cfad1d5875137f9cf66/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a94ad7630ca05ccfa8a345aad202de39848166c994875cfad1d5875137f9cf66/userdata/shm major:0 minor:115 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a9a9d675b5bc654d44d972fe5be99d008e180b13cd245216bdc5bd95af4fe020/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a9a9d675b5bc654d44d972fe5be99d008e180b13cd245216bdc5bd95af4fe020/userdata/shm major:0 minor:805 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b4db07afd1a03d8c1456d9bd3e2fc4e66947bcaa942aef9864e3ed3e54889795/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b4db07afd1a03d8c1456d9bd3e2fc4e66947bcaa942aef9864e3ed3e54889795/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b5e733421a5534241a408f87a9d1282c96549651c461bd5bf9e9c1999c97d9e5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b5e733421a5534241a408f87a9d1282c96549651c461bd5bf9e9c1999c97d9e5/userdata/shm major:0 minor:255 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b84bd85aac3ddf41b65c4a3ee28624adfec16e2d4dd19c154137ff1a28ded42b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b84bd85aac3ddf41b65c4a3ee28624adfec16e2d4dd19c154137ff1a28ded42b/userdata/shm major:0 minor:723 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c23a831f572d860a391d4d959c13e33c442846ac9ce5af54ffdc6e3a90052296/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c23a831f572d860a391d4d959c13e33c442846ac9ce5af54ffdc6e3a90052296/userdata/shm major:0 minor:895 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c73523c110a89aa2ec5b986dce6527591a38ece4a4afaf4032ec9cf612257a34/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c73523c110a89aa2ec5b986dce6527591a38ece4a4afaf4032ec9cf612257a34/userdata/shm major:0 minor:370 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c9ef5e66c74bafc259dc619a6d19d1eda5f874894c689b2f23043bfdee6a39c1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c9ef5e66c74bafc259dc619a6d19d1eda5f874894c689b2f23043bfdee6a39c1/userdata/shm major:0 minor:319 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cc45ef13b745a7538de0764bc9063fe610d54078c6f17e39280d0e2b21ebeeb0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cc45ef13b745a7538de0764bc9063fe610d54078c6f17e39280d0e2b21ebeeb0/userdata/shm major:0 minor:421 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ce5639dc0f602d1c7e6ad6fc44e82114cfe133ad8a9de1890037405180569936/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ce5639dc0f602d1c7e6ad6fc44e82114cfe133ad8a9de1890037405180569936/userdata/shm major:0 minor:408 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/cfbf03c8cc7b89c553e9ea829ef567259d08d9f435265881b903a1b99dfdd65c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cfbf03c8cc7b89c553e9ea829ef567259d08d9f435265881b903a1b99dfdd65c/userdata/shm major:0 minor:513 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d32f636809075be6cf635b9dbbf658143a67ef27c719c0247cb93d87c34ccc46/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d32f636809075be6cf635b9dbbf658143a67ef27c719c0247cb93d87c34ccc46/userdata/shm major:0 minor:918 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d451cc909e96cb90161ef2054b945e5cb54cff4fe2886dd65033c87b6a8fe884/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d451cc909e96cb90161ef2054b945e5cb54cff4fe2886dd65033c87b6a8fe884/userdata/shm major:0 minor:245 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d5c2063a6c515ba48297abcc083d96a0ee10588d1979ea68a5e48cdf4d96c90e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d5c2063a6c515ba48297abcc083d96a0ee10588d1979ea68a5e48cdf4d96c90e/userdata/shm major:0 minor:277 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/db5aec9e61cde7bec4f42fa13ed7d132af8e2baca532c12d1638296fcf06dd34/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/db5aec9e61cde7bec4f42fa13ed7d132af8e2baca532c12d1638296fcf06dd34/userdata/shm major:0 minor:104 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dd1b805aae172e18f337dd45784c075e0ad3687afa3a8879338aa90a6a42ed54/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dd1b805aae172e18f337dd45784c075e0ad3687afa3a8879338aa90a6a42ed54/userdata/shm major:0 minor:415 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/dfc93735e306184cc4596c59d2bb37e97390ba2f327b3655dd96eec7dc58139e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dfc93735e306184cc4596c59d2bb37e97390ba2f327b3655dd96eec7dc58139e/userdata/shm major:0 minor:562 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e03411b7c2bf8367c968ed1adc09d9fecafd420d1122805ea765d466352e23a3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e03411b7c2bf8367c968ed1adc09d9fecafd420d1122805ea765d466352e23a3/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e34a7d43723491c0ffb4df04571420d726ec22d80fe5f50be4255c5ba300c922/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e34a7d43723491c0ffb4df04571420d726ec22d80fe5f50be4255c5ba300c922/userdata/shm major:0 minor:722 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f7dc5373fa76e1da12d58e0de7c6eb4b3bc82471bd7a410a252fcb24df6cb1d6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f7dc5373fa76e1da12d58e0de7c6eb4b3bc82471bd7a410a252fcb24df6cb1d6/userdata/shm major:0 minor:827 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f975ed7e1c1dcf64feeba9dd4dfc173ec9be8b509e8d2f868a326c611d5b7d2d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f975ed7e1c1dcf64feeba9dd4dfc173ec9be8b509e8d2f868a326c611d5b7d2d/userdata/shm major:0 minor:307 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fecfc938509f77a7c6b0246891b9f62fa9cb5c8d24c6ae113e36e04682301649/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fecfc938509f77a7c6b0246891b9f62fa9cb5c8d24c6ae113e36e04682301649/userdata/shm major:0 minor:428 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~projected/kube-api-access-tnknt:{mountpoint:/var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~projected/kube-api-access-tnknt major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~secret/etcd-client major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04cef0bd-f365-4bf6-864a-1895995015d6/volumes/kubernetes.io~projected/kube-api-access-qlhls:{mountpoint:/var/lib/kubelet/pods/04cef0bd-f365-4bf6-864a-1895995015d6/volumes/kubernetes.io~projected/kube-api-access-qlhls major:0 minor:789 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0751c002-fe0e-4f13-bb9c-9accd8ca0df3/volumes/kubernetes.io~projected/kube-api-access-njx6n:{mountpoint:/var/lib/kubelet/pods/0751c002-fe0e-4f13-bb9c-9accd8ca0df3/volumes/kubernetes.io~projected/kube-api-access-njx6n major:0 minor:788 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0751c002-fe0e-4f13-bb9c-9accd8ca0df3/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/0751c002-fe0e-4f13-bb9c-9accd8ca0df3/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:778 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0b9ff55a-73fb-473f-b406-1f8b6cffdb89/volumes/kubernetes.io~projected/kube-api-access-2tvgq:{mountpoint:/var/lib/kubelet/pods/0b9ff55a-73fb-473f-b406-1f8b6cffdb89/volumes/kubernetes.io~projected/kube-api-access-2tvgq major:0 minor:214 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0b9ff55a-73fb-473f-b406-1f8b6cffdb89/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0b9ff55a-73fb-473f-b406-1f8b6cffdb89/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14a0661b-7bde-4e22-a9a9-5e3fb24df77f/volumes/kubernetes.io~projected/kube-api-access-2pqww:{mountpoint:/var/lib/kubelet/pods/14a0661b-7bde-4e22-a9a9-5e3fb24df77f/volumes/kubernetes.io~projected/kube-api-access-2pqww major:0 minor:92 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14a0661b-7bde-4e22-a9a9-5e3fb24df77f/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/14a0661b-7bde-4e22-a9a9-5e3fb24df77f/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d969530-c138-4fb7-9bfe-0825be66c009/volumes/kubernetes.io~projected/kube-api-access-cd868:{mountpoint:/var/lib/kubelet/pods/1d969530-c138-4fb7-9bfe-0825be66c009/volumes/kubernetes.io~projected/kube-api-access-cd868 major:0 minor:275 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1db0a246-ca43-4e7c-b09e-e80218ae99b1/volumes/kubernetes.io~projected/kube-api-access-n9g8f:{mountpoint:/var/lib/kubelet/pods/1db0a246-ca43-4e7c-b09e-e80218ae99b1/volumes/kubernetes.io~projected/kube-api-access-n9g8f major:0 minor:719 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1db0a246-ca43-4e7c-b09e-e80218ae99b1/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1db0a246-ca43-4e7c-b09e-e80218ae99b1/volumes/kubernetes.io~secret/serving-cert major:0 minor:714 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/253ec853-f637-4aa4-8e8e-eb655dfccccb/volumes/kubernetes.io~projected/kube-api-access-cx596:{mountpoint:/var/lib/kubelet/pods/253ec853-f637-4aa4-8e8e-eb655dfccccb/volumes/kubernetes.io~projected/kube-api-access-cx596 major:0 minor:718 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/253ec853-f637-4aa4-8e8e-eb655dfccccb/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/253ec853-f637-4aa4-8e8e-eb655dfccccb/volumes/kubernetes.io~secret/serving-cert major:0 minor:409 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/26575d68-0488-4dfa-a5d0-5016e481dba6/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/26575d68-0488-4dfa-a5d0-5016e481dba6/volumes/kubernetes.io~projected/kube-api-access major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/26575d68-0488-4dfa-a5d0-5016e481dba6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/26575d68-0488-4dfa-a5d0-5016e481dba6/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d/volumes/kubernetes.io~projected/kube-api-access-fc27m:{mountpoint:/var/lib/kubelet/pods/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d/volumes/kubernetes.io~projected/kube-api-access-fc27m major:0 minor:792 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/30d77a7c-222e-41c7-8a98-219854aa3da2/volumes/kubernetes.io~projected/kube-api-access-x47z7:{mountpoint:/var/lib/kubelet/pods/30d77a7c-222e-41c7-8a98-219854aa3da2/volumes/kubernetes.io~projected/kube-api-access-x47z7 major:0 minor:474 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/30d77a7c-222e-41c7-8a98-219854aa3da2/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/30d77a7c-222e-41c7-8a98-219854aa3da2/volumes/kubernetes.io~secret/encryption-config major:0 minor:487 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/30d77a7c-222e-41c7-8a98-219854aa3da2/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/30d77a7c-222e-41c7-8a98-219854aa3da2/volumes/kubernetes.io~secret/etcd-client major:0 minor:486 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/30d77a7c-222e-41c7-8a98-219854aa3da2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/30d77a7c-222e-41c7-8a98-219854aa3da2/volumes/kubernetes.io~secret/serving-cert major:0 minor:491 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/37b3753f-bf4f-4a9e-a4a8-d58296bada79/volumes/kubernetes.io~projected/kube-api-access-zwlxb:{mountpoint:/var/lib/kubelet/pods/37b3753f-bf4f-4a9e-a4a8-d58296bada79/volumes/kubernetes.io~projected/kube-api-access-zwlxb major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/37b3753f-bf4f-4a9e-a4a8-d58296bada79/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/37b3753f-bf4f-4a9e-a4a8-d58296bada79/volumes/kubernetes.io~secret/cert major:0 minor:540 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/37b3753f-bf4f-4a9e-a4a8-d58296bada79/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/37b3753f-bf4f-4a9e-a4a8-d58296bada79/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:541 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a3a6c2c-78e7-41f3-acff-20173cbc012a/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/3a3a6c2c-78e7-41f3-acff-20173cbc012a/volumes/kubernetes.io~projected/kube-api-access major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a3a6c2c-78e7-41f3-acff-20173cbc012a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3a3a6c2c-78e7-41f3-acff-20173cbc012a/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/427e5ce9-f4b3-4f12-bb77-2b13775aa334/volumes/kubernetes.io~projected/kube-api-access-z5jd4:{mountpoint:/var/lib/kubelet/pods/427e5ce9-f4b3-4f12-bb77-2b13775aa334/volumes/kubernetes.io~projected/kube-api-access-z5jd4 major:0 minor:549 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/43fab0f2-5cfd-4b5e-a632-728fd5b960fd/volumes/kubernetes.io~projected/kube-api-access-rsj86:{mountpoint:/var/lib/kubelet/pods/43fab0f2-5cfd-4b5e-a632-728fd5b960fd/volumes/kubernetes.io~projected/kube-api-access-rsj86 major:0 minor:553 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43fab0f2-5cfd-4b5e-a632-728fd5b960fd/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/43fab0f2-5cfd-4b5e-a632-728fd5b960fd/volumes/kubernetes.io~secret/encryption-config major:0 minor:530 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43fab0f2-5cfd-4b5e-a632-728fd5b960fd/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/43fab0f2-5cfd-4b5e-a632-728fd5b960fd/volumes/kubernetes.io~secret/etcd-client major:0 minor:536 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43fab0f2-5cfd-4b5e-a632-728fd5b960fd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/43fab0f2-5cfd-4b5e-a632-728fd5b960fd/volumes/kubernetes.io~secret/serving-cert major:0 minor:535 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4460d3d3-c55f-4f1c-a623-e3feccf937bb/volumes/kubernetes.io~projected/kube-api-access-2tskm:{mountpoint:/var/lib/kubelet/pods/4460d3d3-c55f-4f1c-a623-e3feccf937bb/volumes/kubernetes.io~projected/kube-api-access-2tskm major:0 minor:155 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/489dd872-39c3-4ce2-8dc1-9d0552b88616/volumes/kubernetes.io~projected/kube-api-access-wjtg7:{mountpoint:/var/lib/kubelet/pods/489dd872-39c3-4ce2-8dc1-9d0552b88616/volumes/kubernetes.io~projected/kube-api-access-wjtg7 major:0 minor:797 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/56cde2f7-1742-45d6-aa22-8270cfb424a7/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/56cde2f7-1742-45d6-aa22-8270cfb424a7/volumes/kubernetes.io~projected/ca-certs major:0 minor:554 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/56cde2f7-1742-45d6-aa22-8270cfb424a7/volumes/kubernetes.io~projected/kube-api-access-mbctm:{mountpoint:/var/lib/kubelet/pods/56cde2f7-1742-45d6-aa22-8270cfb424a7/volumes/kubernetes.io~projected/kube-api-access-mbctm major:0 minor:555 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/56cde2f7-1742-45d6-aa22-8270cfb424a7/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/56cde2f7-1742-45d6-aa22-8270cfb424a7/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:458 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59407fdf-b1e9-4992-a3c8-54b4e26f496c/volumes/kubernetes.io~projected/kube-api-access-9dt8f:{mountpoint:/var/lib/kubelet/pods/59407fdf-b1e9-4992-a3c8-54b4e26f496c/volumes/kubernetes.io~projected/kube-api-access-9dt8f major:0 minor:488 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59407fdf-b1e9-4992-a3c8-54b4e26f496c/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/59407fdf-b1e9-4992-a3c8-54b4e26f496c/volumes/kubernetes.io~secret/metrics-tls major:0 minor:508 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a4f94f3-d63a-4869-b723-ae9637610b4b/volumes/kubernetes.io~projected/kube-api-access-hgnz6:{mountpoint:/var/lib/kubelet/pods/5a4f94f3-d63a-4869-b723-ae9637610b4b/volumes/kubernetes.io~projected/kube-api-access-hgnz6 major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a4f94f3-d63a-4869-b723-ae9637610b4b/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/5a4f94f3-d63a-4869-b723-ae9637610b4b/volumes/kubernetes.io~secret/metrics-certs major:0 minor:457 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5b0e38f3-3ab5-4519-86a6-68003deb94da/volumes/kubernetes.io~projected/kube-api-access-grnqn:{mountpoint:/var/lib/kubelet/pods/5b0e38f3-3ab5-4519-86a6-68003deb94da/volumes/kubernetes.io~projected/kube-api-access-grnqn major:0 minor:99 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6f26e239-2988-4faa-bc1d-24b15b95b7f1/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/6f26e239-2988-4faa-bc1d-24b15b95b7f1/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6f26e239-2988-4faa-bc1d-24b15b95b7f1/volumes/kubernetes.io~projected/kube-api-access-5sl7p:{mountpoint:/var/lib/kubelet/pods/6f26e239-2988-4faa-bc1d-24b15b95b7f1/volumes/kubernetes.io~projected/kube-api-access-5sl7p major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6f26e239-2988-4faa-bc1d-24b15b95b7f1/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/6f26e239-2988-4faa-bc1d-24b15b95b7f1/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:419 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7047a862-8cbe-46fb-9af3-06ba224cbe26/volumes/kubernetes.io~projected/kube-api-access-4g42g:{mountpoint:/var/lib/kubelet/pods/7047a862-8cbe-46fb-9af3-06ba224cbe26/volumes/kubernetes.io~projected/kube-api-access-4g42g major:0 minor:445 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b94e08c-7944-445e-bfb7-6c7c14880c65/volumes/kubernetes.io~projected/kube-api-access-g4zcv:{mountpoint:/var/lib/kubelet/pods/7b94e08c-7944-445e-bfb7-6c7c14880c65/volumes/kubernetes.io~projected/kube-api-access-g4zcv major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b94e08c-7944-445e-bfb7-6c7c14880c65/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/7b94e08c-7944-445e-bfb7-6c7c14880c65/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7c6694a8-ccd0-491b-9f21-215450f6ce67/volumes/kubernetes.io~projected/kube-api-access-mrdqg:{mountpoint:/var/lib/kubelet/pods/7c6694a8-ccd0-491b-9f21-215450f6ce67/volumes/kubernetes.io~projected/kube-api-access-mrdqg major:0 minor:268 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7c6694a8-ccd0-491b-9f21-215450f6ce67/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/7c6694a8-ccd0-491b-9f21-215450f6ce67/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:420 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7c6694a8-ccd0-491b-9f21-215450f6ce67/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/7c6694a8-ccd0-491b-9f21-215450f6ce67/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:440 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7d39d93e-9be3-47e1-a44e-be2d18b55446/volumes/kubernetes.io~projected/kube-api-access-vkcx9:{mountpoint:/var/lib/kubelet/pods/7d39d93e-9be3-47e1-a44e-be2d18b55446/volumes/kubernetes.io~projected/kube-api-access-vkcx9 major:0 minor:320 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7d72bb42-1ee6-4f61-9515-d1c5bafa896f/volumes/kubernetes.io~projected/kube-api-access-ljbl7:{mountpoint:/var/lib/kubelet/pods/7d72bb42-1ee6-4f61-9515-d1c5bafa896f/volumes/kubernetes.io~projected/kube-api-access-ljbl7 major:0 minor:915 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e64a377-f497-4416-8f22-d5c7f52e0b65/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/7e64a377-f497-4416-8f22-d5c7f52e0b65/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e64a377-f497-4416-8f22-d5c7f52e0b65/volumes/kubernetes.io~projected/kube-api-access-sclm5:{mountpoint:/var/lib/kubelet/pods/7e64a377-f497-4416-8f22-d5c7f52e0b65/volumes/kubernetes.io~projected/kube-api-access-sclm5 major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e64a377-f497-4416-8f22-d5c7f52e0b65/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/7e64a377-f497-4416-8f22-d5c7f52e0b65/volumes/kubernetes.io~secret/metrics-tls major:0 minor:417 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/822080a5-2926-4a51-866d-86bb0b437da2/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/822080a5-2926-4a51-866d-86bb0b437da2/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:674 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/822080a5-2926-4a51-866d-86bb0b437da2/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/822080a5-2926-4a51-866d-86bb0b437da2/volumes/kubernetes.io~empty-dir/tmp major:0 minor:670 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/822080a5-2926-4a51-866d-86bb0b437da2/volumes/kubernetes.io~projected/kube-api-access-f48gg:{mountpoint:/var/lib/kubelet/pods/822080a5-2926-4a51-866d-86bb0b437da2/volumes/kubernetes.io~projected/kube-api-access-f48gg major:0 minor:675 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89e6c3d6-7bd5-4df6-90db-3a349f644afb/volumes/kubernetes.io~projected/kube-api-access-88hkw:{mountpoint:/var/lib/kubelet/pods/89e6c3d6-7bd5-4df6-90db-3a349f644afb/volumes/kubernetes.io~projected/kube-api-access-88hkw major:0 minor:894 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89e6c3d6-7bd5-4df6-90db-3a349f644afb/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/89e6c3d6-7bd5-4df6-90db-3a349f644afb/volumes/kubernetes.io~secret/proxy-tls major:0 minor:891 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311/volumes/kubernetes.io~projected/kube-api-access-clm4b:{mountpoint:/var/lib/kubelet/pods/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311/volumes/kubernetes.io~projected/kube-api-access-clm4b major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:539 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8db04037-c7cc-4246-92c3-6e7985384b14/volumes/kubernetes.io~projected/kube-api-access-fglbh:{mountpoint:/var/lib/kubelet/pods/8db04037-c7cc-4246-92c3-6e7985384b14/volumes/kubernetes.io~projected/kube-api-access-fglbh major:0 minor:808 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8db04037-c7cc-4246-92c3-6e7985384b14/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/8db04037-c7cc-4246-92c3-6e7985384b14/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:804 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8db04037-c7cc-4246-92c3-6e7985384b14/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/8db04037-c7cc-4246-92c3-6e7985384b14/volumes/kubernetes.io~secret/webhook-cert major:0 minor:802 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/92153864-7959-4482-bf24-c8db36435fb5/volumes/kubernetes.io~projected/kube-api-access-sb496:{mountpoint:/var/lib/kubelet/pods/92153864-7959-4482-bf24-c8db36435fb5/volumes/kubernetes.io~projected/kube-api-access-sb496 major:0 minor:784 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/978dcca6-b396-463f-9614-9e24194a1aaa/volumes/kubernetes.io~projected/kube-api-access-5s6f5:{mountpoint:/var/lib/kubelet/pods/978dcca6-b396-463f-9614-9e24194a1aaa/volumes/kubernetes.io~projected/kube-api-access-5s6f5 major:0 minor:304 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9875ed82-813c-483d-8471-8f9b74b774ee/volumes/kubernetes.io~projected/kube-api-access-76j8w:{mountpoint:/var/lib/kubelet/pods/9875ed82-813c-483d-8471-8f9b74b774ee/volumes/kubernetes.io~projected/kube-api-access-76j8w major:0 minor:143 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9875ed82-813c-483d-8471-8f9b74b774ee/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/9875ed82-813c-483d-8471-8f9b74b774ee/volumes/kubernetes.io~secret/webhook-cert major:0 minor:138 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volumes/kubernetes.io~projected/kube-api-access-9lwsm:{mountpoint:/var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volumes/kubernetes.io~projected/kube-api-access-9lwsm major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99e215da-759d-4fff-af65-0fb64245fbd0/volumes/kubernetes.io~projected/kube-api-access-n8k5q:{mountpoint:/var/lib/kubelet/pods/99e215da-759d-4fff-af65-0fb64245fbd0/volumes/kubernetes.io~projected/kube-api-access-n8k5q major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99e215da-759d-4fff-af65-0fb64245fbd0/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/99e215da-759d-4fff-af65-0fb64245fbd0/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/volumes/kubernetes.io~projected/kube-api-access-l5tw2:{mountpoint:/var/lib/kubelet/pods/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/volumes/kubernetes.io~projected/kube-api-access-l5tw2 major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9b424d6c-7440-4c98-ac19-2d0642c696fd/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/9b424d6c-7440-4c98-ac19-2d0642c696fd/volumes/kubernetes.io~projected/kube-api-access major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9b424d6c-7440-4c98-ac19-2d0642c696fd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9b424d6c-7440-4c98-ac19-2d0642c696fd/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98/volumes/kubernetes.io~projected/kube-api-access-qbdth:{mountpoint:/var/lib/kubelet/pods/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98/volumes/kubernetes.io~projected/kube-api-access-qbdth major:0 minor:958 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:954 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9e2d0d0d-54ca-475b-be8a-4eb6d4434e74/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/9e2d0d0d-54ca-475b-be8a-4eb6d4434e74/volumes/kubernetes.io~secret/tls-certificates major:0 minor:907 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a94f7bff-ad61-4c53-a8eb-000a13f26971/volumes/kubernetes.io~projected/kube-api-access-5xvzx:{mountpoint:/var/lib/kubelet/pods/a94f7bff-ad61-4c53-a8eb-000a13f26971/volumes/kubernetes.io~projected/kube-api-access-5xvzx major:0 minor:790 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b1352cc7-4099-44c5-9c31-8259fb783bc7/volumes/kubernetes.io~projected/kube-api-access-9pp5f:{mountpoint:/var/lib/kubelet/pods/b1352cc7-4099-44c5-9c31-8259fb783bc7/volumes/kubernetes.io~projected/kube-api-access-9pp5f major:0 minor:240 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b1352cc7-4099-44c5-9c31-8259fb783bc7/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/b1352cc7-4099-44c5-9c31-8259fb783bc7/volumes/kubernetes.io~secret/metrics-tls major:0 minor:418 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3385316-45f0-46c5-ac82-683168db5878/volumes/kubernetes.io~projected/kube-api-access-wd9sc:{mountpoint:/var/lib/kubelet/pods/b3385316-45f0-46c5-ac82-683168db5878/volumes/kubernetes.io~projected/kube-api-access-wd9sc major:0 minor:947 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3385316-45f0-46c5-ac82-683168db5878/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/b3385316-45f0-46c5-ac82-683168db5878/volumes/kubernetes.io~secret/certs major:0 minor:939 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3385316-45f0-46c5-ac82-683168db5878/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/b3385316-45f0-46c5-ac82-683168db5878/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:938 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c087ce06-a16b-41f4-ba93-8fccdee09003/volumes/kubernetes.io~projected/kube-api-access-789k6:{mountpoint:/var/lib/kubelet/pods/c087ce06-a16b-41f4-ba93-8fccdee09003/volumes/kubernetes.io~projected/kube-api-access-789k6 major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c087ce06-a16b-41f4-ba93-8fccdee09003/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c087ce06-a16b-41f4-ba93-8fccdee09003/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3267271-e0c5-45d6-980c-d78e4f9eef35/volumes/kubernetes.io~projected/kube-api-access-z7xqg:{mountpoint:/var/lib/kubelet/pods/c3267271-e0c5-45d6-980c-d78e4f9eef35/volumes/kubernetes.io~projected/kube-api-access-z7xqg major:0 minor:826 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c3267271-e0c5-45d6-980c-d78e4f9eef35/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/c3267271-e0c5-45d6-980c-d78e4f9eef35/volumes/kubernetes.io~secret/proxy-tls major:0 minor:814 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c355c750-ae2f-49fa-9a16-8fb4f688853e/volumes/kubernetes.io~projected/kube-api-access-zfnqp:{mountpoint:/var/lib/kubelet/pods/c355c750-ae2f-49fa-9a16-8fb4f688853e/volumes/kubernetes.io~projected/kube-api-access-zfnqp major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c355c750-ae2f-49fa-9a16-8fb4f688853e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c355c750-ae2f-49fa-9a16-8fb4f688853e/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c38c5f03-a753-49f4-ab06-33e75a03bd45/volumes/kubernetes.io~projected/kube-api-access-d8d74:{mountpoint:/var/lib/kubelet/pods/c38c5f03-a753-49f4-ab06-33e75a03bd45/volumes/kubernetes.io~projected/kube-api-access-d8d74 major:0 minor:785 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c38c5f03-a753-49f4-ab06-33e75a03bd45/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/c38c5f03-a753-49f4-ab06-33e75a03bd45/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:783 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c57f282a-829b-41b2-827a-f4bc598245a2/volumes/kubernetes.io~projected/kube-api-access-d6c68:{mountpoint:/var/lib/kubelet/pods/c57f282a-829b-41b2-827a-f4bc598245a2/volumes/kubernetes.io~projected/kube-api-access-d6c68 major:0 minor:914 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c57f282a-829b-41b2-827a-f4bc598245a2/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/c57f282a-829b-41b2-827a-f4bc598245a2/volumes/kubernetes.io~secret/default-certificate major:0 minor:912 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c57f282a-829b-41b2-827a-f4bc598245a2/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/c57f282a-829b-41b2-827a-f4bc598245a2/volumes/kubernetes.io~secret/metrics-certs major:0 minor:913 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c57f282a-829b-41b2-827a-f4bc598245a2/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/c57f282a-829b-41b2-827a-f4bc598245a2/volumes/kubernetes.io~secret/stats-auth major:0 minor:911 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cb522b02-0b93-4711-9041-566daa06b95a/volumes/kubernetes.io~projected/kube-api-access-fk59q:{mountpoint:/var/lib/kubelet/pods/cb522b02-0b93-4711-9041-566daa06b95a/volumes/kubernetes.io~projected/kube-api-access-fk59q major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cb522b02-0b93-4711-9041-566daa06b95a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/cb522b02-0b93-4711-9041-566daa06b95a/volumes/kubernetes.io~secret/serving-cert major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce5831a6-5a8d-4cda-9299-5d86437bcab2/volumes/kubernetes.io~projected/kube-api-access-756j8:{mountpoint:/var/lib/kubelet/pods/ce5831a6-5a8d-4cda-9299-5d86437bcab2/volumes/kubernetes.io~projected/kube-api-access-756j8 major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce5831a6-5a8d-4cda-9299-5d86437bcab2/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/ce5831a6-5a8d-4cda-9299-5d86437bcab2/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:544 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d26d4515-391e-41a5-8c82-1b2b8a375662/volumes/kubernetes.io~projected/kube-api-access-bm8jj:{mountpoint:/var/lib/kubelet/pods/d26d4515-391e-41a5-8c82-1b2b8a375662/volumes/kubernetes.io~projected/kube-api-access-bm8jj major:0 minor:265 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d26d4515-391e-41a5-8c82-1b2b8a375662/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/d26d4515-391e-41a5-8c82-1b2b8a375662/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:537 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d4c75bee-d0d2-4261-8f89-8c3375dbd868/volumes/kubernetes.io~projected/kube-api-access-bz8rf:{mountpoint:/var/lib/kubelet/pods/d4c75bee-d0d2-4261-8f89-8c3375dbd868/volumes/kubernetes.io~projected/kube-api-access-bz8rf major:0 minor:793 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d4c75bee-d0d2-4261-8f89-8c3375dbd868/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d4c75bee-d0d2-4261-8f89-8c3375dbd868/volumes/kubernetes.io~secret/serving-cert major:0 minor:791 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dba5f8d7-4d25-42b5-9c58-813221bf96bb/volumes/kubernetes.io~projected/kube-api-access-lmsm4:{mountpoint:/var/lib/kubelet/pods/dba5f8d7-4d25-42b5-9c58-813221bf96bb/volumes/kubernetes.io~projected/kube-api-access-lmsm4 major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dc110414-3a6b-474c-bce3-33450cab8fcd/volumes/kubernetes.io~projected/kube-api-access-mnl7c:{mountpoint:/var/lib/kubelet/pods/dc110414-3a6b-474c-bce3-33450cab8fcd/volumes/kubernetes.io~projected/kube-api-access-mnl7c major:0 minor:811 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/de189d27-4c60-49f1-9119-d1fde5c37b1e/volumes/kubernetes.io~projected/kube-api-access-tf476:{mountpoint:/var/lib/kubelet/pods/de189d27-4c60-49f1-9119-d1fde5c37b1e/volumes/kubernetes.io~projected/kube-api-access-tf476 major:0 minor:787 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e0e04440-c08b-452d-9be6-9f70a4027c92/volumes/kubernetes.io~projected/kube-api-access-767c7:{mountpoint:/var/lib/kubelet/pods/e0e04440-c08b-452d-9be6-9f70a4027c92/volumes/kubernetes.io~projected/kube-api-access-767c7 major:0 
minor:786 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e73f2834-c56c-4cef-ac3c-2317e9a4324c/volumes/kubernetes.io~projected/kube-api-access-qwps9:{mountpoint:/var/lib/kubelet/pods/e73f2834-c56c-4cef-ac3c-2317e9a4324c/volumes/kubernetes.io~projected/kube-api-access-qwps9 major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e73f2834-c56c-4cef-ac3c-2317e9a4324c/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/e73f2834-c56c-4cef-ac3c-2317e9a4324c/volumes/kubernetes.io~secret/srv-cert major:0 minor:542 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e7f76afa-4b23-421c-8451-46323813f06e/volumes/kubernetes.io~projected/kube-api-access-gzhsq:{mountpoint:/var/lib/kubelet/pods/e7f76afa-4b23-421c-8451-46323813f06e/volumes/kubernetes.io~projected/kube-api-access-gzhsq major:0 minor:992 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e7f76afa-4b23-421c-8451-46323813f06e/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/e7f76afa-4b23-421c-8451-46323813f06e/volumes/kubernetes.io~secret/webhook-certs major:0 minor:987 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9e04572-1425-440e-9869-6deef05e13e3/volumes/kubernetes.io~projected/kube-api-access-t92bz:{mountpoint:/var/lib/kubelet/pods/e9e04572-1425-440e-9869-6deef05e13e3/volumes/kubernetes.io~projected/kube-api-access-t92bz major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9e04572-1425-440e-9869-6deef05e13e3/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/e9e04572-1425-440e-9869-6deef05e13e3/volumes/kubernetes.io~secret/srv-cert major:0 minor:543 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/efbcb147-d077-4749-9289-1682daccb657/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/efbcb147-d077-4749-9289-1682daccb657/volumes/kubernetes.io~projected/ca-certs major:0 minor:455 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/efbcb147-d077-4749-9289-1682daccb657/volumes/kubernetes.io~projected/kube-api-access-vqrdl:{mountpoint:/var/lib/kubelet/pods/efbcb147-d077-4749-9289-1682daccb657/volumes/kubernetes.io~projected/kube-api-access-vqrdl major:0 minor:456 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/efd0d6b1-652c-44b2-b918-5c7ced5d15c3/volumes/kubernetes.io~projected/kube-api-access-5wkqk:{mountpoint:/var/lib/kubelet/pods/efd0d6b1-652c-44b2-b918-5c7ced5d15c3/volumes/kubernetes.io~projected/kube-api-access-5wkqk major:0 minor:490 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7ff61c7-32d1-4407-a792-8e22bb4d50f9/volumes/kubernetes.io~projected/kube-api-access-nf82n:{mountpoint:/var/lib/kubelet/pods/f7ff61c7-32d1-4407-a792-8e22bb4d50f9/volumes/kubernetes.io~projected/kube-api-access-nf82n major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f7ff61c7-32d1-4407-a792-8e22bb4d50f9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f7ff61c7-32d1-4407-a792-8e22bb4d50f9/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fcf459dc-bd30-4143-b5c4-60fd01b46548/volumes/kubernetes.io~projected/kube-api-access-xzp78:{mountpoint:/var/lib/kubelet/pods/fcf459dc-bd30-4143-b5c4-60fd01b46548/volumes/kubernetes.io~projected/kube-api-access-xzp78 major:0 minor:862 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fcf459dc-bd30-4143-b5c4-60fd01b46548/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/fcf459dc-bd30-4143-b5c4-60fd01b46548/volumes/kubernetes.io~secret/proxy-tls major:0 minor:860 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab/volumes/kubernetes.io~projected/kube-api-access-rf2qx:{mountpoint:/var/lib/kubelet/pods/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab/volumes/kubernetes.io~projected/kube-api-access-rf2qx major:0 minor:357 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab/volumes/kubernetes.io~secret/signing-key major:0 minor:356 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fdab27a1-1d7a-4dc5-b828-eba3f57592dd/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/fdab27a1-1d7a-4dc5-b828-eba3f57592dd/volumes/kubernetes.io~projected/kube-api-access major:0 minor:798 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fdab27a1-1d7a-4dc5-b828-eba3f57592dd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fdab27a1-1d7a-4dc5-b828-eba3f57592dd/volumes/kubernetes.io~secret/serving-cert major:0 minor:803 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fea7b899-fde4-4463-9520-4d433a8ebe21/volumes/kubernetes.io~projected/kube-api-access-ts9b9:{mountpoint:/var/lib/kubelet/pods/fea7b899-fde4-4463-9520-4d433a8ebe21/volumes/kubernetes.io~projected/kube-api-access-ts9b9 major:0 minor:100 fsType:tmpfs blockSize:0} overlay_0-106:{mountpoint:/var/lib/containers/storage/overlay/3e5a3adaba6a56dd4426c71040fc587e60bbdde94919e0abd38918058afc3893/merged major:0 minor:106 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/ab4b5f0ed4b684d8f0b363dac491b853fa3da515dfb9ddbed84b9783f3b0d424/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-109:{mountpoint:/var/lib/containers/storage/overlay/5dd9150bc65868fa24c4256fdeb98b06a692d993c994c161b6a1769f08b5242f/merged major:0 minor:109 fsType:overlay blockSize:0} overlay_0-113:{mountpoint:/var/lib/containers/storage/overlay/63e8744864e42acfaeeb9ca0c4df55238c2dc04c570bb0bff01b71a9a3cdd972/merged major:0 minor:113 fsType:overlay blockSize:0} overlay_0-117:{mountpoint:/var/lib/containers/storage/overlay/230b845cd22cdbae440715b993ceacf024c3eb27456ff73c3f13cb327dc2a15c/merged major:0 minor:117 fsType:overlay blockSize:0} 
overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/048f03f3200551b0fdb293888e2cfab6b47ba228d07864c32879e47fd544d31a/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-128:{mountpoint:/var/lib/containers/storage/overlay/d7b32d6f52c6e21b8d4c124367bf9bc94d1ec8d01eba6c8b154fb9d4b6ff252f/merged major:0 minor:128 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/ba821875d8f2d09c67960b628bd80ba7295c7a86307dd559a993d655ad74695b/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/938bb47b9676cc4c014ae3c6218e7b9d004161e8536a78c4dd5ba4b9cf1c0ff9/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/9de8a38afff62807154665d68e5d53e978e24142bf5a081c7f63e366cb1fa26e/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-144:{mountpoint:/var/lib/containers/storage/overlay/72c94b4d3c9098bc9a42db251d40eb350f0a2f91869b1b53620fc92337547242/merged major:0 minor:144 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/4995490c1d892df79ba3f9ab0ff04542ab70e207b8943f4d819e6ce7253d6766/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/47a03fb88ff60af44e90ead8966624a58145f8ae908cfc9135496cca0de559dc/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/ff2fdf168798b6dd873e01a789ef75cb0ee93d51efa9c2d3030bfa85d1b01e22/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-164:{mountpoint:/var/lib/containers/storage/overlay/3e524e4fca6a121b14d4862ca00042bbf168d85be6c414d6b49d27bebb363917/merged major:0 minor:164 fsType:overlay blockSize:0} overlay_0-165:{mountpoint:/var/lib/containers/storage/overlay/91c9efa36b8b043acf091b2e47777288503984061ebb6e3cf1a693dfd4c99cf4/merged major:0 minor:165 fsType:overlay blockSize:0} 
overlay_0-166:{mountpoint:/var/lib/containers/storage/overlay/a0bbc363d45482979abbd3a9155fc1cf87b53346d770666c8e055fe1923c008d/merged major:0 minor:166 fsType:overlay blockSize:0} overlay_0-169:{mountpoint:/var/lib/containers/storage/overlay/dbda68c3fa2850449d5f0a63bd24f1aa1b17c6d15e3e73f05b64faeb598ea167/merged major:0 minor:169 fsType:overlay blockSize:0} overlay_0-171:{mountpoint:/var/lib/containers/storage/overlay/e0a8b65888cecdba28af337c7264e3253d10cb1831f887836211476cfdeb23c5/merged major:0 minor:171 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/e39a1115d62f46a98f84dab0fae5939bd3450f50111ed27cab088b0bb23f9bcc/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/397b8335ce595c480c2fb98072849c0a4f2d4f9e31c706fdc8799c3ccbc2bdc6/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/afde5f7cb1b5e67175567ec51589b957bf63b638658fbf75fe266c74f183da1f/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/ca75afc0abe50c3c409a3dd7b3ff5d29c918e7940798e4bdc799eeb4590e3c63/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/51810ee05c7308617d1b9228d22bd2f2a94d94f05c0862a99ea75abcd1e9a068/merged major:0 minor:199 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/1b314302766293157882743f84a5f315e7d6d6e6a6d7e21ee0b0dc6bc750895e/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-242:{mountpoint:/var/lib/containers/storage/overlay/a7a1d3d76ac2b826816ae765ff55db0ff84190d30a2fc6a06f084db3e17661c7/merged major:0 minor:242 fsType:overlay blockSize:0} overlay_0-263:{mountpoint:/var/lib/containers/storage/overlay/d3e144e9ecf4ff1aec9a373a67a2b2b6f1ad51fc4bde5e2c0ac1ad1ef60fcb99/merged major:0 minor:263 fsType:overlay blockSize:0} 
overlay_0-273:{mountpoint:/var/lib/containers/storage/overlay/be29a67b0c658ff064407ca06bef7e2258154a4e31b0977a9988164b4a74a969/merged major:0 minor:273 fsType:overlay blockSize:0} overlay_0-280:{mountpoint:/var/lib/containers/storage/overlay/05327422cee626a7c5414860ed297136bde63e6b55ef9a6c141a037c71090962/merged major:0 minor:280 fsType:overlay blockSize:0} overlay_0-282:{mountpoint:/var/lib/containers/storage/overlay/f6765a5e8ba7b22d75c7a3e1cc3b26d4c166ad3137715f64a1a55cb5cb6b56a6/merged major:0 minor:282 fsType:overlay blockSize:0} overlay_0-284:{mountpoint:/var/lib/containers/storage/overlay/14d5fa9bb70c1e978ec7103419d0cb59559bf114aad1c80282385ce045275da5/merged major:0 minor:284 fsType:overlay blockSize:0} overlay_0-286:{mountpoint:/var/lib/containers/storage/overlay/eb889e1eadd9b0335f01c2dcae987a9508309e056586b6599fc5a93f332952d3/merged major:0 minor:286 fsType:overlay blockSize:0} overlay_0-288:{mountpoint:/var/lib/containers/storage/overlay/4f3403bbde21c93358ba87e3a6eb0668028009a625947e5b1f47ea684323422c/merged major:0 minor:288 fsType:overlay blockSize:0} overlay_0-290:{mountpoint:/var/lib/containers/storage/overlay/f51f0eb157b480f321da334a35d40bd5a4b33933eada48d487740f6561b9afce/merged major:0 minor:290 fsType:overlay blockSize:0} overlay_0-292:{mountpoint:/var/lib/containers/storage/overlay/851f6d5ec3306f79381066c73776e93d42910c889a4ee8ef1af32a0352b7a872/merged major:0 minor:292 fsType:overlay blockSize:0} overlay_0-294:{mountpoint:/var/lib/containers/storage/overlay/6f0e887f9a1c796c7618ad7ba6babea82030c85225510ab9a3bdbd3edcc8a9cb/merged major:0 minor:294 fsType:overlay blockSize:0} overlay_0-296:{mountpoint:/var/lib/containers/storage/overlay/330b317d85d787d8bec4f1d97d1ef090c4b38d0876a653bdd29641a33a1dc672/merged major:0 minor:296 fsType:overlay blockSize:0} overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/353621adc635806ddeadb892b03ed5d02c2d3f9e6a6aafca04b09f694565fab5/merged major:0 minor:298 fsType:overlay blockSize:0} 
overlay_0-300:{mountpoint:/var/lib/containers/storage/overlay/e92ce1bc425864ea9580a8b1e0b3c9f8f24f633a0ca77182575d1ea9182046cf/merged major:0 minor:300 fsType:overlay blockSize:0} overlay_0-302:{mountpoint:/var/lib/containers/storage/overlay/ed74f4f46f9e8b2d3077870db2d79dd0b7360627a8ea2addc63302756effab1e/merged major:0 minor:302 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/8a3ce3bba35f86dc5e835ddbf13578d23ebbac9a8bd04f3a02bdcffd3d523de2/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/7dfb05b611851c76be581c8fc4838e6e3ff551da90742142efc333a53c53d845/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-318:{mountpoint:/var/lib/containers/storage/overlay/144203dfbe0f11bf5fac551b26b7e2c6d1c5396a7e5ae7bd5ce283d42007c1c2/merged major:0 minor:318 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/c6f9fdb2f157375ec7d770f8c57223a4e1cbd04824238b8afab3601ab41f346c/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-326:{mountpoint:/var/lib/containers/storage/overlay/3afe3ebe5d352af9b25dad065a2a5abee83e99e4713b92a151e72bd40f24bb1e/merged major:0 minor:326 fsType:overlay blockSize:0} overlay_0-328:{mountpoint:/var/lib/containers/storage/overlay/5a4b99be2729de20b67862f82759c95b8ec9377dcae7d2beeb2fa18240d32fe8/merged major:0 minor:328 fsType:overlay blockSize:0} overlay_0-331:{mountpoint:/var/lib/containers/storage/overlay/e94ad6ee739ad6a04170317d556ededa36bbf842c9a10353f2db4f2d0ff155e2/merged major:0 minor:331 fsType:overlay blockSize:0} overlay_0-332:{mountpoint:/var/lib/containers/storage/overlay/ca9e83d0814f87679a49000fdad46a5ebc35d2a1b7efbf9c25e4e111aa2f5335/merged major:0 minor:332 fsType:overlay blockSize:0} overlay_0-342:{mountpoint:/var/lib/containers/storage/overlay/8b8ef71be99826ff4f14b770ab4c88ea5def85c9fc73be78724a7bef3eb8c11f/merged major:0 minor:342 fsType:overlay blockSize:0} 
overlay_0-343:{mountpoint:/var/lib/containers/storage/overlay/3029422f5eaa11114350d7aa071ff6739adf95b5fc55e53a60342ad18530d65b/merged major:0 minor:343 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/3f325b25a4e0306e687dd5be2e61cbf8de6eb714c3f23d65393eb27295278e4c/merged major:0 minor:351 fsType:overlay blockSize:0} overlay_0-364:{mountpoint:/var/lib/containers/storage/overlay/587cdfd768fd618fc08ac9c276dd9880d1096b5ea2bb9cf0604b3de8e061cf28/merged major:0 minor:364 fsType:overlay blockSize:0} overlay_0-371:{mountpoint:/var/lib/containers/storage/overlay/e6f66ca212da6222d173486a2892e258785e4935e2d6b23a67c750e9f1b398dd/merged major:0 minor:371 fsType:overlay blockSize:0} overlay_0-378:{mountpoint:/var/lib/containers/storage/overlay/a37a6e2da61e962454c0d3d405c06ff56fa271ade37463b66abafec8952c838a/merged major:0 minor:378 fsType:overlay blockSize:0} overlay_0-384:{mountpoint:/var/lib/containers/storage/overlay/a1f3a7f1d75a124c1225d77d09386079d08a35b509b3ccb8111ed2b5c66e62eb/merged major:0 minor:384 fsType:overlay blockSize:0} overlay_0-388:{mountpoint:/var/lib/containers/storage/overlay/794b6c47378fb32484e993c59c836bc88d4bece19f7096a6ad78741e2ba4f33b/merged major:0 minor:388 fsType:overlay blockSize:0} overlay_0-390:{mountpoint:/var/lib/containers/storage/overlay/b7bb48954c1eecf600a00b4a588fea4cfad2266860be9e92af1dabbfe9e730eb/merged major:0 minor:390 fsType:overlay blockSize:0} overlay_0-394:{mountpoint:/var/lib/containers/storage/overlay/0cfb8a5c8bebb1cf611d79bd92293d7b7c28ece906e19e82abba67e0f8279fc2/merged major:0 minor:394 fsType:overlay blockSize:0} overlay_0-400:{mountpoint:/var/lib/containers/storage/overlay/37d488444a1257ff9996d7c2a5698f31087b569f9972929902b25a6a80eefb98/merged major:0 minor:400 fsType:overlay blockSize:0} overlay_0-42:{mountpoint:/var/lib/containers/storage/overlay/8295138f2f81859384c2fe9e5af6e8679b564eb17222ee39d9b6e0638ab39fdc/merged major:0 minor:42 fsType:overlay blockSize:0} 
overlay_0-424:{mountpoint:/var/lib/containers/storage/overlay/3ebc461315724bb66fe785a6b40b3337b7ad4945bfd7b0d7c417c31aede6fc56/merged major:0 minor:424 fsType:overlay blockSize:0} overlay_0-432:{mountpoint:/var/lib/containers/storage/overlay/acc49e2c8dbf3fbef3d3254ffad1ef66969ef59024ba4014f911ae45e3b83d07/merged major:0 minor:432 fsType:overlay blockSize:0} overlay_0-434:{mountpoint:/var/lib/containers/storage/overlay/d460f00a067ab3280448053e31d36a393114ed5a575a1deef126edfcc207f1f3/merged major:0 minor:434 fsType:overlay blockSize:0} overlay_0-436:{mountpoint:/var/lib/containers/storage/overlay/bc9ba933913700818fe3cdfb590749f5ef32956d91ab44e602e99ff17fa1fac8/merged major:0 minor:436 fsType:overlay blockSize:0} overlay_0-441:{mountpoint:/var/lib/containers/storage/overlay/6fe3fb049cc256de7abfedce76e0bb4679fa73ba7aa81cab5ae3795ddc00667f/merged major:0 minor:441 fsType:overlay blockSize:0} overlay_0-446:{mountpoint:/var/lib/containers/storage/overlay/60e495a99cfb27da683c3f549a722a444dc2b391418618b22872301c8587a74c/merged major:0 minor:446 fsType:overlay blockSize:0} overlay_0-447:{mountpoint:/var/lib/containers/storage/overlay/12f389ad8b008438d617b1aba2e0bb7f197b042bf2c91073fdd0dd61b2d9a584/merged major:0 minor:447 fsType:overlay blockSize:0} overlay_0-451:{mountpoint:/var/lib/containers/storage/overlay/09f55838d2b84aa90500591c7e99d3e2bd3ad09bf4d2f1e61c0f423abc18bdc6/merged major:0 minor:451 fsType:overlay blockSize:0} overlay_0-453:{mountpoint:/var/lib/containers/storage/overlay/41b5d056b3e9d64eaa51a19aff006fee4fef8ab7e3c196f4154a1abc8d12027a/merged major:0 minor:453 fsType:overlay blockSize:0} overlay_0-464:{mountpoint:/var/lib/containers/storage/overlay/9a7a999394a9b0f470f631aa7e200168fa735b87ffc04718753a55f7bca589b1/merged major:0 minor:464 fsType:overlay blockSize:0} overlay_0-471:{mountpoint:/var/lib/containers/storage/overlay/a1c30aea7a530c5cf09aaa2b8dc046055d425bc866a5181b2ab085377ddf79af/merged major:0 minor:471 fsType:overlay blockSize:0} 
overlay_0-472:{mountpoint:/var/lib/containers/storage/overlay/3f3ea0ca428871ffc6e4adec94db93b812f64c340830b2fb89fa36b456f9df64/merged major:0 minor:472 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/f6f1bb678a96e874c2653e4b9df58544769fc6eb2e2e30c858cfdb5f51f39a3a/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-493:{mountpoint:/var/lib/containers/storage/overlay/1d2b61575c2e49619d7b359f4b6965b333d8606f50635e5ab13f2dc18c8e5737/merged major:0 minor:493 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/f11013efd8b3f177ae1208c618df559992c93717e31d4520b970afcd94d7d4e5/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-502:{mountpoint:/var/lib/containers/storage/overlay/a31a4a8094afec1a15d7f56e8ba59ae4bd4c94c0a39c565e1c3ece9ed3176cd3/merged major:0 minor:502 fsType:overlay blockSize:0} overlay_0-503:{mountpoint:/var/lib/containers/storage/overlay/bf03187dd836e53b1cacb5381a4a82d9b3dcd3b0aecb755e643f349a9ebe15f0/merged major:0 minor:503 fsType:overlay blockSize:0} overlay_0-509:{mountpoint:/var/lib/containers/storage/overlay/99a6816a6bc253e1f71ea84f6174395b318acd239797d4e1ca61ca3c4decdbb6/merged major:0 minor:509 fsType:overlay blockSize:0} overlay_0-518:{mountpoint:/var/lib/containers/storage/overlay/dd3f64e0e384ac054b91a8a5fa8ae60b4d9b9cad9ce90a3e6594c60dda6c52e6/merged major:0 minor:518 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/3e8521005f6bd9b0f3ff357c656d62f7ab3c52622963c8364c16a75d5d654531/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-522:{mountpoint:/var/lib/containers/storage/overlay/19e7fff6b4e06f20e53dce4104cea83b1d3bec4ef88a009853e7f3b337448091/merged major:0 minor:522 fsType:overlay blockSize:0} overlay_0-528:{mountpoint:/var/lib/containers/storage/overlay/0fe5051be92b2a8988ca724e30972b0ae9298b8fca3b168cf3e0b8fae4cbb775/merged major:0 minor:528 fsType:overlay blockSize:0} 
overlay_0-557:{mountpoint:/var/lib/containers/storage/overlay/85ac661a11283333ce2812ca8a27d6f7d780beae2d1e557ffc4fce250fb20d04/merged major:0 minor:557 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/79ed0ba7437157a00acb54b4f4f7c7ebc9d5e59fa031cc5c2e664cddd2eea6ad/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-568:{mountpoint:/var/lib/containers/storage/overlay/20363c1537e0905b2daba972e227ec844e55f3a88c7c2d5a2d223e3539782d9a/merged major:0 minor:568 fsType:overlay blockSize:0} overlay_0-576:{mountpoint:/var/lib/containers/storage/overlay/bad577ef477f0b06f026eda7cee7ce763bd28ba570098e02c83f191c97546d15/merged major:0 minor:576 fsType:overlay blockSize:0} overlay_0-578:{mountpoint:/var/lib/containers/storage/overlay/443e1f64e1b3428ccbf7c2f6ee55f59ad0765254e76cac58253f0e6634363513/merged major:0 minor:578 fsType:overlay blockSize:0} overlay_0-580:{mountpoint:/var/lib/containers/storage/overlay/069feb3c8c49776ee2a45beb2997e6df314844476c84e45cfd3a5886c5fd7684/merged major:0 minor:580 fsType:overlay blockSize:0} overlay_0-582:{mountpoint:/var/lib/containers/storage/overlay/0d415214e00ed4a7bae708347fc2a16b6243013c6a64c632d698bcd31f2dd2d9/merged major:0 minor:582 fsType:overlay blockSize:0} overlay_0-584:{mountpoint:/var/lib/containers/storage/overlay/6d1e90b6d35e8a7a70bf4f1035a422b9af373785c6cdb6856b165b61b2c732af/merged major:0 minor:584 fsType:overlay blockSize:0} overlay_0-586:{mountpoint:/var/lib/containers/storage/overlay/f405d7666432ca6af8095d892e7af3653ceacedb79ce1bc5f71319a5c30d157c/merged major:0 minor:586 fsType:overlay blockSize:0} overlay_0-588:{mountpoint:/var/lib/containers/storage/overlay/c6a09b3406509d4df248078a4e98f99bf4aa6ad23cc7fc2b3eac1d2cd2963eb2/merged major:0 minor:588 fsType:overlay blockSize:0} overlay_0-590:{mountpoint:/var/lib/containers/storage/overlay/71b1af397e0d70d113eb3471a760bbb695378b6a7bee2cd47e66820324074cc8/merged major:0 minor:590 fsType:overlay blockSize:0} 
overlay_0-592:{mountpoint:/var/lib/containers/storage/overlay/4529432edd967224ddb0478a731f9c9742cc9743f2b97bcc8a7d4ba77c19e193/merged major:0 minor:592 fsType:overlay blockSize:0} overlay_0-594:{mountpoint:/var/lib/containers/storage/overlay/a55adf099a1894444a688fc94f142adde6d24bece6188ffcd54886698de35250/merged major:0 minor:594 fsType:overlay blockSize:0} overlay_0-597:{mountpoint:/var/lib/containers/storage/overlay/02dbd0e4d9f95cd30c16bdc0732fe7e41da1f022e79c6ef7b57e5e8971cf8bdf/merged major:0 minor:597 fsType:overlay blockSize:0} overlay_0-599:{mountpoint:/var/lib/containers/storage/overlay/a6064cdc3687351bdecd69c3283caad028d4f78a56a306c1b17926d183db9511/merged major:0 minor:599 fsType:overlay blockSize:0} overlay_0-607:{mountpoint:/var/lib/containers/storage/overlay/1a5d6f9a56e9706cd28a1be9e98242295a7b3a9bae87b85854e58eb65d1a487f/merged major:0 minor:607 fsType:overlay blockSize:0} overlay_0-608:{mountpoint:/var/lib/containers/storage/overlay/d46af2ebae0a328bf545bbfb3730d215f3ee00c50d952942aa44528e79a72070/merged major:0 minor:608 fsType:overlay blockSize:0} overlay_0-610:{mountpoint:/var/lib/containers/storage/overlay/2a237756d068796ce43e47b9ed0b0518122b1fadfc9e53a4a95cc4cfddc71a8e/merged major:0 minor:610 fsType:overlay blockSize:0} overlay_0-611:{mountpoint:/var/lib/containers/storage/overlay/86a1698f93cc6ba876d1e5fe7ae2930bb8f769bca8f27d93505b5c1945fbf4b6/merged major:0 minor:611 fsType:overlay blockSize:0} overlay_0-613:{mountpoint:/var/lib/containers/storage/overlay/d2a19e2c21b937f43b003b9bf62492602df9216a39323f83248153b84f18b25a/merged major:0 minor:613 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/7bd74885fa8e528d22fbc0ae3217140ddcfa23c57ef3775f8c87b470cfc24c67/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-622:{mountpoint:/var/lib/containers/storage/overlay/86f4f70c4e6663959d332fb3fd0abe677bb0ee861e8eb0c753dea801d648f6e6/merged major:0 minor:622 fsType:overlay blockSize:0} 
overlay_0-623:{mountpoint:/var/lib/containers/storage/overlay/2ab1b6f8232195f3b1d25e642dfef0ddae240e428f01ee9e313e4e3685dc1de9/merged major:0 minor:623 fsType:overlay blockSize:0} overlay_0-632:{mountpoint:/var/lib/containers/storage/overlay/4353bfae47050653144524ad79affd50c24634ce364122161a98d9a828714907/merged major:0 minor:632 fsType:overlay blockSize:0} overlay_0-634:{mountpoint:/var/lib/containers/storage/overlay/8a973437ab66e8ac46bbff5442c4939a1cf34e2465a275c689c52efaccf8bd22/merged major:0 minor:634 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/2fd0408bd1c8d3e8bd0cc7b34e8da52a06d2549f53156c51b21350f6242ea0f9/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-641:{mountpoint:/var/lib/containers/storage/overlay/5643989556c12426b6373ce3923194bbc9ad82a1a698b443f5b9689772a8c75f/merged major:0 minor:641 fsType:overlay blockSize:0} overlay_0-646:{mountpoint:/var/lib/containers/storage/overlay/7ed42d6e3db197d63574b68b7afa129b1c325ce7366dc86b0601fff8c8d89a77/merged major:0 minor:646 fsType:overlay blockSize:0} overlay_0-650:{mountpoint:/var/lib/containers/storage/overlay/c9cdcd2d011fcf5d39d8ad0550105aa331b0d48317189b468d5ee118fb61bb87/merged major:0 minor:650 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/fd039b131b8c9996b2f18e23351672490b4ad785454f05018c836cc8cfcb22e9/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-661:{mountpoint:/var/lib/containers/storage/overlay/51575854576a109260a477b3a48d6c23f2c23dbe0365d4bbf284afa99a2f7cd6/merged major:0 minor:661 fsType:overlay blockSize:0} overlay_0-664:{mountpoint:/var/lib/containers/storage/overlay/5dee8725c06037de6e5af3149ddfb0b234b4469093f4cd9294377ba73048b73f/merged major:0 minor:664 fsType:overlay blockSize:0} overlay_0-668:{mountpoint:/var/lib/containers/storage/overlay/720d78966c1ea86c2e90a1f10632ae6cc6dac83ba6a5f3278e8e52b70b15a421/merged major:0 minor:668 fsType:overlay blockSize:0} 
overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/6f3ec783518c393fb8e98608fddf94f2221a57980bb969f215c0e0bcdcbd135d/merged major:0 minor:67 fsType:overlay blockSize:0} overlay_0-680:{mountpoint:/var/lib/containers/storage/overlay/7f6a9a60f5f6d2a1e957c4817e429c80303a5a99362b239475c2bce1d8e52f00/merged major:0 minor:680 fsType:overlay blockSize:0} overlay_0-697:{mountpoint:/var/lib/containers/storage/overlay/220b73c1949ab22eed989d2986d0f871c2226b6097245f1a974c98c6d0169135/merged major:0 minor:697 fsType:overlay blockSize:0} overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/7bbf765ab657d2868f81e09d6ca20fc89977b39d936fb321497cece8c68af8ed/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/cf1a135e901b12bd2957b0aa08cf6afa26b5737699e54295b8c23ed58b563c5a/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-726:{mountpoint:/var/lib/containers/storage/overlay/a3393d7cc910294bdfbf274fe14e3a1d9298d8f1c7d4fd861389876f4aca1a3a/merged major:0 minor:726 fsType:overlay blockSize:0} overlay_0-734:{mountpoint:/var/lib/containers/storage/overlay/e1a31ae1750c59ce4b7072985999d0acae4c257d6a328d448582370308bdde09/merged major:0 minor:734 fsType:overlay blockSize:0} overlay_0-739:{mountpoint:/var/lib/containers/storage/overlay/d9d7ea196ccf8d4b88821a3995f77a10c73e689afd2f69434b209bf17ab46d81/merged major:0 minor:739 fsType:overlay blockSize:0} overlay_0-745:{mountpoint:/var/lib/containers/storage/overlay/c76f8ad4b43b63e2f9da77fb265395fe6c8905654c5df8121fe3029972b5e3b8/merged major:0 minor:745 fsType:overlay blockSize:0} overlay_0-750:{mountpoint:/var/lib/containers/storage/overlay/fd2da243bb46ae5ea6ce6309ae5b6a8102306da067fc1c99f19066b5e11e62a0/merged major:0 minor:750 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/80c410ee1c4b9418713542ca1579ab022d8d61d16a56b3a2178ce1c7106ab3b5/merged major:0 minor:76 fsType:overlay blockSize:0} 
overlay_0-761:{mountpoint:/var/lib/containers/storage/overlay/294b841d8dae5a877c712d00a2e4dcdba6390cc5601326518567c73f52f57445/merged major:0 minor:761 fsType:overlay blockSize:0} overlay_0-766:{mountpoint:/var/lib/containers/storage/overlay/8ea9f2df98beb6fb55f8656cebf583df11d8b8fe33fdf0ccb2d79fa7149e355d/merged major:0 minor:766 fsType:overlay blockSize:0} overlay_0-77:{mountpoint:/var/lib/containers/storage/overlay/5aeca26052585c2478e59ff0a761a0b76dfcfc842cee92ac8450097587fb22c7/merged major:0 minor:77 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/861db2b0026097055e9a5769407f347ec882ed67a95d0a7c95dd3fb26283b7d7/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-794:{mountpoint:/var/lib/containers/storage/overlay/5f078495103881684e4056e21252fb5ce96c7e50f25d05c71c27763e03c2812c/merged major:0 minor:794 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/95d927834eb83cca8345be66239eec16457d6efd8b904a1d97eb6cf01ba132b2/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-800:{mountpoint:/var/lib/containers/storage/overlay/e98a8c81f5cb87c74036e4fb4d84d4b09cb5d001949567ba679379ece8a994af/merged major:0 minor:800 fsType:overlay blockSize:0} overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/e0fcba471e9778e0d3a068e74f87217ce89ec34028ae02a15383809b7fb3ef10/merged major:0 minor:81 fsType:overlay blockSize:0} overlay_0-818:{mountpoint:/var/lib/containers/storage/overlay/b3d60ccb8d8bafcb388779948fc56b1ef221666637c9ec3832af1b66159ab9f0/merged major:0 minor:818 fsType:overlay blockSize:0} overlay_0-820:{mountpoint:/var/lib/containers/storage/overlay/4c324f562d697feb36ea39da8b887205d0c910c4d5cb748b50e030bef22d6228/merged major:0 minor:820 fsType:overlay blockSize:0} overlay_0-822:{mountpoint:/var/lib/containers/storage/overlay/2b8d961fedeb884a7a839398009cb2df3538b34cdc7f17ed8beb2736831536c9/merged major:0 minor:822 fsType:overlay blockSize:0} 
overlay_0-824:{mountpoint:/var/lib/containers/storage/overlay/6290a919d39bffc9221c8034a1d8e5d581b434980e8640f728ac325bce49b9f6/merged major:0 minor:824 fsType:overlay blockSize:0} overlay_0-831:{mountpoint:/var/lib/containers/storage/overlay/39287c812bdfc08ccaaea646d95be9e0abd97fd9a71da9b292d85438cb5ee0de/merged major:0 minor:831 fsType:overlay blockSize:0} overlay_0-837:{mountpoint:/var/lib/containers/storage/overlay/11dee7c789eb30557e4ddca6cedf0558f0264f4ce50f4dac3a5a98b55b407f06/merged major:0 minor:837 fsType:overlay blockSize:0} overlay_0-840:{mountpoint:/var/lib/containers/storage/overlay/c6da2f6ea4bde5ce950a6b33c564164d3864c4659b4aa4977e2d177897e46027/merged major:0 minor:840 fsType:overlay blockSize:0} overlay_0-844:{mountpoint:/var/lib/containers/storage/overlay/4b8b1a4f7e33cf13a196761ee81c03924b30b98848dbfe0aacc2c0449e672aa4/merged major:0 minor:844 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/e8f96d31d9bfd28173d412c4e14fd8a8bf94c8eeaffbde1fd07caeddc8388ee0/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-856:{mountpoint:/var/lib/containers/storage/overlay/62ec4749c1fb71a4e7601e0124b2f8ff58359e9186b078b9c55c4b687a19a8a3/merged major:0 minor:856 fsType:overlay blockSize:0} overlay_0-861:{mountpoint:/var/lib/containers/storage/overlay/39e588a632408a99d8ef2546ce7ab36bdbc1dfc49107a82fb0f319ad7b78d062/merged major:0 minor:861 fsType:overlay blockSize:0} overlay_0-877:{mountpoint:/var/lib/containers/storage/overlay/f235b97179899c36e333d3184477b175d19e966a230204408d83880596e17169/merged major:0 minor:877 fsType:overlay blockSize:0} overlay_0-879:{mountpoint:/var/lib/containers/storage/overlay/9688a92bc357bfb303667ba4c379157a152eb02047bf8bfb27ed1a7529f14818/merged major:0 minor:879 fsType:overlay blockSize:0} overlay_0-882:{mountpoint:/var/lib/containers/storage/overlay/3587197a5c44919a657a1495103b99bf0c0187e8eb1690a3c94948db35a8d70a/merged major:0 minor:882 fsType:overlay blockSize:0} 
overlay_0-897:{mountpoint:/var/lib/containers/storage/overlay/7e7bcb8c74bb73e6237d57edd1ef473e5c4a07ace695b3b4c428a358bf1211b3/merged major:0 minor:897 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/448b804eaf6b0b469c96e91cebd54d9b461ea738a59e8761b5d9c9ce7ce04bfd/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-901:{mountpoint:/var/lib/containers/storage/overlay/6e388bf3556dec423796514be4b641a18cf4bd3eeda1555c006702a17260eb2a/merged major:0 minor:901 fsType:overlay blockSize:0} overlay_0-91:{mountpoint:/var/lib/containers/storage/overlay/251a99b68f0032787c2b1fcacbdf7b3876784f59f953532e896a0ada4ab516c1/merged major:0 minor:91 fsType:overlay blockSize:0} overlay_0-921:{mountpoint:/var/lib/containers/storage/overlay/6e10fa8b4a1be5d9640123287ae2884a6a4268b35490b7193fe382fb141aebc3/merged major:0 minor:921 fsType:overlay blockSize:0} overlay_0-924:{mountpoint:/var/lib/containers/storage/overlay/c302768fe3887a0a051b96f25411b89bb6bfb4df242210f4149f38e8d106e198/merged major:0 minor:924 fsType:overlay blockSize:0} overlay_0-926:{mountpoint:/var/lib/containers/storage/overlay/6b4f33574f123fd6e0902f90122ea6bd770e76c8b5e1cbbc8048b5d3725f54fe/merged major:0 minor:926 fsType:overlay blockSize:0} overlay_0-928:{mountpoint:/var/lib/containers/storage/overlay/d510f44654ea5e691f03c62292d2cac505f303435874d4e2c7db9c00feea7153/merged major:0 minor:928 fsType:overlay blockSize:0} overlay_0-93:{mountpoint:/var/lib/containers/storage/overlay/242a7ed600d8fbdfe0d4a92062f0a8e639632c4a92955978cc106d921352965e/merged major:0 minor:93 fsType:overlay blockSize:0} overlay_0-930:{mountpoint:/var/lib/containers/storage/overlay/fbcaaf27e8d6134e1d0b4e08a3b07dae186e605e7e332f5963f90b6f134ccbb9/merged major:0 minor:930 fsType:overlay blockSize:0} overlay_0-950:{mountpoint:/var/lib/containers/storage/overlay/edad70c384777ff2bc9b5393d3c0194ab7694b73570eeb5cab03a818e8328452/merged major:0 minor:950 fsType:overlay blockSize:0} 
overlay_0-952:{mountpoint:/var/lib/containers/storage/overlay/5e59a489981646ecd860a2b586fea508d9639be34e1319b5ebec16c43674c1c3/merged major:0 minor:952 fsType:overlay blockSize:0} overlay_0-965:{mountpoint:/var/lib/containers/storage/overlay/30e62a0cf2b923799bd90b407357f9b042d2846bf21a97d6f9abc99928c46ebb/merged major:0 minor:965 fsType:overlay blockSize:0} overlay_0-967:{mountpoint:/var/lib/containers/storage/overlay/b8a6f794ed43867248a147149302fb80f7b850aa5262ad3fc18512c0dfd72f32/merged major:0 minor:967 fsType:overlay blockSize:0} overlay_0-969:{mountpoint:/var/lib/containers/storage/overlay/07a84ade5ad96e5c343d995fd1a3ce06eab97560d765095d24e65cdc7ae30e8f/merged major:0 minor:969 fsType:overlay blockSize:0} overlay_0-977:{mountpoint:/var/lib/containers/storage/overlay/8a7e06610096e774d24f8f3efedcecf8ce7d837380cc3e1024fab3884a24c4dc/merged major:0 minor:977 fsType:overlay blockSize:0} overlay_0-98:{mountpoint:/var/lib/containers/storage/overlay/b626a66dd35a022a19d67c42e21f9ffce1a62069f0c8d5e86d6879ce7756fa97/merged major:0 minor:98 fsType:overlay blockSize:0} overlay_0-982:{mountpoint:/var/lib/containers/storage/overlay/08bece6ffc48e0f0d1b98d55c10c9e9e4906c4e9f873a5526968a91bf3fea2e3/merged major:0 minor:982 fsType:overlay blockSize:0} overlay_0-995:{mountpoint:/var/lib/containers/storage/overlay/9286abf94cc05b61e7c52c9bd49cb956e0fcbe0ac3aeeac5b6bb4270adffb321/merged major:0 minor:995 fsType:overlay blockSize:0} overlay_0-997:{mountpoint:/var/lib/containers/storage/overlay/730df00f90d29548b7162a17a68d018b8f51affffff4e67411f85c9b9ba23de9/merged major:0 minor:997 fsType:overlay blockSize:0} overlay_0-999:{mountpoint:/var/lib/containers/storage/overlay/91c8b4bc0b98bce78084f275424c271ec5584042e18dd80aad5d07896d1a1872/merged major:0 minor:999 fsType:overlay blockSize:0}] Mar 18 18:00:30.977846 master-0 kubenswrapper[30278]: I0318 18:00:30.973139 30278 manager.go:217] Machine: {Timestamp:2026-03-18 18:00:30.972398453 +0000 UTC m=+0.139583058 CPUVendorID:AuthenticAMD 
NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:6ad73e7bdc944176a9641991d01dd6fa SystemUUID:6ad73e7b-dc94-4176-a964-1991d01dd6fa BootID:00a5b6c0-ddc6-4fc3-aaa2-1f9950d0acc4 Filesystems:[{Device:/var/lib/kubelet/pods/14a0661b-7bde-4e22-a9a9-5e3fb24df77f/volumes/kubernetes.io~projected/kube-api-access-2pqww DeviceMajor:0 DeviceMinor:92 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-169 DeviceMajor:0 DeviceMinor:169 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b3385316-45f0-46c5-ac82-683168db5878/volumes/kubernetes.io~projected/kube-api-access-wd9sc DeviceMajor:0 DeviceMinor:947 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-371 DeviceMajor:0 DeviceMinor:371 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9875ed82-813c-483d-8471-8f9b74b774ee/volumes/kubernetes.io~projected/kube-api-access-76j8w DeviceMajor:0 DeviceMinor:143 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7e64a377-f497-4416-8f22-d5c7f52e0b65/volumes/kubernetes.io~projected/kube-api-access-sclm5 DeviceMajor:0 DeviceMinor:238 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/681e9cfa9d99b6787480ff89127df11d81327ab93296d6efacd157b94bbfa393/userdata/shm DeviceMajor:0 DeviceMinor:269 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6607dcf54fd176dc56698130f9297b2ab4381953d03d40abc0b2240c71f3820b/userdata/shm DeviceMajor:0 DeviceMinor:494 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/04cef0bd-f365-4bf6-864a-1895995015d6/volumes/kubernetes.io~projected/kube-api-access-qlhls DeviceMajor:0 DeviceMinor:789 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5b0e38f3-3ab5-4519-86a6-68003deb94da/volumes/kubernetes.io~projected/kube-api-access-grnqn DeviceMajor:0 DeviceMinor:99 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9c6ba19a43312e7426d156208bc0c31b36ee526eb8006e7186d5ea94923d4e9f/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/30d77a7c-222e-41c7-8a98-219854aa3da2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:491 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/14298257e1956a282ef61298797ea8ea8e4d9b9c2a924ea5f21c88394abce76c/userdata/shm DeviceMajor:0 DeviceMinor:796 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7c6694a8-ccd0-491b-9f21-215450f6ce67/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:440 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/41da80af31fef99194cfa8b9345b104ba93283b541371be7f518ffdcd5945af7/userdata/shm DeviceMajor:0 DeviceMinor:413 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/99e215da-759d-4fff-af65-0fb64245fbd0/volumes/kubernetes.io~projected/kube-api-access-n8k5q DeviceMajor:0 DeviceMinor:250 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6855c26bf134f973aca5b753cd9252cc1f86b218f035870b1dab49845cbadb56/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-288 DeviceMajor:0 DeviceMinor:288 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-502 DeviceMajor:0 DeviceMinor:502 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ce5639dc0f602d1c7e6ad6fc44e82114cfe133ad8a9de1890037405180569936/userdata/shm DeviceMajor:0 DeviceMinor:408 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-424 DeviceMajor:0 DeviceMinor:424 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-982 DeviceMajor:0 DeviceMinor:982 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f7ff61c7-32d1-4407-a792-8e22bb4d50f9/volumes/kubernetes.io~projected/kube-api-access-nf82n DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/cb522b02-0b93-4711-9041-566daa06b95a/volumes/kubernetes.io~projected/kube-api-access-fk59q DeviceMajor:0 DeviceMinor:251 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-302 DeviceMajor:0 DeviceMinor:302 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-364 DeviceMajor:0 DeviceMinor:364 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-378 DeviceMajor:0 DeviceMinor:378 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-897 DeviceMajor:0 DeviceMinor:897 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-877 DeviceMajor:0 DeviceMinor:877 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b1352cc7-4099-44c5-9c31-8259fb783bc7/volumes/kubernetes.io~projected/kube-api-access-9pp5f DeviceMajor:0 DeviceMinor:240 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-818 DeviceMajor:0 DeviceMinor:818 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-820 DeviceMajor:0 DeviceMinor:820 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-331 DeviceMajor:0 DeviceMinor:331 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b5e733421a5534241a408f87a9d1282c96549651c461bd5bf9e9c1999c97d9e5/userdata/shm DeviceMajor:0 DeviceMinor:255 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fecfc938509f77a7c6b0246891b9f62fa9cb5c8d24c6ae113e36e04682301649/userdata/shm DeviceMajor:0 DeviceMinor:428 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d26d4515-391e-41a5-8c82-1b2b8a375662/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:537 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/56cde2f7-1742-45d6-aa22-8270cfb424a7/volumes/kubernetes.io~projected/kube-api-access-mbctm DeviceMajor:0 DeviceMinor:555 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-800 DeviceMajor:0 DeviceMinor:800 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-117 DeviceMajor:0 DeviceMinor:117 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1ea3dbfa5dfb13332a0f1977477497e5220b4bba3727358399c90d2b8664c6d7/userdata/shm DeviceMajor:0 DeviceMinor:89 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8d76a48b181c0cd15d1de5c39a3bc3d9f330bf1dff375bce677cfee095393ae6/userdata/shm DeviceMajor:0 DeviceMinor:247 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d5c2063a6c515ba48297abcc083d96a0ee10588d1979ea68a5e48cdf4d96c90e/userdata/shm DeviceMajor:0 DeviceMinor:277 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/22b260c86b95c080bc9989f63b5311a346d5ef3d9e462e33577fe76c4fe05c6d/userdata/shm DeviceMajor:0 DeviceMinor:355 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-447 DeviceMajor:0 DeviceMinor:447 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-650 DeviceMajor:0 DeviceMinor:650 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-901 DeviceMajor:0 DeviceMinor:901 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-292 DeviceMajor:0 DeviceMinor:292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~projected/kube-api-access-tnknt DeviceMajor:0 DeviceMinor:239 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-342 DeviceMajor:0 DeviceMinor:342 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/822080a5-2926-4a51-866d-86bb0b437da2/volumes/kubernetes.io~projected/kube-api-access-f48gg DeviceMajor:0 DeviceMinor:675 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-861 DeviceMajor:0 DeviceMinor:861 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/01d8f1f738d166015accb45a5a875b9da0577b0908a968320b9793f9dbe962a2/userdata/shm DeviceMajor:0 DeviceMinor:948 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-999 DeviceMajor:0 DeviceMinor:999 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-509 DeviceMajor:0 DeviceMinor:509 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/37b3753f-bf4f-4a9e-a4a8-d58296bada79/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:541 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/489dd872-39c3-4ce2-8dc1-9d0552b88616/volumes/kubernetes.io~projected/kube-api-access-wjtg7 DeviceMajor:0 DeviceMinor:797 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f975ed7e1c1dcf64feeba9dd4dfc173ec9be8b509e8d2f868a326c611d5b7d2d/userdata/shm DeviceMajor:0 DeviceMinor:307 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-599 DeviceMajor:0 DeviceMinor:599 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/36a9c5c55aaa067ac7414f9662835335c782889c32307de35102428e52f590c8/userdata/shm DeviceMajor:0 DeviceMinor:993 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/e9e04572-1425-440e-9869-6deef05e13e3/volumes/kubernetes.io~projected/kube-api-access-t92bz DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/59407fdf-b1e9-4992-a3c8-54b4e26f496c/volumes/kubernetes.io~projected/kube-api-access-9dt8f DeviceMajor:0 DeviceMinor:488 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/43fab0f2-5cfd-4b5e-a632-728fd5b960fd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:535 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/43fab0f2-5cfd-4b5e-a632-728fd5b960fd/volumes/kubernetes.io~projected/kube-api-access-rsj86 DeviceMajor:0 DeviceMinor:553 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-831 DeviceMajor:0 DeviceMinor:831 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7e0345d8f514108b800a0c4627bc3a13dd0326586f06b4e1904eb81090cc64aa/userdata/shm DeviceMajor:0 DeviceMinor:816 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/14a0661b-7bde-4e22-a9a9-5e3fb24df77f/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a94ad7630ca05ccfa8a345aad202de39848166c994875cfad1d5875137f9cf66/userdata/shm DeviceMajor:0 DeviceMinor:115 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-451 DeviceMajor:0 DeviceMinor:451 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-472 DeviceMajor:0 DeviceMinor:472 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-611 DeviceMajor:0 DeviceMinor:611 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-634 DeviceMajor:0 DeviceMinor:634 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-471 DeviceMajor:0 DeviceMinor:471 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1db0a246-ca43-4e7c-b09e-e80218ae99b1/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:714 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/822080a5-2926-4a51-866d-86bb0b437da2/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:670 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fdab27a1-1d7a-4dc5-b828-eba3f57592dd/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:798 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0751c002-fe0e-4f13-bb9c-9accd8ca0df3/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:778 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-42 DeviceMajor:0 DeviceMinor:42 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-294 DeviceMajor:0 DeviceMinor:294 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-332 DeviceMajor:0 DeviceMinor:332 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-576 DeviceMajor:0 DeviceMinor:576 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-582 DeviceMajor:0 DeviceMinor:582 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/989b2fdc7ef152f9acfe916ef0d60955c7235426f6fe1dd2fc891166e785e105/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-165 DeviceMajor:0 DeviceMinor:165 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-328 DeviceMajor:0 DeviceMinor:328 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3a3a6c2c-78e7-41f3-acff-20173cbc012a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ce5831a6-5a8d-4cda-9299-5d86437bcab2/volumes/kubernetes.io~projected/kube-api-access-756j8 DeviceMajor:0 DeviceMinor:241 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-597 DeviceMajor:0 DeviceMinor:597 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-745 DeviceMajor:0 DeviceMinor:745 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/92153864-7959-4482-bf24-c8db36435fb5/volumes/kubernetes.io~projected/kube-api-access-sb496 DeviceMajor:0 DeviceMinor:784 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/26575d68-0488-4dfa-a5d0-5016e481dba6/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/dba5f8d7-4d25-42b5-9c58-813221bf96bb/volumes/kubernetes.io~projected/kube-api-access-lmsm4 DeviceMajor:0 DeviceMinor:254 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-93 DeviceMajor:0 
DeviceMinor:93 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-967 DeviceMajor:0 DeviceMinor:967 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-930 DeviceMajor:0 DeviceMinor:930 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0b9ff55a-73fb-473f-b406-1f8b6cffdb89/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/30d77a7c-222e-41c7-8a98-219854aa3da2/volumes/kubernetes.io~projected/kube-api-access-x47z7 DeviceMajor:0 DeviceMinor:474 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/822080a5-2926-4a51-866d-86bb0b437da2/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:674 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1db0a246-ca43-4e7c-b09e-e80218ae99b1/volumes/kubernetes.io~projected/kube-api-access-n9g8f DeviceMajor:0 DeviceMinor:719 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-739 DeviceMajor:0 DeviceMinor:739 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fcf459dc-bd30-4143-b5c4-60fd01b46548/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:860 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-995 DeviceMajor:0 DeviceMinor:995 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-632 DeviceMajor:0 DeviceMinor:632 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-952 DeviceMajor:0 DeviceMinor:952 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-128 DeviceMajor:0 DeviceMinor:128 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6f26e239-2988-4faa-bc1d-24b15b95b7f1/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:249 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e73f2834-c56c-4cef-ac3c-2317e9a4324c/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:542 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-580 DeviceMajor:0 DeviceMinor:580 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-607 DeviceMajor:0 DeviceMinor:607 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fdab27a1-1d7a-4dc5-b828-eba3f57592dd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:803 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-824 DeviceMajor:0 DeviceMinor:824 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7b94e08c-7944-445e-bfb7-6c7c14880c65/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/volumes/kubernetes.io~projected/kube-api-access-l5tw2 DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-434 DeviceMajor:0 DeviceMinor:434 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-697 DeviceMajor:0 DeviceMinor:697 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c3267271-e0c5-45d6-980c-d78e4f9eef35/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 
DeviceMinor:814 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-113 DeviceMajor:0 DeviceMinor:113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5a4f94f3-d63a-4869-b723-ae9637610b4b/volumes/kubernetes.io~projected/kube-api-access-hgnz6 DeviceMajor:0 DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b1352cc7-4099-44c5-9c31-8259fb783bc7/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:418 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2939a6d3195afe0f356d31ab56455f8d084b2077c497baf972062cb08363566d/userdata/shm DeviceMajor:0 DeviceMinor:498 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cfbf03c8cc7b89c553e9ea829ef567259d08d9f435265881b903a1b99dfdd65c/userdata/shm DeviceMajor:0 DeviceMinor:513 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8381cd7a6e5c885500f3bdd0849aefb2b5f39ab2f05f498f742ce3eacc790c78/userdata/shm DeviceMajor:0 DeviceMinor:560 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d4c75bee-d0d2-4261-8f89-8c3375dbd868/volumes/kubernetes.io~projected/kube-api-access-bz8rf DeviceMajor:0 DeviceMinor:793 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-77 DeviceMajor:0 DeviceMinor:77 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-503 DeviceMajor:0 DeviceMinor:503 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e9e04572-1425-440e-9869-6deef05e13e3/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:543 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d/volumes/kubernetes.io~projected/kube-api-access-fc27m DeviceMajor:0 DeviceMinor:792 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f7dc5373fa76e1da12d58e0de7c6eb4b3bc82471bd7a410a252fcb24df6cb1d6/userdata/shm DeviceMajor:0 DeviceMinor:827 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-882 DeviceMajor:0 DeviceMinor:882 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3a3a6c2c-78e7-41f3-acff-20173cbc012a/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:259 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-290 DeviceMajor:0 DeviceMinor:290 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-518 DeviceMajor:0 DeviceMinor:518 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/427e5ce9-f4b3-4f12-bb77-2b13775aa334/volumes/kubernetes.io~projected/kube-api-access-z5jd4 DeviceMajor:0 DeviceMinor:549 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/de189d27-4c60-49f1-9119-d1fde5c37b1e/volumes/kubernetes.io~projected/kube-api-access-tf476 DeviceMajor:0 DeviceMinor:787 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-557 DeviceMajor:0 DeviceMinor:557 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7b94e08c-7944-445e-bfb7-6c7c14880c65/volumes/kubernetes.io~projected/kube-api-access-g4zcv DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-171 DeviceMajor:0 DeviceMinor:171 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/d451cc909e96cb90161ef2054b945e5cb54cff4fe2886dd65033c87b6a8fe884/userdata/shm DeviceMajor:0 DeviceMinor:245 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-284 DeviceMajor:0 DeviceMinor:284 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-568 DeviceMajor:0 DeviceMinor:568 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-586 DeviceMajor:0 DeviceMinor:586 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c57f282a-829b-41b2-827a-f4bc598245a2/volumes/kubernetes.io~projected/kube-api-access-d6c68 DeviceMajor:0 DeviceMinor:914 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-977 DeviceMajor:0 DeviceMinor:977 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2753215bec4df07a683a29fd9db1d0ae5aeba0e6f73fa6fbc662ede34576fdd9/userdata/shm DeviceMajor:0 DeviceMinor:563 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a94f7bff-ad61-4c53-a8eb-000a13f26971/volumes/kubernetes.io~projected/kube-api-access-5xvzx DeviceMajor:0 DeviceMinor:790 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-166 DeviceMajor:0 DeviceMinor:166 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-921 DeviceMajor:0 DeviceMinor:921 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b4db07afd1a03d8c1456d9bd3e2fc4e66947bcaa942aef9864e3ed3e54889795/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:539 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/efbcb147-d077-4749-9289-1682daccb657/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:455 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1efe23c09252f4c82f118ceb82a14b9f9f470b6a2eb0f4b9f30449b0d185550a/userdata/shm DeviceMajor:0 DeviceMinor:559 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/726dac522b338193798e05019afcc3525452535e3149d4a25e33142fc811a586/userdata/shm DeviceMajor:0 DeviceMinor:809 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/594d4a59acf0a0da5be4aa4bcad6deb49fd2749cf6065ab7e5a5a39d60f17265/userdata/shm DeviceMajor:0 DeviceMinor:829 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dd1b805aae172e18f337dd45784c075e0ad3687afa3a8879338aa90a6a42ed54/userdata/shm DeviceMajor:0 DeviceMinor:415 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/db5aec9e61cde7bec4f42fa13ed7d132af8e2baca532c12d1638296fcf06dd34/userdata/shm DeviceMajor:0 DeviceMinor:104 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3d376cebaacd129d547a2b1f5d7c73be7200c80d9de53fd252db3ff4f06f931e/userdata/shm DeviceMajor:0 DeviceMinor:139 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/994fff04-c1d7-4f10-8d4b-6b49a6934829/volumes/kubernetes.io~projected/kube-api-access-9lwsm DeviceMajor:0 DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6bd8b74e410d81f6dbc5c2f014e72715199a5fa6c057d771fdb8890689635805/userdata/shm DeviceMajor:0 DeviceMinor:262 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-446 DeviceMajor:0 DeviceMinor:446 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-965 DeviceMajor:0 DeviceMinor:965 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:954 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311/volumes/kubernetes.io~projected/kube-api-access-clm4b DeviceMajor:0 DeviceMinor:237 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/978dcca6-b396-463f-9614-9e24194a1aaa/volumes/kubernetes.io~projected/kube-api-access-5s6f5 DeviceMajor:0 DeviceMinor:304 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7047a862-8cbe-46fb-9af3-06ba224cbe26/volumes/kubernetes.io~projected/kube-api-access-4g42g DeviceMajor:0 DeviceMinor:445 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/253ec853-f637-4aa4-8e8e-eb655dfccccb/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:409 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-91 DeviceMajor:0 
DeviceMinor:91 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/90cc2b02445555cd2d532e865fff8c504dc1d3510b60d980449ac43b37071918/userdata/shm DeviceMajor:0 DeviceMinor:919 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c38c5f03-a753-49f4-ab06-33e75a03bd45/volumes/kubernetes.io~projected/kube-api-access-d8d74 DeviceMajor:0 DeviceMinor:785 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c355c750-ae2f-49fa-9a16-8fb4f688853e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-280 DeviceMajor:0 DeviceMinor:280 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ce5831a6-5a8d-4cda-9299-5d86437bcab2/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:544 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-584 DeviceMajor:0 DeviceMinor:584 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-522 DeviceMajor:0 DeviceMinor:522 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8db04037-c7cc-4246-92c3-6e7985384b14/volumes/kubernetes.io~projected/kube-api-access-fglbh DeviceMajor:0 DeviceMinor:808 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7c6694a8-ccd0-491b-9f21-215450f6ce67/volumes/kubernetes.io~projected/kube-api-access-mrdqg DeviceMajor:0 DeviceMinor:268 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6f26e239-2988-4faa-bc1d-24b15b95b7f1/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:419 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-750 DeviceMajor:0 DeviceMinor:750 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-856 DeviceMajor:0 DeviceMinor:856 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-794 DeviceMajor:0 DeviceMinor:794 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7d39d93e-9be3-47e1-a44e-be2d18b55446/volumes/kubernetes.io~projected/kube-api-access-vkcx9 DeviceMajor:0 DeviceMinor:320 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c9ef5e66c74bafc259dc619a6d19d1eda5f874894c689b2f23043bfdee6a39c1/userdata/shm DeviceMajor:0 DeviceMinor:319 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/43fab0f2-5cfd-4b5e-a632-728fd5b960fd/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:536 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/86bb0fefbe9a7075d6c0212cf27e6d83a749aa0d66749340ff4d2f7ce29488f0/userdata/shm DeviceMajor:0 DeviceMinor:558 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c73523c110a89aa2ec5b986dce6527591a38ece4a4afaf4032ec9cf612257a34/userdata/shm DeviceMajor:0 DeviceMinor:370 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-950 DeviceMajor:0 DeviceMinor:950 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-318 DeviceMajor:0 DeviceMinor:318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-158 
DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1add5afbf418952e0016f7866a470207154a949d28966174c8a7f5fa79ba0e1f/userdata/shm DeviceMajor:0 DeviceMinor:134 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-300 DeviceMajor:0 DeviceMinor:300 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c38c5f03-a753-49f4-ab06-33e75a03bd45/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:783 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-844 DeviceMajor:0 DeviceMinor:844 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-623 DeviceMajor:0 DeviceMinor:623 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c087ce06-a16b-41f4-ba93-8fccdee09003/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-273 DeviceMajor:0 DeviceMinor:273 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7c6694a8-ccd0-491b-9f21-215450f6ce67/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:420 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/efd0d6b1-652c-44b2-b918-5c7ced5d15c3/volumes/kubernetes.io~projected/kube-api-access-5wkqk DeviceMajor:0 DeviceMinor:490 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98/volumes/kubernetes.io~projected/kube-api-access-qbdth DeviceMajor:0 DeviceMinor:958 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/016383ed2ea822809808dec1c74c3db939646679d52a777698739d705adae757/userdata/shm DeviceMajor:0 DeviceMinor:427 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-464 DeviceMajor:0 DeviceMinor:464 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:356 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-680 DeviceMajor:0 DeviceMinor:680 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-144 DeviceMajor:0 DeviceMinor:144 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f7ff61c7-32d1-4407-a792-8e22bb4d50f9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-390 DeviceMajor:0 DeviceMinor:390 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-109 DeviceMajor:0 DeviceMinor:109 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/506ad6c80a1b7c5f38a2826db8ee7d4115a8417001018ebd69e1309cf067fc6a/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c087ce06-a16b-41f4-ba93-8fccdee09003/volumes/kubernetes.io~projected/kube-api-access-789k6 DeviceMajor:0 DeviceMinor:244 Capacity:32475529216 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/30d77a7c-222e-41c7-8a98-219854aa3da2/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:487 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-528 DeviceMajor:0 DeviceMinor:528 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-726 DeviceMajor:0 DeviceMinor:726 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0751c002-fe0e-4f13-bb9c-9accd8ca0df3/volumes/kubernetes.io~projected/kube-api-access-njx6n DeviceMajor:0 DeviceMinor:788 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-326 DeviceMajor:0 DeviceMinor:326 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/99e215da-759d-4fff-af65-0fb64245fbd0/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-436 DeviceMajor:0 DeviceMinor:436 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-588 DeviceMajor:0 DeviceMinor:588 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/697aebe4c54170282f2900b6eb7950a2671c76c6eb51ac74def7ef20f0b63370/userdata/shm DeviceMajor:0 DeviceMinor:807 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c57f282a-829b-41b2-827a-f4bc598245a2/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:911 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e03411b7c2bf8367c968ed1adc09d9fecafd420d1122805ea765d466352e23a3/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-453 DeviceMajor:0 DeviceMinor:453 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/30d77a7c-222e-41c7-8a98-219854aa3da2/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:486 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-641 DeviceMajor:0 DeviceMinor:641 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8db04037-c7cc-4246-92c3-6e7985384b14/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:804 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c23a831f572d860a391d4d959c13e33c442846ac9ce5af54ffdc6e3a90052296/userdata/shm DeviceMajor:0 DeviceMinor:895 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-106 DeviceMajor:0 DeviceMinor:106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-286 DeviceMajor:0 DeviceMinor:286 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/98a2fd574e075391d5a514f212989330aab4c8ffe303103d815d81e2f13e5d87/userdata/shm DeviceMajor:0 DeviceMinor:874 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d32f636809075be6cf635b9dbbf658143a67ef27c719c0247cb93d87c34ccc46/userdata/shm DeviceMajor:0 DeviceMinor:918 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9b424d6c-7440-4c98-ac19-2d0642c696fd/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/cb522b02-0b93-4711-9041-566daa06b95a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 
Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/efbcb147-d077-4749-9289-1682daccb657/volumes/kubernetes.io~projected/kube-api-access-vqrdl DeviceMajor:0 DeviceMinor:456 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8db04037-c7cc-4246-92c3-6e7985384b14/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:802 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/4460d3d3-c55f-4f1c-a623-e3feccf937bb/volumes/kubernetes.io~projected/kube-api-access-2tskm DeviceMajor:0 DeviceMinor:155 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-837 DeviceMajor:0 DeviceMinor:837 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/89705fd182a90dfe140ac5efc8c14b16140f0a05f824bdb1f27db7295abcee76/userdata/shm DeviceMajor:0 DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7e64a377-f497-4416-8f22-d5c7f52e0b65/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/43fab0f2-5cfd-4b5e-a632-728fd5b960fd/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:530 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/56cde2f7-1742-45d6-aa22-8270cfb424a7/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:458 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-924 DeviceMajor:0 DeviceMinor:924 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-926 DeviceMajor:0 DeviceMinor:926 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/9b424d6c-7440-4c98-ac19-2d0642c696fd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0100a259-1358-45e8-8191-4e1f9a14ec89/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:229 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/62f87c779c80aac58d08d6114e2c8cc2c2974d823d9538d2de8360d3c4243057/userdata/shm DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-661 DeviceMajor:0 DeviceMinor:661 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7e64a377-f497-4416-8f22-d5c7f52e0b65/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:417 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-664 DeviceMajor:0 DeviceMinor:664 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b84bd85aac3ddf41b65c4a3ee28624adfec16e2d4dd19c154137ff1a28ded42b/userdata/shm DeviceMajor:0 DeviceMinor:723 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-384 DeviceMajor:0 DeviceMinor:384 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dfc93735e306184cc4596c59d2bb37e97390ba2f327b3655dd96eec7dc58139e/userdata/shm DeviceMajor:0 DeviceMinor:562 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-610 DeviceMajor:0 DeviceMinor:610 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-822 DeviceMajor:0 DeviceMinor:822 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fcf459dc-bd30-4143-b5c4-60fd01b46548/volumes/kubernetes.io~projected/kube-api-access-xzp78 DeviceMajor:0 DeviceMinor:862 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b3385316-45f0-46c5-ac82-683168db5878/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:938 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-969 DeviceMajor:0 DeviceMinor:969 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-432 DeviceMajor:0 DeviceMinor:432 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b3385316-45f0-46c5-ac82-683168db5878/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:939 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3850c530da1325c13b135240c71869228656f1ceff63510ab0a98443cee54a55/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-343 DeviceMajor:0 DeviceMinor:343 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-766 DeviceMajor:0 DeviceMinor:766 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c57f282a-829b-41b2-827a-f4bc598245a2/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:913 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/819894978d4b63b70f3c5ba05beeaf66b4fdd7279c891272a2e358b0b8143717/userdata/shm DeviceMajor:0 DeviceMinor:916 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-928 DeviceMajor:0 DeviceMinor:928 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a9a9d675b5bc654d44d972fe5be99d008e180b13cd245216bdc5bd95af4fe020/userdata/shm DeviceMajor:0 DeviceMinor:805 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/93f149a1ecb7aaccb9bdce489447440893c003702d0a6409833391c55955f7eb/userdata/shm DeviceMajor:0 DeviceMinor:362 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-388 DeviceMajor:0 DeviceMinor:388 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/230edebdcb314d25cf4af81ff75a06a2701ace4abbe260261cb0347a76dc2bd1/userdata/shm DeviceMajor:0 DeviceMinor:812 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-879 DeviceMajor:0 DeviceMinor:879 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-400 DeviceMajor:0 DeviceMinor:400 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e73f2834-c56c-4cef-ac3c-2317e9a4324c/volumes/kubernetes.io~projected/kube-api-access-qwps9 DeviceMajor:0 DeviceMinor:232 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab/volumes/kubernetes.io~projected/kube-api-access-rf2qx DeviceMajor:0 DeviceMinor:357 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-493 DeviceMajor:0 DeviceMinor:493 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c57f282a-829b-41b2-827a-f4bc598245a2/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:912 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-98 DeviceMajor:0 DeviceMinor:98 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-608 DeviceMajor:0 DeviceMinor:608 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-164 DeviceMajor:0 DeviceMinor:164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0b9ff55a-73fb-473f-b406-1f8b6cffdb89/volumes/kubernetes.io~projected/kube-api-access-2tvgq DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-613 DeviceMajor:0 DeviceMinor:613 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7b2841761444793b373ed80c5f092794f38989726bcf53c2a969f325f8459b75/userdata/shm DeviceMajor:0 DeviceMinor:95 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e0e04440-c08b-452d-9be6-9f70a4027c92/volumes/kubernetes.io~projected/kube-api-access-767c7 DeviceMajor:0 DeviceMinor:786 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7d72bb42-1ee6-4f61-9515-d1c5bafa896f/volumes/kubernetes.io~projected/kube-api-access-ljbl7 DeviceMajor:0 DeviceMinor:915 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-263 DeviceMajor:0 DeviceMinor:263 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/37b3753f-bf4f-4a9e-a4a8-d58296bada79/volumes/kubernetes.io~projected/kube-api-access-zwlxb DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/59407fdf-b1e9-4992-a3c8-54b4e26f496c/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:508 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/07ab0c66a64f7bf6d68ef0555d877888ab4c67aaec1ac0fea7f62d1ed0bed612/userdata/shm DeviceMajor:0 DeviceMinor:564 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-578 DeviceMajor:0 DeviceMinor:578 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-441 DeviceMajor:0 DeviceMinor:441 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-840 DeviceMajor:0 DeviceMinor:840 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e34a7d43723491c0ffb4df04571420d726ec22d80fe5f50be4255c5ba300c922/userdata/shm DeviceMajor:0 DeviceMinor:722 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e7f76afa-4b23-421c-8451-46323813f06e/volumes/kubernetes.io~projected/kube-api-access-gzhsq DeviceMajor:0 DeviceMinor:992 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fea7b899-fde4-4463-9520-4d433a8ebe21/volumes/kubernetes.io~projected/kube-api-access-ts9b9 DeviceMajor:0 DeviceMinor:100 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-242 DeviceMajor:0 DeviceMinor:242 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-761 DeviceMajor:0 DeviceMinor:761 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d4c75bee-d0d2-4261-8f89-8c3375dbd868/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:791 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/89e6c3d6-7bd5-4df6-90db-3a349f644afb/volumes/kubernetes.io~projected/kube-api-access-88hkw 
DeviceMajor:0 DeviceMinor:894 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/796303c5f2a585dc8ab37c0a21b453aa0dd8797dea11dc3eee7c72e5dad9b158/userdata/shm DeviceMajor:0 DeviceMinor:963 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e7f76afa-4b23-421c-8451-46323813f06e/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:987 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9875ed82-813c-483d-8471-8f9b74b774ee/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:138 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6f26e239-2988-4faa-bc1d-24b15b95b7f1/volumes/kubernetes.io~projected/kube-api-access-5sl7p DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-282 DeviceMajor:0 DeviceMinor:282 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-296 DeviceMajor:0 DeviceMinor:296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8a589501a96ed1e6f8752cc00ece99aa42162ad128546ec6cfe89722a04ec5b1/userdata/shm DeviceMajor:0 DeviceMinor:412 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-622 DeviceMajor:0 DeviceMinor:622 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/253ec853-f637-4aa4-8e8e-eb655dfccccb/volumes/kubernetes.io~projected/kube-api-access-cx596 DeviceMajor:0 DeviceMinor:718 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1d969530-c138-4fb7-9bfe-0825be66c009/volumes/kubernetes.io~projected/kube-api-access-cd868 DeviceMajor:0 DeviceMinor:275 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-592 DeviceMajor:0 DeviceMinor:592 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-997 DeviceMajor:0 DeviceMinor:997 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-646 DeviceMajor:0 DeviceMinor:646 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/37b3753f-bf4f-4a9e-a4a8-d58296bada79/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:540 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5a4f94f3-d63a-4869-b723-ae9637610b4b/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:457 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-394 DeviceMajor:0 DeviceMinor:394 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/39f34c1f903429d7c69072e5211db003fe4dc2847c946a6e7e2b74d4bd2e8ac8/userdata/shm DeviceMajor:0 DeviceMinor:216 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d26d4515-391e-41a5-8c82-1b2b8a375662/volumes/kubernetes.io~projected/kube-api-access-bm8jj DeviceMajor:0 DeviceMinor:265 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a0f14825defb92b50c4747c20631ca30f9e30632027bb38a918f6a6a14b5c095/userdata/shm DeviceMajor:0 DeviceMinor:309 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cc45ef13b745a7538de0764bc9063fe610d54078c6f17e39280d0e2b21ebeeb0/userdata/shm DeviceMajor:0 DeviceMinor:421 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/56cde2f7-1742-45d6-aa22-8270cfb424a7/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:554 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/89e6c3d6-7bd5-4df6-90db-3a349f644afb/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:891 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-668 DeviceMajor:0 DeviceMinor:668 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c355c750-ae2f-49fa-9a16-8fb4f688853e/volumes/kubernetes.io~projected/kube-api-access-zfnqp DeviceMajor:0 DeviceMinor:236 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a06a3f0fb54d1869684741c01721cbf6af520d75473205b84e908f306a368b3a/userdata/shm DeviceMajor:0 DeviceMinor:257 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5d787dbd681a8506a724fcd5492ed061edac8e14293b27fcf81a68f92d0df82e/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-590 DeviceMajor:0 DeviceMinor:590 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-734 DeviceMajor:0 DeviceMinor:734 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c3267271-e0c5-45d6-980c-d78e4f9eef35/volumes/kubernetes.io~projected/kube-api-access-z7xqg DeviceMajor:0 DeviceMinor:826 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9e2d0d0d-54ca-475b-be8a-4eb6d4434e74/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:907 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/26575d68-0488-4dfa-a5d0-5016e481dba6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3b679ceb3c60d8555810f42293ecb4e72f346293b26bbcc64d5cc427efca2bcd/userdata/shm DeviceMajor:0 DeviceMinor:340 Capacity:67108864 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/dc110414-3a6b-474c-bce3-33450cab8fcd/volumes/kubernetes.io~projected/kube-api-access-mnl7c DeviceMajor:0 DeviceMinor:811 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-594 DeviceMajor:0 DeviceMinor:594 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:016383ed2ea8228 MacAddress:3a:ef:4e:b5:51:5e Speed:10000 Mtu:8900} {Name:07ab0c66a64f7bf MacAddress:7e:22:21:1e:a0:6e Speed:10000 Mtu:8900} {Name:1efe23c09252f4c MacAddress:9e:16:b8:49:0d:12 Speed:10000 Mtu:8900} {Name:22b260c86b95c08 MacAddress:c2:a9:1e:39:1f:b1 Speed:10000 Mtu:8900} {Name:230edebdcb314d2 MacAddress:06:35:f8:75:09:19 Speed:10000 Mtu:8900} {Name:2753215bec4df07 MacAddress:e6:b3:f5:54:e6:93 Speed:10000 Mtu:8900} {Name:2939a6d3195afe0 MacAddress:d6:ef:11:9c:77:b0 Speed:10000 Mtu:8900} {Name:36a9c5c55aaa067 MacAddress:22:a9:66:15:6c:f0 Speed:10000 Mtu:8900} {Name:3850c530da1325c MacAddress:72:a7:af:8c:b2:7d Speed:10000 Mtu:8900} {Name:39f34c1f903429d MacAddress:16:50:21:d6:25:f6 Speed:10000 Mtu:8900} {Name:3b679ceb3c60d85 MacAddress:32:c2:50:ef:85:2f Speed:10000 Mtu:8900} {Name:41da80af31fef99 MacAddress:b2:f0:05:78:6d:d6 Speed:10000 Mtu:8900} {Name:594d4a59acf0a0d MacAddress:f2:2f:84:a0:6c:72 Speed:10000 Mtu:8900} {Name:5d787dbd681a850 MacAddress:72:4c:ed:3e:5b:2f Speed:10000 Mtu:8900} {Name:62f87c779c80aac MacAddress:e2:b4:4c:fa:50:aa Speed:10000 Mtu:8900} {Name:681e9cfa9d99b67 MacAddress:4a:4b:99:ce:1e:1e Speed:10000 Mtu:8900} {Name:6855c26bf134f97 MacAddress:ca:1b:2e:bf:ee:57 Speed:10000 Mtu:8900} {Name:697aebe4c541702 
MacAddress:82:16:b8:29:68:79 Speed:10000 Mtu:8900} {Name:6bd8b74e410d81f MacAddress:a6:8b:a0:6c:b2:eb Speed:10000 Mtu:8900} {Name:726dac522b33819 MacAddress:ce:41:8f:94:21:ee Speed:10000 Mtu:8900} {Name:7b2841761444793 MacAddress:2e:54:dc:a4:d2:02 Speed:10000 Mtu:8900} {Name:7e0345d8f514108 MacAddress:2e:11:08:c3:3c:3f Speed:10000 Mtu:8900} {Name:819894978d4b63b MacAddress:6a:0f:5d:cb:b8:f5 Speed:10000 Mtu:8900} {Name:8381cd7a6e5c885 MacAddress:d6:9d:a9:f8:33:19 Speed:10000 Mtu:8900} {Name:86bb0fefbe9a707 MacAddress:62:f5:4a:7e:fe:e6 Speed:10000 Mtu:8900} {Name:8a589501a96ed1e MacAddress:06:d5:3d:cb:ec:e7 Speed:10000 Mtu:8900} {Name:8d76a48b181c0cd MacAddress:fe:3b:de:c5:2e:fa Speed:10000 Mtu:8900} {Name:90cc2b02445555c MacAddress:c6:41:eb:68:2a:72 Speed:10000 Mtu:8900} {Name:93f149a1ecb7aac MacAddress:fe:e2:59:3d:c7:61 Speed:10000 Mtu:8900} {Name:9c6ba19a43312e7 MacAddress:2e:56:81:01:73:02 Speed:10000 Mtu:8900} {Name:a06a3f0fb54d186 MacAddress:ce:3a:7c:2c:4a:1b Speed:10000 Mtu:8900} {Name:a0f14825defb92b MacAddress:a6:4b:c4:9f:b6:7b Speed:10000 Mtu:8900} {Name:b5e733421a55342 MacAddress:2e:33:97:41:42:99 Speed:10000 Mtu:8900} {Name:b84bd85aac3ddf4 MacAddress:da:43:12:4c:20:02 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:ba:7c:70:ac:0a:a4 Speed:0 Mtu:8900} {Name:c23a831f572d860 MacAddress:7e:02:4d:8b:ea:26 Speed:10000 Mtu:8900} {Name:c73523c110a89aa MacAddress:06:00:29:2e:85:4c Speed:10000 Mtu:8900} {Name:c9ef5e66c74bafc MacAddress:d2:f0:7e:fd:c9:73 Speed:10000 Mtu:8900} {Name:cc45ef13b745a75 MacAddress:ae:c6:96:db:1c:b0 Speed:10000 Mtu:8900} {Name:ce5639dc0f602d1 MacAddress:1e:33:a2:1e:eb:a2 Speed:10000 Mtu:8900} {Name:cfbf03c8cc7b89c MacAddress:86:60:2f:f3:02:10 Speed:10000 Mtu:8900} {Name:d451cc909e96cb9 MacAddress:ca:e7:70:a0:07:26 Speed:10000 Mtu:8900} {Name:dd1b805aae172e1 MacAddress:f6:4d:7a:90:8b:54 Speed:10000 Mtu:8900} {Name:dfc93735e306184 MacAddress:3a:ce:c5:b4:04:ea Speed:10000 Mtu:8900} 
{Name:e34a7d43723491c MacAddress:ba:d5:55:cd:42:41 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:91:e0:f5 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:ff:27:ac Speed:-1 Mtu:9000} {Name:f7dc5373fa76e1d MacAddress:0e:60:32:6b:5b:a4 Speed:10000 Mtu:8900} {Name:fecfc938509f77a MacAddress:76:2e:7b:78:46:76 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:96:16:48:af:1f:d9 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 
Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.975655 30278 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.975731 30278 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.975956 30278 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.976112 30278 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.976145 30278 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"P
ercentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.976330 30278 topology_manager.go:138] "Creating topology manager with none policy" Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.976339 30278 container_manager_linux.go:303] "Creating device plugin manager" Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.976347 30278 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.976367 30278 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.976400 30278 state_mem.go:36] "Initialized new in-memory state store" Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.976483 30278 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.976532 30278 kubelet.go:418] "Attempting to sync node with API server" Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.976543 30278 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.976555 30278 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.976566 30278 kubelet.go:324] "Adding apiserver pod source" Mar 
18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.976576 30278 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.978158 30278 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 18 18:00:30.978432 master-0 kubenswrapper[30278]: I0318 18:00:30.978353 30278 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 18 18:00:30.979455 master-0 kubenswrapper[30278]: I0318 18:00:30.979411 30278 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 18 18:00:30.981337 master-0 kubenswrapper[30278]: I0318 18:00:30.979577 30278 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 18 18:00:30.981337 master-0 kubenswrapper[30278]: I0318 18:00:30.979604 30278 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 18 18:00:30.981337 master-0 kubenswrapper[30278]: I0318 18:00:30.979613 30278 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 18 18:00:30.981337 master-0 kubenswrapper[30278]: I0318 18:00:30.979621 30278 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 18 18:00:30.981337 master-0 kubenswrapper[30278]: I0318 18:00:30.979630 30278 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 18 18:00:30.981337 master-0 kubenswrapper[30278]: I0318 18:00:30.979639 30278 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 18 18:00:30.981337 master-0 kubenswrapper[30278]: I0318 18:00:30.979648 30278 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 18 18:00:30.981337 master-0 kubenswrapper[30278]: I0318 18:00:30.979656 30278 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Mar 18 18:00:30.981337 master-0 kubenswrapper[30278]: I0318 18:00:30.979667 30278 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 18 18:00:30.981337 master-0 kubenswrapper[30278]: I0318 18:00:30.979675 30278 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 18 18:00:30.981337 master-0 kubenswrapper[30278]: I0318 18:00:30.979687 30278 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 18 18:00:30.981337 master-0 kubenswrapper[30278]: I0318 18:00:30.979701 30278 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 18 18:00:30.981337 master-0 kubenswrapper[30278]: I0318 18:00:30.979726 30278 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 18 18:00:30.981337 master-0 kubenswrapper[30278]: I0318 18:00:30.980176 30278 server.go:1280] "Started kubelet" Mar 18 18:00:30.987383 master-0 kubenswrapper[30278]: I0318 18:00:30.982394 30278 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 18 18:00:30.987383 master-0 kubenswrapper[30278]: I0318 18:00:30.982515 30278 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 18 18:00:30.987383 master-0 kubenswrapper[30278]: I0318 18:00:30.982988 30278 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 18 18:00:30.987383 master-0 kubenswrapper[30278]: I0318 18:00:30.983062 30278 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 18 18:00:30.982681 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 18 18:00:30.998946 master-0 kubenswrapper[30278]: I0318 18:00:30.998898 30278 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 18 18:00:30.999028 master-0 kubenswrapper[30278]: I0318 18:00:30.998996 30278 server.go:449] "Adding debug handlers to kubelet server" Mar 18 18:00:30.999246 master-0 kubenswrapper[30278]: I0318 18:00:30.999212 30278 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 18:00:31.007061 master-0 kubenswrapper[30278]: I0318 18:00:31.007003 30278 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 18 18:00:31.007061 master-0 kubenswrapper[30278]: I0318 18:00:31.007067 30278 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 18 18:00:31.007531 master-0 kubenswrapper[30278]: I0318 18:00:31.007460 30278 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 17:31:47 +0000 UTC, rotation deadline is 2026-03-19 10:55:09.334772382 +0000 UTC Mar 18 18:00:31.007531 master-0 kubenswrapper[30278]: I0318 18:00:31.007520 30278 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 16h54m38.32725507s for next certificate rotation Mar 18 18:00:31.019389 master-0 kubenswrapper[30278]: I0318 18:00:31.019205 30278 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 18 18:00:31.019389 master-0 kubenswrapper[30278]: I0318 18:00:31.019237 30278 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 18 18:00:31.019389 master-0 kubenswrapper[30278]: I0318 18:00:31.019386 30278 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 18 18:00:31.023643 master-0 kubenswrapper[30278]: E0318 18:00:31.023594 30278 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 18 18:00:31.024875 master-0 kubenswrapper[30278]: I0318 18:00:31.024832 30278 factory.go:55] Registering systemd factory Mar 18 18:00:31.024875 master-0 kubenswrapper[30278]: I0318 18:00:31.024877 30278 factory.go:221] Registration of the systemd container factory successfully Mar 18 18:00:31.025237 master-0 kubenswrapper[30278]: I0318 18:00:31.025202 30278 factory.go:153] Registering CRI-O factory Mar 18 18:00:31.025237 master-0 kubenswrapper[30278]: I0318 18:00:31.025224 30278 factory.go:221] Registration of the crio container factory successfully Mar 18 18:00:31.028757 master-0 kubenswrapper[30278]: I0318 18:00:31.028728 30278 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 18 18:00:31.028898 master-0 kubenswrapper[30278]: I0318 18:00:31.028781 30278 factory.go:103] Registering Raw factory Mar 18 18:00:31.028898 master-0 kubenswrapper[30278]: I0318 18:00:31.028819 30278 manager.go:1196] Started watching for new ooms in manager Mar 18 18:00:31.031120 master-0 kubenswrapper[30278]: I0318 18:00:31.029497 30278 manager.go:319] Starting recovery of all containers Mar 18 18:00:31.033209 master-0 kubenswrapper[30278]: I0318 18:00:31.033171 30278 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037806 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b0e38f3-3ab5-4519-86a6-68003deb94da" volumeName="kubernetes.io/projected/5b0e38f3-3ab5-4519-86a6-68003deb94da-kube-api-access-grnqn" seLinuxMountContext="" Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037860 30278 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e64a377-f497-4416-8f22-d5c7f52e0b65" volumeName="kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-bound-sa-token" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037875 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92153864-7959-4482-bf24-c8db36435fb5" volumeName="kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-auth-proxy-config" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037884 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="994fff04-c1d7-4f10-8d4b-6b49a6934829" volumeName="kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-script-lib" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037892 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="994fff04-c1d7-4f10-8d4b-6b49a6934829" volumeName="kubernetes.io/projected/994fff04-c1d7-4f10-8d4b-6b49a6934829-kube-api-access-9lwsm" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037902 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="30d77a7c-222e-41c7-8a98-219854aa3da2" volumeName="kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-etcd-client" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037918 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="427e5ce9-f4b3-4f12-bb77-2b13775aa334" volumeName="kubernetes.io/empty-dir/427e5ce9-f4b3-4f12-bb77-2b13775aa334-catalog-content" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037926 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59407fdf-b1e9-4992-a3c8-54b4e26f496c" volumeName="kubernetes.io/projected/59407fdf-b1e9-4992-a3c8-54b4e26f496c-kube-api-access-9dt8f" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037936 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99e215da-759d-4fff-af65-0fb64245fbd0" volumeName="kubernetes.io/projected/99e215da-759d-4fff-af65-0fb64245fbd0-kube-api-access-n8k5q" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037944 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c0dbd44-7669-41d6-bf1b-d8c1343c9d98" volumeName="kubernetes.io/configmap/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-metrics-client-ca" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037953 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fdab27a1-1d7a-4dc5-b828-eba3f57592dd" volumeName="kubernetes.io/projected/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-kube-api-access" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037961 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04cef0bd-f365-4bf6-864a-1895995015d6" volumeName="kubernetes.io/configmap/04cef0bd-f365-4bf6-864a-1895995015d6-cco-trusted-ca" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037970 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1db0a246-ca43-4e7c-b09e-e80218ae99b1" volumeName="kubernetes.io/secret/1db0a246-ca43-4e7c-b09e-e80218ae99b1-serving-cert" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037981 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="26575d68-0488-4dfa-a5d0-5016e481dba6" volumeName="kubernetes.io/projected/26575d68-0488-4dfa-a5d0-5016e481dba6-kube-api-access" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037989 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="30d77a7c-222e-41c7-8a98-219854aa3da2" volumeName="kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-config" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.037998 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43fab0f2-5cfd-4b5e-a632-728fd5b960fd" volumeName="kubernetes.io/projected/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-kube-api-access-rsj86" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038006 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7047a862-8cbe-46fb-9af3-06ba224cbe26" volumeName="kubernetes.io/projected/7047a862-8cbe-46fb-9af3-06ba224cbe26-kube-api-access-4g42g" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038014 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e64a377-f497-4416-8f22-d5c7f52e0b65" volumeName="kubernetes.io/configmap/7e64a377-f497-4416-8f22-d5c7f52e0b65-trusted-ca" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038024 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0100a259-1358-45e8-8191-4e1f9a14ec89" volumeName="kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-service-ca" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038033 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d969530-c138-4fb7-9bfe-0825be66c009" volumeName="kubernetes.io/configmap/1d969530-c138-4fb7-9bfe-0825be66c009-iptables-alerter-script" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038041 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="427e5ce9-f4b3-4f12-bb77-2b13775aa334" volumeName="kubernetes.io/projected/427e5ce9-f4b3-4f12-bb77-2b13775aa334-kube-api-access-z5jd4" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038049 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59407fdf-b1e9-4992-a3c8-54b4e26f496c" volumeName="kubernetes.io/configmap/59407fdf-b1e9-4992-a3c8-54b4e26f496c-config-volume" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038059 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b424d6c-7440-4c98-ac19-2d0642c696fd" volumeName="kubernetes.io/configmap/9b424d6c-7440-4c98-ac19-2d0642c696fd-config" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038067 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b424d6c-7440-4c98-ac19-2d0642c696fd" volumeName="kubernetes.io/projected/9b424d6c-7440-4c98-ac19-2d0642c696fd-kube-api-access" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038076 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c57f282a-829b-41b2-827a-f4bc598245a2" volumeName="kubernetes.io/configmap/c57f282a-829b-41b2-827a-f4bc598245a2-service-ca-bundle" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038085 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4c75bee-d0d2-4261-8f89-8c3375dbd868" volumeName="kubernetes.io/projected/d4c75bee-d0d2-4261-8f89-8c3375dbd868-kube-api-access-bz8rf" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038097 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0100a259-1358-45e8-8191-4e1f9a14ec89" volumeName="kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-config" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038107 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d21e77e-8b61-4f03-8f17-941b7a1d8b1d" volumeName="kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-images" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038117 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7f76afa-4b23-421c-8451-46323813f06e" volumeName="kubernetes.io/projected/e7f76afa-4b23-421c-8451-46323813f06e-kube-api-access-gzhsq" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038126 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f26e239-2988-4faa-bc1d-24b15b95b7f1" volumeName="kubernetes.io/configmap/6f26e239-2988-4faa-bc1d-24b15b95b7f1-trusted-ca" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038156 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e64a377-f497-4416-8f22-d5c7f52e0b65" volumeName="kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-kube-api-access-sclm5" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038165 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b424d6c-7440-4c98-ac19-2d0642c696fd" volumeName="kubernetes.io/secret/9b424d6c-7440-4c98-ac19-2d0642c696fd-serving-cert" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038174 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3267271-e0c5-45d6-980c-d78e4f9eef35" volumeName="kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-auth-proxy-config" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038183 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="253ec853-f637-4aa4-8e8e-eb655dfccccb" volumeName="kubernetes.io/secret/253ec853-f637-4aa4-8e8e-eb655dfccccb-serving-cert" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038209 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43fab0f2-5cfd-4b5e-a632-728fd5b960fd" volumeName="kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-serving-cert" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038218 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b94e08c-7944-445e-bfb7-6c7c14880c65" volumeName="kubernetes.io/secret/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038227 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c6694a8-ccd0-491b-9f21-215450f6ce67" volumeName="kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038235 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92153864-7959-4482-bf24-c8db36435fb5" volumeName="kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-config" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038243 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="994fff04-c1d7-4f10-8d4b-6b49a6934829" volumeName="kubernetes.io/secret/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038252 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a240ab7-a1d5-4e9a-96f3-4590681cc7ed" volumeName="kubernetes.io/secret/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-serving-cert" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038259 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c355c750-ae2f-49fa-9a16-8fb4f688853e" volumeName="kubernetes.io/secret/c355c750-ae2f-49fa-9a16-8fb4f688853e-serving-cert" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038284 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="30d77a7c-222e-41c7-8a98-219854aa3da2" volumeName="kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-image-import-ca" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038298 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="489dd872-39c3-4ce2-8dc1-9d0552b88616" volumeName="kubernetes.io/empty-dir/489dd872-39c3-4ce2-8dc1-9d0552b88616-catalog-content" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038310 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc110414-3a6b-474c-bce3-33450cab8fcd" volumeName="kubernetes.io/projected/dc110414-3a6b-474c-bce3-33450cab8fcd-kube-api-access-mnl7c" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038324 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efbcb147-d077-4749-9289-1682daccb657" volumeName="kubernetes.io/projected/efbcb147-d077-4749-9289-1682daccb657-kube-api-access-vqrdl" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038334 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fdab27a1-1d7a-4dc5-b828-eba3f57592dd" volumeName="kubernetes.io/configmap/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-service-ca" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038345 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c38c5f03-a753-49f4-ab06-33e75a03bd45" volumeName="kubernetes.io/secret/c38c5f03-a753-49f4-ab06-33e75a03bd45-cluster-storage-operator-serving-cert" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038379 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4c75bee-d0d2-4261-8f89-8c3375dbd868" volumeName="kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-service-ca-bundle" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038392 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a4f94f3-d63a-4869-b723-ae9637610b4b" volumeName="kubernetes.io/projected/5a4f94f3-d63a-4869-b723-ae9637610b4b-kube-api-access-hgnz6" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038407 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8db04037-c7cc-4246-92c3-6e7985384b14" volumeName="kubernetes.io/empty-dir/8db04037-c7cc-4246-92c3-6e7985384b14-tmpfs" seLinuxMountContext=""
Mar 18 18:00:31.038368 master-0 kubenswrapper[30278]: I0318 18:00:31.038419 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8db04037-c7cc-4246-92c3-6e7985384b14" volumeName="kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-apiservice-cert" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038430 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c0dbd44-7669-41d6-bf1b-d8c1343c9d98" volumeName="kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038450 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3385316-45f0-46c5-ac82-683168db5878" volumeName="kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-node-bootstrap-token" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038462 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efbcb147-d077-4749-9289-1682daccb657" volumeName="kubernetes.io/projected/efbcb147-d077-4749-9289-1682daccb657-ca-certs" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038475 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d969530-c138-4fb7-9bfe-0825be66c009" volumeName="kubernetes.io/projected/1d969530-c138-4fb7-9bfe-0825be66c009-kube-api-access-cd868" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038487 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a3a6c2c-78e7-41f3-acff-20173cbc012a" volumeName="kubernetes.io/projected/3a3a6c2c-78e7-41f3-acff-20173cbc012a-kube-api-access" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038500 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fcf459dc-bd30-4143-b5c4-60fd01b46548" volumeName="kubernetes.io/secret/fcf459dc-bd30-4143-b5c4-60fd01b46548-proxy-tls" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038512 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fea7b899-fde4-4463-9520-4d433a8ebe21" volumeName="kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-binary-copy" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038524 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efd0d6b1-652c-44b2-b918-5c7ced5d15c3" volumeName="kubernetes.io/projected/efd0d6b1-652c-44b2-b918-5c7ced5d15c3-kube-api-access-5wkqk" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038568 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7ff61c7-32d1-4407-a792-8e22bb4d50f9" volumeName="kubernetes.io/secret/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-serving-cert" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038583 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56cde2f7-1742-45d6-aa22-8270cfb424a7" volumeName="kubernetes.io/empty-dir/56cde2f7-1742-45d6-aa22-8270cfb424a7-cache" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038593 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a240ab7-a1d5-4e9a-96f3-4590681cc7ed" volumeName="kubernetes.io/configmap/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-config" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038605 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0100a259-1358-45e8-8191-4e1f9a14ec89" volumeName="kubernetes.io/projected/0100a259-1358-45e8-8191-4e1f9a14ec89-kube-api-access-tnknt" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038633 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1db0a246-ca43-4e7c-b09e-e80218ae99b1" volumeName="kubernetes.io/projected/1db0a246-ca43-4e7c-b09e-e80218ae99b1-kube-api-access-n9g8f" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038645 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a3a6c2c-78e7-41f3-acff-20173cbc012a" volumeName="kubernetes.io/configmap/3a3a6c2c-78e7-41f3-acff-20173cbc012a-config" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038659 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a3a6c2c-78e7-41f3-acff-20173cbc012a" volumeName="kubernetes.io/secret/3a3a6c2c-78e7-41f3-acff-20173cbc012a-serving-cert" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038671 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="489dd872-39c3-4ce2-8dc1-9d0552b88616" volumeName="kubernetes.io/empty-dir/489dd872-39c3-4ce2-8dc1-9d0552b88616-utilities" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038684 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9875ed82-813c-483d-8471-8f9b74b774ee" volumeName="kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-ovnkube-identity-cm" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038705 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="994fff04-c1d7-4f10-8d4b-6b49a6934829" volumeName="kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-env-overrides" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038717 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c57f282a-829b-41b2-827a-f4bc598245a2" volumeName="kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-stats-auth" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038729 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="253ec853-f637-4aa4-8e8e-eb655dfccccb" volumeName="kubernetes.io/projected/253ec853-f637-4aa4-8e8e-eb655dfccccb-kube-api-access-cx596" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038764 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="26575d68-0488-4dfa-a5d0-5016e481dba6" volumeName="kubernetes.io/secret/26575d68-0488-4dfa-a5d0-5016e481dba6-serving-cert" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038775 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc110414-3a6b-474c-bce3-33450cab8fcd" volumeName="kubernetes.io/empty-dir/dc110414-3a6b-474c-bce3-33450cab8fcd-utilities" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038819 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9875ed82-813c-483d-8471-8f9b74b774ee" volumeName="kubernetes.io/projected/9875ed82-813c-483d-8471-8f9b74b774ee-kube-api-access-76j8w" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038833 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59407fdf-b1e9-4992-a3c8-54b4e26f496c" volumeName="kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038846 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822080a5-2926-4a51-866d-86bb0b437da2" volumeName="kubernetes.io/empty-dir/822080a5-2926-4a51-866d-86bb0b437da2-etc-tuned" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038860 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="30d77a7c-222e-41c7-8a98-219854aa3da2" volumeName="kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-trusted-ca-bundle" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038881 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="30d77a7c-222e-41c7-8a98-219854aa3da2" volumeName="kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-etcd-serving-ca" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038901 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="30d77a7c-222e-41c7-8a98-219854aa3da2" volumeName="kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-encryption-config" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038917 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43fab0f2-5cfd-4b5e-a632-728fd5b960fd" volumeName="kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-etcd-client" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038938 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9a240ab7-a1d5-4e9a-96f3-4590681cc7ed" volumeName="kubernetes.io/projected/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-kube-api-access-l5tw2" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038949 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3385316-45f0-46c5-ac82-683168db5878" volumeName="kubernetes.io/projected/b3385316-45f0-46c5-ac82-683168db5878-kube-api-access-wd9sc" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038961 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0100a259-1358-45e8-8191-4e1f9a14ec89" volumeName="kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-ca" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038976 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b9ff55a-73fb-473f-b406-1f8b6cffdb89" volumeName="kubernetes.io/configmap/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-config" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.038992 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de189d27-4c60-49f1-9119-d1fde5c37b1e" volumeName="kubernetes.io/projected/de189d27-4c60-49f1-9119-d1fde5c37b1e-kube-api-access-tf476" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039005 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7ff61c7-32d1-4407-a792-8e22bb4d50f9" volumeName="kubernetes.io/configmap/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-config" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039017 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822080a5-2926-4a51-866d-86bb0b437da2" volumeName="kubernetes.io/empty-dir/822080a5-2926-4a51-866d-86bb0b437da2-tmp" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039030 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c087ce06-a16b-41f4-ba93-8fccdee09003" volumeName="kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-service-ca-bundle" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039043 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4c75bee-d0d2-4261-8f89-8c3375dbd868" volumeName="kubernetes.io/empty-dir/d4c75bee-d0d2-4261-8f89-8c3375dbd868-snapshots" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039055 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fea7b899-fde4-4463-9520-4d433a8ebe21" volumeName="kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039072 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="30d77a7c-222e-41c7-8a98-219854aa3da2" volumeName="kubernetes.io/projected/30d77a7c-222e-41c7-8a98-219854aa3da2-kube-api-access-x47z7" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039086 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d39d93e-9be3-47e1-a44e-be2d18b55446" volumeName="kubernetes.io/projected/7d39d93e-9be3-47e1-a44e-be2d18b55446-kube-api-access-vkcx9" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039098 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce5831a6-5a8d-4cda-9299-5d86437bcab2" volumeName="kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039110 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14a0661b-7bde-4e22-a9a9-5e3fb24df77f" volumeName="kubernetes.io/projected/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-kube-api-access-2pqww" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039122 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d72bb42-1ee6-4f61-9515-d1c5bafa896f" volumeName="kubernetes.io/projected/7d72bb42-1ee6-4f61-9515-d1c5bafa896f-kube-api-access-ljbl7" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039135 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99e215da-759d-4fff-af65-0fb64245fbd0" volumeName="kubernetes.io/empty-dir/99e215da-759d-4fff-af65-0fb64245fbd0-operand-assets" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039148 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99e215da-759d-4fff-af65-0fb64245fbd0" volumeName="kubernetes.io/secret/99e215da-759d-4fff-af65-0fb64245fbd0-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039161 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c087ce06-a16b-41f4-ba93-8fccdee09003" volumeName="kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-config" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039172 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1db0a246-ca43-4e7c-b09e-e80218ae99b1" volumeName="kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-proxy-ca-bundles" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039183 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89e6c3d6-7bd5-4df6-90db-3a349f644afb" volumeName="kubernetes.io/configmap/89e6c3d6-7bd5-4df6-90db-3a349f644afb-mcc-auth-proxy-config" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039197 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1352cc7-4099-44c5-9c31-8259fb783bc7" volumeName="kubernetes.io/projected/b1352cc7-4099-44c5-9c31-8259fb783bc7-kube-api-access-9pp5f" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039208 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fea7b899-fde4-4463-9520-4d433a8ebe21" volumeName="kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-whereabouts-flatfile-configmap" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039219 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="427e5ce9-f4b3-4f12-bb77-2b13775aa334" volumeName="kubernetes.io/empty-dir/427e5ce9-f4b3-4f12-bb77-2b13775aa334-utilities" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039230 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8db04037-c7cc-4246-92c3-6e7985384b14" volumeName="kubernetes.io/projected/8db04037-c7cc-4246-92c3-6e7985384b14-kube-api-access-fglbh" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039252 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f7bff-ad61-4c53-a8eb-000a13f26971" volumeName="kubernetes.io/configmap/a94f7bff-ad61-4c53-a8eb-000a13f26971-auth-proxy-config" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039266 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3267271-e0c5-45d6-980c-d78e4f9eef35" volumeName="kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-images" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039301 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c355c750-ae2f-49fa-9a16-8fb4f688853e" volumeName="kubernetes.io/configmap/c355c750-ae2f-49fa-9a16-8fb4f688853e-config" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039316 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c38c5f03-a753-49f4-ab06-33e75a03bd45" volumeName="kubernetes.io/projected/c38c5f03-a753-49f4-ab06-33e75a03bd45-kube-api-access-d8d74" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039330 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dba5f8d7-4d25-42b5-9c58-813221bf96bb" volumeName="kubernetes.io/projected/dba5f8d7-4d25-42b5-9c58-813221bf96bb-kube-api-access-lmsm4" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039341 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efbcb147-d077-4749-9289-1682daccb657" volumeName="kubernetes.io/empty-dir/efbcb147-d077-4749-9289-1682daccb657-cache" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039356 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0100a259-1358-45e8-8191-4e1f9a14ec89" volumeName="kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-serving-cert" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039377 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c6694a8-ccd0-491b-9f21-215450f6ce67" volumeName="kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039389 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7ff61c7-32d1-4407-a792-8e22bb4d50f9" volumeName="kubernetes.io/projected/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-kube-api-access-nf82n" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039401 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56cde2f7-1742-45d6-aa22-8270cfb424a7" volumeName="kubernetes.io/projected/56cde2f7-1742-45d6-aa22-8270cfb424a7-ca-certs" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039413 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1352cc7-4099-44c5-9c31-8259fb783bc7" volumeName="kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039423 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc110414-3a6b-474c-bce3-33450cab8fcd" volumeName="kubernetes.io/empty-dir/dc110414-3a6b-474c-bce3-33450cab8fcd-catalog-content" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039438 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14a0661b-7bde-4e22-a9a9-5e3fb24df77f" volumeName="kubernetes.io/secret/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-metrics-tls" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039452 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43fab0f2-5cfd-4b5e-a632-728fd5b960fd" volumeName="kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-audit-policies" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039463 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c087ce06-a16b-41f4-ba93-8fccdee09003" volumeName="kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-trusted-ca-bundle" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039476 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce5831a6-5a8d-4cda-9299-5d86437bcab2" volumeName="kubernetes.io/configmap/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-trusted-ca" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039489 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73f2834-c56c-4cef-ac3c-2317e9a4324c" volumeName="kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039501 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a4f94f3-d63a-4869-b723-ae9637610b4b" volumeName="kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039512 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="978dcca6-b396-463f-9614-9e24194a1aaa" volumeName="kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039525 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c355c750-ae2f-49fa-9a16-8fb4f688853e" volumeName="kubernetes.io/projected/c355c750-ae2f-49fa-9a16-8fb4f688853e-kube-api-access-zfnqp" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039536 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b94e08c-7944-445e-bfb7-6c7c14880c65" volumeName="kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-env-overrides" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039549 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e2d0d0d-54ca-475b-be8a-4eb6d4434e74" volumeName="kubernetes.io/secret/9e2d0d0d-54ca-475b-be8a-4eb6d4434e74-tls-certificates" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039561 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f26e239-2988-4faa-bc1d-24b15b95b7f1"
volumeName="kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-kube-api-access-5sl7p" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039575 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c6694a8-ccd0-491b-9f21-215450f6ce67" volumeName="kubernetes.io/configmap/7c6694a8-ccd0-491b-9f21-215450f6ce67-trusted-ca" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039587 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89e6c3d6-7bd5-4df6-90db-3a349f644afb" volumeName="kubernetes.io/projected/89e6c3d6-7bd5-4df6-90db-3a349f644afb-kube-api-access-88hkw" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039600 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cb522b02-0b93-4711-9041-566daa06b95a" volumeName="kubernetes.io/empty-dir/cb522b02-0b93-4711-9041-566daa06b95a-available-featuregates" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039613 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fcf459dc-bd30-4143-b5c4-60fd01b46548" volumeName="kubernetes.io/configmap/fcf459dc-bd30-4143-b5c4-60fd01b46548-mcd-auth-proxy-config" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039626 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="26575d68-0488-4dfa-a5d0-5016e481dba6" volumeName="kubernetes.io/configmap/26575d68-0488-4dfa-a5d0-5016e481dba6-config" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039640 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43fab0f2-5cfd-4b5e-a632-728fd5b960fd" volumeName="kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-etcd-serving-ca" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039652 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b94e08c-7944-445e-bfb7-6c7c14880c65" volumeName="kubernetes.io/projected/7b94e08c-7944-445e-bfb7-6c7c14880c65-kube-api-access-g4zcv" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039663 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c57f282a-829b-41b2-827a-f4bc598245a2" volumeName="kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-default-certificate" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039675 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9e04572-1425-440e-9869-6deef05e13e3" volumeName="kubernetes.io/projected/e9e04572-1425-440e-9869-6deef05e13e3-kube-api-access-t92bz" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039687 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab" volumeName="kubernetes.io/projected/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-kube-api-access-rf2qx" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039701 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" volumeName="kubernetes.io/projected/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-kube-api-access-njx6n" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039713 30278 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="253ec853-f637-4aa4-8e8e-eb655dfccccb" volumeName="kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-config" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039726 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43fab0f2-5cfd-4b5e-a632-728fd5b960fd" volumeName="kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-trusted-ca-bundle" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039738 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92153864-7959-4482-bf24-c8db36435fb5" volumeName="kubernetes.io/projected/92153864-7959-4482-bf24-c8db36435fb5-kube-api-access-sb496" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039752 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c087ce06-a16b-41f4-ba93-8fccdee09003" volumeName="kubernetes.io/secret/c087ce06-a16b-41f4-ba93-8fccdee09003-serving-cert" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039766 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce5831a6-5a8d-4cda-9299-5d86437bcab2" volumeName="kubernetes.io/projected/ce5831a6-5a8d-4cda-9299-5d86437bcab2-kube-api-access-756j8" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039779 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fea7b899-fde4-4463-9520-4d433a8ebe21" volumeName="kubernetes.io/projected/fea7b899-fde4-4463-9520-4d433a8ebe21-kube-api-access-ts9b9" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039790 30278 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="04cef0bd-f365-4bf6-864a-1895995015d6" volumeName="kubernetes.io/projected/04cef0bd-f365-4bf6-864a-1895995015d6-kube-api-access-qlhls" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039802 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b9ff55a-73fb-473f-b406-1f8b6cffdb89" volumeName="kubernetes.io/projected/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-kube-api-access-2tvgq" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039814 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37b3753f-bf4f-4a9e-a4a8-d58296bada79" volumeName="kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039826 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b94e08c-7944-445e-bfb7-6c7c14880c65" volumeName="kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovnkube-config" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039842 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e64a377-f497-4416-8f22-d5c7f52e0b65" volumeName="kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039856 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3267271-e0c5-45d6-980c-d78e4f9eef35" volumeName="kubernetes.io/projected/c3267271-e0c5-45d6-980c-d78e4f9eef35-kube-api-access-z7xqg" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039869 30278 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" volumeName="kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-images" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039885 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37b3753f-bf4f-4a9e-a4a8-d58296bada79" volumeName="kubernetes.io/projected/37b3753f-bf4f-4a9e-a4a8-d58296bada79-kube-api-access-zwlxb" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039897 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56cde2f7-1742-45d6-aa22-8270cfb424a7" volumeName="kubernetes.io/secret/56cde2f7-1742-45d6-aa22-8270cfb424a7-catalogserver-certs" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039910 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9875ed82-813c-483d-8471-8f9b74b774ee" volumeName="kubernetes.io/secret/9875ed82-813c-483d-8471-8f9b74b774ee-webhook-cert" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039922 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3385316-45f0-46c5-ac82-683168db5878" volumeName="kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-certs" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039935 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c57f282a-829b-41b2-827a-f4bc598245a2" volumeName="kubernetes.io/projected/c57f282a-829b-41b2-827a-f4bc598245a2-kube-api-access-d6c68" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039947 30278 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="d26d4515-391e-41a5-8c82-1b2b8a375662" volumeName="kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039961 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="253ec853-f637-4aa4-8e8e-eb655dfccccb" volumeName="kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-client-ca" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039975 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4460d3d3-c55f-4f1c-a623-e3feccf937bb" volumeName="kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.039987 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43fab0f2-5cfd-4b5e-a632-728fd5b960fd" volumeName="kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-encryption-config" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040000 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b0e38f3-3ab5-4519-86a6-68003deb94da" volumeName="kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-cni-binary-copy" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040013 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311" volumeName="kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040025 30278 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="c57f282a-829b-41b2-827a-f4bc598245a2" volumeName="kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-metrics-certs" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040037 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cb522b02-0b93-4711-9041-566daa06b95a" volumeName="kubernetes.io/secret/cb522b02-0b93-4711-9041-566daa06b95a-serving-cert" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040052 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26d4515-391e-41a5-8c82-1b2b8a375662" volumeName="kubernetes.io/projected/d26d4515-391e-41a5-8c82-1b2b8a375662-kube-api-access-bm8jj" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040064 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" volumeName="kubernetes.io/secret/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-cloud-controller-manager-operator-tls" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040085 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37b3753f-bf4f-4a9e-a4a8-d58296bada79" volumeName="kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-config" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040104 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7f76afa-4b23-421c-8451-46323813f06e" volumeName="kubernetes.io/secret/e7f76afa-4b23-421c-8451-46323813f06e-webhook-certs" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040118 30278 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="e9e04572-1425-440e-9869-6deef05e13e3" volumeName="kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040131 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fdab27a1-1d7a-4dc5-b828-eba3f57592dd" volumeName="kubernetes.io/secret/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-serving-cert" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040144 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d4c75bee-d0d2-4261-8f89-8c3375dbd868" volumeName="kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-trusted-ca-bundle" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040158 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73f2834-c56c-4cef-ac3c-2317e9a4324c" volumeName="kubernetes.io/projected/e73f2834-c56c-4cef-ac3c-2317e9a4324c-kube-api-access-qwps9" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040177 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37b3753f-bf4f-4a9e-a4a8-d58296bada79" volumeName="kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040211 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4460d3d3-c55f-4f1c-a623-e3feccf937bb" volumeName="kubernetes.io/empty-dir/4460d3d3-c55f-4f1c-a623-e3feccf937bb-utilities" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040242 30278 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="9c0dbd44-7669-41d6-bf1b-d8c1343c9d98" volumeName="kubernetes.io/projected/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-kube-api-access-qbdth" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040256 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cb522b02-0b93-4711-9041-566daa06b95a" volumeName="kubernetes.io/projected/cb522b02-0b93-4711-9041-566daa06b95a-kube-api-access-fk59q" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040288 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab" volumeName="kubernetes.io/secret/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-signing-key" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040303 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b9ff55a-73fb-473f-b406-1f8b6cffdb89" volumeName="kubernetes.io/secret/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-serving-cert" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040314 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="30d77a7c-222e-41c7-8a98-219854aa3da2" volumeName="kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-audit" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040327 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822080a5-2926-4a51-866d-86bb0b437da2" volumeName="kubernetes.io/projected/822080a5-2926-4a51-866d-86bb0b437da2-kube-api-access-f48gg" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040343 30278 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311" volumeName="kubernetes.io/projected/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-kube-api-access-clm4b" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040354 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8db04037-c7cc-4246-92c3-6e7985384b14" volumeName="kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-webhook-cert" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040366 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9875ed82-813c-483d-8471-8f9b74b774ee" volumeName="kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-env-overrides" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040379 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f7bff-ad61-4c53-a8eb-000a13f26971" volumeName="kubernetes.io/projected/a94f7bff-ad61-4c53-a8eb-000a13f26971-kube-api-access-5xvzx" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040390 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" volumeName="kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-auth-proxy-config" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040403 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c6694a8-ccd0-491b-9f21-215450f6ce67" volumeName="kubernetes.io/projected/7c6694a8-ccd0-491b-9f21-215450f6ce67-kube-api-access-mrdqg" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040414 30278 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="d4c75bee-d0d2-4261-8f89-8c3375dbd868" volumeName="kubernetes.io/secret/d4c75bee-d0d2-4261-8f89-8c3375dbd868-serving-cert" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040425 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fcf459dc-bd30-4143-b5c4-60fd01b46548" volumeName="kubernetes.io/projected/fcf459dc-bd30-4143-b5c4-60fd01b46548-kube-api-access-xzp78" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040437 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1db0a246-ca43-4e7c-b09e-e80218ae99b1" volumeName="kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-config" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040450 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f26e239-2988-4faa-bc1d-24b15b95b7f1" volumeName="kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-bound-sa-token" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040471 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d21e77e-8b61-4f03-8f17-941b7a1d8b1d" volumeName="kubernetes.io/projected/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-kube-api-access-fc27m" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040485 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4460d3d3-c55f-4f1c-a623-e3feccf937bb" volumeName="kubernetes.io/empty-dir/4460d3d3-c55f-4f1c-a623-e3feccf937bb-catalog-content" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040497 30278 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="994fff04-c1d7-4f10-8d4b-6b49a6934829" volumeName="kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-config" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040508 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c087ce06-a16b-41f4-ba93-8fccdee09003" volumeName="kubernetes.io/projected/c087ce06-a16b-41f4-ba93-8fccdee09003-kube-api-access-789k6" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040522 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3267271-e0c5-45d6-980c-d78e4f9eef35" volumeName="kubernetes.io/secret/c3267271-e0c5-45d6-980c-d78e4f9eef35-proxy-tls" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040535 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab" volumeName="kubernetes.io/configmap/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-signing-cabundle" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040547 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0100a259-1358-45e8-8191-4e1f9a14ec89" volumeName="kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-client" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040559 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1db0a246-ca43-4e7c-b09e-e80218ae99b1" volumeName="kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-client-ca" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040575 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="37b3753f-bf4f-4a9e-a4a8-d58296bada79" volumeName="kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-images" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040590 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d21e77e-8b61-4f03-8f17-941b7a1d8b1d" volumeName="kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-config" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040601 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="30d77a7c-222e-41c7-8a98-219854aa3da2" volumeName="kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-serving-cert" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040615 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b0e38f3-3ab5-4519-86a6-68003deb94da" volumeName="kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-daemon-config" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040627 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f26e239-2988-4faa-bc1d-24b15b95b7f1" volumeName="kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040640 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89e6c3d6-7bd5-4df6-90db-3a349f644afb" volumeName="kubernetes.io/secret/89e6c3d6-7bd5-4df6-90db-3a349f644afb-proxy-tls" seLinuxMountContext="" Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040652 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311" volumeName="kubernetes.io/configmap/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-telemetry-config" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040665 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0e04440-c08b-452d-9be6-9f70a4027c92" volumeName="kubernetes.io/projected/e0e04440-c08b-452d-9be6-9f70a4027c92-kube-api-access-767c7" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040677 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="489dd872-39c3-4ce2-8dc1-9d0552b88616" volumeName="kubernetes.io/projected/489dd872-39c3-4ce2-8dc1-9d0552b88616-kube-api-access-wjtg7" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040690 30278 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56cde2f7-1742-45d6-aa22-8270cfb424a7" volumeName="kubernetes.io/projected/56cde2f7-1742-45d6-aa22-8270cfb424a7-kube-api-access-mbctm" seLinuxMountContext=""
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040701 30278 reconstruct.go:97] "Volume reconstruction finished"
Mar 18 18:00:31.040589 master-0 kubenswrapper[30278]: I0318 18:00:31.040709 30278 reconciler.go:26] "Reconciler: start to sync state"
Mar 18 18:00:31.048067 master-0 kubenswrapper[30278]: I0318 18:00:31.043191 30278 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 18 18:00:31.049984 master-0 kubenswrapper[30278]: I0318 18:00:31.049636 30278 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 18 18:00:31.053396 master-0 kubenswrapper[30278]: I0318 18:00:31.053017 30278 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 18 18:00:31.053396 master-0 kubenswrapper[30278]: I0318 18:00:31.053062 30278 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 18 18:00:31.053396 master-0 kubenswrapper[30278]: I0318 18:00:31.053091 30278 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 18 18:00:31.053396 master-0 kubenswrapper[30278]: E0318 18:00:31.053144 30278 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 18 18:00:31.054390 master-0 kubenswrapper[30278]: I0318 18:00:31.054357 30278 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 18 18:00:31.073188 master-0 kubenswrapper[30278]: I0318 18:00:31.073091 30278 generic.go:334] "Generic (PLEG): container finished" podID="26575d68-0488-4dfa-a5d0-5016e481dba6" containerID="2206a7113dacde21996d9057f09cbc9465ab1858bcc433f5c546151c4ea00afa" exitCode=0
Mar 18 18:00:31.077160 master-0 kubenswrapper[30278]: I0318 18:00:31.077098 30278 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="b07f4eb106a117d2a3aedb26bb538e640c6545e341eb4a44bae581e10c947c17" exitCode=0
Mar 18 18:00:31.077160 master-0 kubenswrapper[30278]: I0318 18:00:31.077149 30278 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="3c6f642b736991fd20242697f9273f8f6a126bc6027f7c5ddd27e70569fd9054" exitCode=0
Mar 18 18:00:31.077160 master-0 kubenswrapper[30278]: I0318 18:00:31.077162 30278 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="9229a0847dcc4bfd99187b8d4d1c4189d57cc38cb01e1689224e1d421ed9426b" exitCode=0
Mar 18 18:00:31.084821 master-0 kubenswrapper[30278]: I0318 18:00:31.084770 30278 generic.go:334] "Generic (PLEG): container finished" podID="be6633f4-7370-49b8-a607-6a3fa52a098e" containerID="5a8c8b2dda583c7f8335b717181054066b935f797ea92e14efe72d4f776836d4" exitCode=0
Mar 18 18:00:31.088033 master-0 kubenswrapper[30278]: I0318 18:00:31.087994 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-7s68k_9875ed82-813c-483d-8471-8f9b74b774ee/approver/1.log"
Mar 18 18:00:31.088855 master-0 kubenswrapper[30278]: I0318 18:00:31.088809 30278 generic.go:334] "Generic (PLEG): container finished" podID="9875ed82-813c-483d-8471-8f9b74b774ee" containerID="d6933300553a8b09299df5113bf7cc86680b024bf430a5e7f3a091b6af9ab04a" exitCode=1
Mar 18 18:00:31.101386 master-0 kubenswrapper[30278]: I0318 18:00:31.099256 30278 generic.go:334] "Generic (PLEG): container finished" podID="994fff04-c1d7-4f10-8d4b-6b49a6934829" containerID="1a93390a62f28ef65e80a805fc6b9268f2506ce23dcb2e7e0c063ca4b86c7617" exitCode=0
Mar 18 18:00:31.102293 master-0 kubenswrapper[30278]: I0318 18:00:31.102226 30278 generic.go:334] "Generic (PLEG): container finished" podID="ce5831a6-5a8d-4cda-9299-5d86437bcab2" containerID="fe07019623ba4afabfbf6551b7028ec6e274c77f8b3075096e77bb2fa5ab0961" exitCode=0
Mar 18 18:00:31.118650 master-0 kubenswrapper[30278]: I0318 18:00:31.118609 30278 generic.go:334] "Generic (PLEG): container finished" podID="0b9ff55a-73fb-473f-b406-1f8b6cffdb89" containerID="208f151f73d2054e8fc1e7bad5a7840184b6f1a99cd1c642769a09479cee5ec9" exitCode=0
Mar 18 18:00:31.166300 master-0 kubenswrapper[30278]: E0318 18:00:31.153538 30278 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 18 18:00:31.166300 master-0 kubenswrapper[30278]: I0318 18:00:31.155535 30278 generic.go:334] "Generic (PLEG): container finished" podID="cd9d8bd7-68a0-458f-9d25-f600932e303c" containerID="c609c2b3b4935f3bff5c215911aef6aecfcc54b41e1023b5431ec59542ec2f9d" exitCode=0
Mar 18 18:00:31.188134 master-0 kubenswrapper[30278]: I0318 18:00:31.188079 30278 generic.go:334] "Generic (PLEG): container finished" podID="0100a259-1358-45e8-8191-4e1f9a14ec89" containerID="1bb2dec1f59aff9832355c134a19ba762af95a3f61ff179296debc28c40ca05c" exitCode=0
Mar 18 18:00:31.200743 master-0 kubenswrapper[30278]: I0318 18:00:31.200689 30278 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="c0003daaaf5a355b3cb392bb03905611a5e11defed3a5bf40942d6e99ba55bcb" exitCode=0
Mar 18 18:00:31.203349 master-0 kubenswrapper[30278]: I0318 18:00:31.203325 30278 generic.go:334] "Generic (PLEG): container finished" podID="489dd872-39c3-4ce2-8dc1-9d0552b88616" containerID="70935598889d7ee02bf1833aebf4130f2e4fa22f2be159d783a76ae3260c0ec7" exitCode=0
Mar 18 18:00:31.203349 master-0 kubenswrapper[30278]: I0318 18:00:31.203346 30278 generic.go:334] "Generic (PLEG): container finished" podID="489dd872-39c3-4ce2-8dc1-9d0552b88616" containerID="a2e29b749bfbe09ff5972a0dffb8367afb6d9100abae8e59d66f807f2bb0aaac" exitCode=0
Mar 18 18:00:31.207865 master-0 kubenswrapper[30278]: I0318 18:00:31.207820 30278 generic.go:334] "Generic (PLEG): container finished" podID="c57f282a-829b-41b2-827a-f4bc598245a2" containerID="40665a65803f46b85c5841b161668f9dc53195967c924003dedfb177dd66895a" exitCode=0
Mar 18 18:00:31.212705 master-0 kubenswrapper[30278]: I0318 18:00:31.212661 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-8sxdf_c087ce06-a16b-41f4-ba93-8fccdee09003/authentication-operator/2.log"
Mar 18 18:00:31.212837 master-0 kubenswrapper[30278]: I0318 18:00:31.212718 30278 generic.go:334] "Generic (PLEG): container finished" podID="c087ce06-a16b-41f4-ba93-8fccdee09003" containerID="5ef1ad7d9de4700ea957d656ff99f57f457c91f9b150fe99e8b36beb88ed9c42" exitCode=255
Mar 18 18:00:31.216139 master-0 kubenswrapper[30278]: I0318 18:00:31.216118 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_c9655d59-a594-499f-b474-dfc870239174/installer/0.log"
Mar 18 18:00:31.216226 master-0 kubenswrapper[30278]: I0318 18:00:31.216147 30278 generic.go:334] "Generic (PLEG): container finished" podID="c9655d59-a594-499f-b474-dfc870239174" containerID="88c92e9d0661b28d9a41bcdec55c597d6015bf273bee5facfd2419530f4f2c64" exitCode=1
Mar 18 18:00:31.218337 master-0 kubenswrapper[30278]: I0318 18:00:31.218297 30278 generic.go:334] "Generic (PLEG): container finished" podID="89e6c3d6-7bd5-4df6-90db-3a349f644afb" containerID="c82dc79407cc2ebdd830e24e81c06ba7f22e81e0353adc5d05a21365ba7f195f" exitCode=0
Mar 18 18:00:31.220909 master-0 kubenswrapper[30278]: I0318 18:00:31.220868 30278 generic.go:334] "Generic (PLEG): container finished" podID="dba5f8d7-4d25-42b5-9c58-813221bf96bb" containerID="398454ad32431a1333f76c77a1b11d599119897614da05c5c31c8fb7c4b10bc1" exitCode=0
Mar 18 18:00:31.222327 master-0 kubenswrapper[30278]: I0318 18:00:31.222294 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_1a709ef9-91c0-4193-acb4-0594d02f554c/installer/0.log"
Mar 18 18:00:31.222404 master-0 kubenswrapper[30278]: I0318 18:00:31.222346 30278 generic.go:334] "Generic (PLEG): container finished" podID="1a709ef9-91c0-4193-acb4-0594d02f554c" containerID="484988d6e1e2aeba58f6749a644020e240b6e9ebd0d813d191a1e837c5837362" exitCode=1
Mar 18 18:00:31.227201 master-0 kubenswrapper[30278]: I0318 18:00:31.227168 30278 generic.go:334] "Generic (PLEG): container finished" podID="08451d5b-cf84-45a1-a16d-7ce10a83a6e7" containerID="5314ec05fb03281eaddcd24c27457c3fda717a46b41bfa95e18bf5f7470daeb4" exitCode=0
Mar 18 18:00:31.230662 master-0 kubenswrapper[30278]: I0318 18:00:31.230572 30278 generic.go:334] "Generic (PLEG): container finished" podID="99e215da-759d-4fff-af65-0fb64245fbd0" containerID="991a1bf80cc5f91f8bda7e5c2511f88f98023ee76020f581b2ef2e76ff7bcf29" exitCode=0
Mar 18 18:00:31.230662 master-0 kubenswrapper[30278]: I0318 18:00:31.230614 30278 generic.go:334] "Generic (PLEG): container finished" podID="99e215da-759d-4fff-af65-0fb64245fbd0" containerID="345b9877bce66c031277690013e8db931d86b5ac05fc33b7cbd7c55a24998003" exitCode=0
Mar 18 18:00:31.230662 master-0 kubenswrapper[30278]: I0318 18:00:31.230622 30278 generic.go:334] "Generic (PLEG): container finished" podID="99e215da-759d-4fff-af65-0fb64245fbd0" containerID="836d36e41f9d465b68171473ea87c95a04be32a563d9abf3bd2beb4eacf6a497" exitCode=0
Mar 18 18:00:31.237072 master-0 kubenswrapper[30278]: I0318 18:00:31.237039 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/4.log"
Mar 18 18:00:31.239815 master-0 kubenswrapper[30278]: I0318 18:00:31.239770 30278 generic.go:334] "Generic (PLEG): container finished" podID="7e64a377-f497-4416-8f22-d5c7f52e0b65" containerID="af159afaa033efb036b878b04bdffa8fd814f7fc2cf559b2f4b190fa136e0905" exitCode=1
Mar 18 18:00:31.246085 master-0 kubenswrapper[30278]: I0318 18:00:31.246053 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 18:00:31.246634 master-0 kubenswrapper[30278]: I0318 18:00:31.246560 30278 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="7b9bddc18b37dc8b661d9d8aa6fa9e351c2bd5fe2e18de32692f6b5ce1bb25c8" exitCode=1
Mar 18 18:00:31.246634 master-0 kubenswrapper[30278]: I0318 18:00:31.246623 30278 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="3803a5540326c74452858cec12bf5343a5ecb670acc1d4e7c87a18dad91b712b" exitCode=0
Mar 18 18:00:31.254905 master-0 kubenswrapper[30278]: I0318 18:00:31.254787 30278 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="bf1214a2258760165a58c692fdf834c33da4c7a8a15a2275bd354ac819d9c857" exitCode=0
Mar 18 18:00:31.254905 master-0 kubenswrapper[30278]: I0318 18:00:31.254831 30278 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="8d1735bbfc7c3d66c7f4ca5e55aa86318920c68f2e40962c9c2d2008b6df984d" exitCode=0
Mar 18 18:00:31.254905 master-0 kubenswrapper[30278]: I0318 18:00:31.254840 30278 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="57c8f7a47edecb41fe3286b9e71f767917df948188cdf7bbad415d2bd7f1ab5b" exitCode=0
Mar 18 18:00:31.259399 master-0 kubenswrapper[30278]: I0318 18:00:31.259365 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/3.log"
Mar 18 18:00:31.259921 master-0 kubenswrapper[30278]: I0318 18:00:31.259885 30278 generic.go:334] "Generic (PLEG): container finished" podID="37b3753f-bf4f-4a9e-a4a8-d58296bada79" containerID="921ec206afcda3ad2ed54f119faab2d531fbc22d2917452ab79dc39397439722" exitCode=1
Mar 18 18:00:31.263067 master-0 kubenswrapper[30278]: I0318 18:00:31.263023 30278 generic.go:334] "Generic (PLEG): container finished" podID="c355c750-ae2f-49fa-9a16-8fb4f688853e" containerID="82b3c41b778f6b2cb0358e27e4513c9d6911408756eafe9881b278fd4128f2db" exitCode=0
Mar 18 18:00:31.265127 master-0 kubenswrapper[30278]: I0318 18:00:31.265103 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-wlfj4_3a3a6c2c-78e7-41f3-acff-20173cbc012a/kube-scheduler-operator-container/1.log"
Mar 18 18:00:31.265192 master-0 kubenswrapper[30278]: I0318 18:00:31.265132 30278 generic.go:334] "Generic (PLEG): container finished" podID="3a3a6c2c-78e7-41f3-acff-20173cbc012a" containerID="34db6c58d1d15ad2f0f08eec2a02536e2b02dd1b1c722e12e770c383ca33f635" exitCode=255
Mar 18 18:00:31.267200 master-0 kubenswrapper[30278]: I0318 18:00:31.267169 30278 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="54007053bb13b45932056c940a92e3590d13348f18ea18bf943b7365ae07e843" exitCode=0
Mar 18 18:00:31.268787 master-0 kubenswrapper[30278]: I0318 18:00:31.268743 30278 generic.go:334] "Generic (PLEG): container finished" podID="5e216493-e343-4c59-a3c1-5aad5edd67e2" containerID="b80c144acadff41c49bf3614230955b846d46e4c70083852e45c512d06842840" exitCode=0
Mar 18 18:00:31.275006 master-0 kubenswrapper[30278]: I0318 18:00:31.274963 30278 generic.go:334] "Generic (PLEG): container finished" podID="6f26e239-2988-4faa-bc1d-24b15b95b7f1" containerID="e31032eb3407bce853d0be38a115c77d3679d1c63fdc6c68fe19ac271b5e7c71" exitCode=0
Mar 18 18:00:31.276958 master-0 kubenswrapper[30278]: I0318 18:00:31.276923 30278 generic.go:334] "Generic (PLEG): container finished" podID="dc110414-3a6b-474c-bce3-33450cab8fcd" containerID="2718b408b0fd0508d3bbb65645adb3096e6a30b7fddd2e6d5a0da288259af5b6" exitCode=0
Mar 18 18:00:31.276958 master-0 kubenswrapper[30278]: I0318 18:00:31.276946 30278 generic.go:334] "Generic (PLEG): container finished" podID="dc110414-3a6b-474c-bce3-33450cab8fcd" containerID="8293ae1276c1f139d18ab84c79b4ef640dd21f0be4c4014a118798b7acdc2d44" exitCode=0
Mar 18 18:00:31.278942 master-0 kubenswrapper[30278]: I0318 18:00:31.278911 30278 generic.go:334] "Generic (PLEG): container finished" podID="7b94e08c-7944-445e-bfb7-6c7c14880c65" containerID="94d941e21f1ab13a20fa6356fcedca0030606e420e596dcef8825d0ce5bcf87a" exitCode=0
Mar 18 18:00:31.280756 master-0 kubenswrapper[30278]: I0318 18:00:31.280734 30278 generic.go:334] "Generic (PLEG): container finished" podID="427e5ce9-f4b3-4f12-bb77-2b13775aa334" containerID="22a31804731ff2ad6097e1478a33c0a03dfd73fd92e656c745ef5aa863cd5673" exitCode=0
Mar 18 18:00:31.280852 master-0 kubenswrapper[30278]: I0318 18:00:31.280759 30278 generic.go:334] "Generic (PLEG): container finished" podID="427e5ce9-f4b3-4f12-bb77-2b13775aa334" containerID="184cb76aa84a88cd3b8719a8bbdc255f068d4a3e6468482f6b7438107b9e68d8" exitCode=0
Mar 18 18:00:31.284392 master-0 kubenswrapper[30278]: I0318 18:00:31.284366 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log"
Mar 18 18:00:31.285803 master-0 kubenswrapper[30278]: I0318 18:00:31.285781 30278 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="f887def1d9b97d72f25ddb564fd0ecbae06aba6b64de1338a239aa08a40c032f" exitCode=255
Mar 18 18:00:31.285908 master-0 kubenswrapper[30278]: I0318 18:00:31.285891 30278 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="4d1607d2ed493b59fcd5008ae3ef413f5adc41abf3185470578dd70106590036" exitCode=0
Mar 18 18:00:31.288627 master-0 kubenswrapper[30278]: I0318 18:00:31.288055 30278 generic.go:334] "Generic (PLEG): container finished" podID="4460d3d3-c55f-4f1c-a623-e3feccf937bb" containerID="98d863723a508017dfde5d2fba0f35e4c2c885a3faf38a07e44a5b8c49c1f0be" exitCode=0
Mar 18 18:00:31.288627 master-0 kubenswrapper[30278]: I0318 18:00:31.288088 30278 generic.go:334] "Generic (PLEG): container finished" podID="4460d3d3-c55f-4f1c-a623-e3feccf937bb" containerID="2508ebe9053440edc87c49e130a7b0e4cfa3dcec7c01ec67984f7b0b7290be83" exitCode=0
Mar 18 18:00:31.289806 master-0 kubenswrapper[30278]: I0318 18:00:31.289782 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-bk26c_efbcb147-d077-4749-9289-1682daccb657/manager/1.log"
Mar 18 18:00:31.290050 master-0 kubenswrapper[30278]: I0318 18:00:31.290027 30278 generic.go:334] "Generic (PLEG): container finished" podID="efbcb147-d077-4749-9289-1682daccb657" containerID="e2d7bd945ff62383c4a337619ff4a53c695923ff63d0ce2cd5a9cb7b46a58867" exitCode=1
Mar 18 18:00:31.294921 master-0 kubenswrapper[30278]: I0318 18:00:31.293748 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/6.log"
Mar 18 18:00:31.294921 master-0 kubenswrapper[30278]: I0318 18:00:31.294095 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/config-sync-controllers/0.log"
Mar 18 18:00:31.294921 master-0 kubenswrapper[30278]: I0318 18:00:31.294542 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/cluster-cloud-controller-manager/0.log"
Mar 18 18:00:31.294921 master-0 kubenswrapper[30278]: I0318 18:00:31.294575 30278 generic.go:334] "Generic (PLEG): container finished" podID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" containerID="7805e3a084be32fd22f401c11f2cafaf6f6853b5c227bbbf9238a583daa6ea61" exitCode=1
Mar 18 18:00:31.294921 master-0 kubenswrapper[30278]: I0318 18:00:31.294611 30278 generic.go:334] "Generic (PLEG): container finished" podID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" containerID="a81203ae354d597c88c3b98386e062196ad2d6278f0f6ad5fc4ad9c4b04a9ff2" exitCode=1
Mar 18 18:00:31.294921 master-0 kubenswrapper[30278]: I0318 18:00:31.294619 30278 generic.go:334] "Generic (PLEG): container finished" podID="0751c002-fe0e-4f13-bb9c-9accd8ca0df3" containerID="19f22c241321c089522b514fbfd3f5b1ec6df250184c4997e1e9c0766f09796c" exitCode=1
Mar 18 18:00:31.296191 master-0 kubenswrapper[30278]: I0318 18:00:31.296156 30278 generic.go:334] "Generic (PLEG): container finished" podID="c38c5f03-a753-49f4-ab06-33e75a03bd45" containerID="a3a77ef6f8f671fb5f80e7a57420cd1c8a6c6e49b81d12a2df38ba7e576274fc" exitCode=0
Mar 18 18:00:31.297522 master-0 kubenswrapper[30278]: I0318 18:00:31.297495 30278 generic.go:334] "Generic (PLEG): container finished" podID="43fab0f2-5cfd-4b5e-a632-728fd5b960fd" containerID="e9d865c621673d95e24957da6c5efc56f4b4cde9d2216c676659bdbab854d23a" exitCode=0
Mar 18 18:00:31.299671 master-0 kubenswrapper[30278]: I0318 18:00:31.299649 30278 generic.go:334] "Generic (PLEG): container finished" podID="253ec853-f637-4aa4-8e8e-eb655dfccccb" containerID="2db7d7384c8a35f95ff52d00ab25bbedc70ce8cf90c5c6ca6aff91f2c9272cd8" exitCode=0
Mar 18 18:00:31.304477 master-0 kubenswrapper[30278]: I0318 18:00:31.304443 30278 generic.go:334] "Generic (PLEG): container finished" podID="cb522b02-0b93-4711-9041-566daa06b95a" containerID="399bf3be19e41993ba7e873949068ec6c32cf9d08ee1196692654605dc3ddd51" exitCode=0
Mar 18 18:00:31.304477 master-0 kubenswrapper[30278]: I0318 18:00:31.304466 30278 generic.go:334] "Generic (PLEG): container finished" podID="cb522b02-0b93-4711-9041-566daa06b95a" containerID="f7dedaead357f68edfb6b1633ceea1f3b2a9443afcc42c378f59d11efb0de8ae" exitCode=0
Mar 18 18:00:31.307319 master-0 kubenswrapper[30278]: I0318 18:00:31.307222 30278 generic.go:334] "Generic (PLEG): container finished" podID="fdab27a1-1d7a-4dc5-b828-eba3f57592dd" containerID="b533f593b28cafb60fbcf6432d0aa3477e72d3d1f721e9b883b828b9059da814" exitCode=0
Mar 18 18:00:31.310971 master-0 kubenswrapper[30278]: I0318 18:00:31.309385 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-7qwxn_7c6694a8-ccd0-491b-9f21-215450f6ce67/cluster-node-tuning-operator/1.log"
Mar 18 18:00:31.310971 master-0 kubenswrapper[30278]: I0318 18:00:31.309418 30278 generic.go:334] "Generic (PLEG): container finished" podID="7c6694a8-ccd0-491b-9f21-215450f6ce67" containerID="54489b0edcfa24dfcbbb34581a482bdade21886266c2b553e30f0c64c39e011f" exitCode=1
Mar 18 18:00:31.313982 master-0 kubenswrapper[30278]: I0318 18:00:31.313926 30278 generic.go:334] "Generic (PLEG): container finished" podID="30d77a7c-222e-41c7-8a98-219854aa3da2" containerID="7dca962ecd78930d6ebff8babb7c8a998598fdaf8cc19f7bde50114fc03b1127" exitCode=0
Mar 18 18:00:31.315660 master-0 kubenswrapper[30278]: I0318 18:00:31.315633 30278 generic.go:334] "Generic (PLEG): container finished" podID="c3267271-e0c5-45d6-980c-d78e4f9eef35" containerID="4af4292c294ed18f4d7a20d7c6af6118981afc3f4dccaa087fc72c0bbc4f6572" exitCode=0
Mar 18 18:00:31.317141 master-0 kubenswrapper[30278]: I0318 18:00:31.317118 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/3.log"
Mar 18 18:00:31.318207 master-0 kubenswrapper[30278]: I0318 18:00:31.318179 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager-cert-syncer/0.log"
Mar 18 18:00:31.318797 master-0 kubenswrapper[30278]: I0318 18:00:31.318771 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log"
Mar 18 18:00:31.318832 master-0 kubenswrapper[30278]: I0318 18:00:31.318809 30278 generic.go:334] "Generic (PLEG): container finished" podID="3b3363934623637fdc1d37ff8b16880a" containerID="230861fc75c4cf91f00521990347ee4c6eaab66ee62a9284086ae7fb81bebad6" exitCode=255
Mar 18 18:00:31.318832 master-0 kubenswrapper[30278]: I0318 18:00:31.318824 30278 generic.go:334] "Generic (PLEG): container finished" podID="3b3363934623637fdc1d37ff8b16880a" containerID="243a7398c383ba8c402d23dcf0f7c5b93b0d9dae2f29d0c0170f8b972de06495" exitCode=1
Mar 18 18:00:31.318832 master-0 kubenswrapper[30278]: I0318 18:00:31.318831 30278 generic.go:334] "Generic (PLEG): container finished" podID="3b3363934623637fdc1d37ff8b16880a" containerID="6007004024fecf1344918d5eba36f91c4644591c32375ce8f9e07fc9beb46c69" exitCode=1
Mar 18 18:00:31.324297 master-0 kubenswrapper[30278]: I0318 18:00:31.323682 30278 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="e43f9ea395a7c58acd7f5ae682a5f3d1676e30932b7eae1967401d8e7c98e640" exitCode=0
Mar 18 18:00:31.324297 master-0 kubenswrapper[30278]: I0318 18:00:31.323719 30278 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="49a577ee2ac2a159de0067da85450704e2357b11d86f52af06168530d5d8c67c" exitCode=0
Mar 18 18:00:31.324297 master-0 kubenswrapper[30278]: I0318 18:00:31.323731 30278 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="f487efac96ddc2a1600d3e4cc87d8a45b4d735699e028d3a82f0ba6a3bf9f4b3" exitCode=0
Mar 18 18:00:31.324297 master-0 kubenswrapper[30278]: I0318 18:00:31.323742 30278 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="3d5985c493f4dbc8ecc65a775668e215bdb1fee71a640074b8e4b3117da777c6" exitCode=0
Mar 18 18:00:31.324297 master-0 kubenswrapper[30278]: I0318 18:00:31.323752 30278 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="de9eecaae100670e0a012da69d0c99fbaef83817e585514383e37a63852714c7" exitCode=0
Mar 18 18:00:31.324297 master-0 kubenswrapper[30278]: I0318 18:00:31.323759 30278 generic.go:334] "Generic (PLEG): container finished" podID="fea7b899-fde4-4463-9520-4d433a8ebe21" containerID="88001466f79b98c5070d70264ed313350538e29ea013a0dee819ce0396f0e3a4" exitCode=0
Mar 18 18:00:31.327593 master-0 kubenswrapper[30278]: I0318 18:00:31.326661 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_41191498-89c5-44dc-b648-dbea889c72f5/installer/0.log"
Mar 18 18:00:31.327593 master-0 kubenswrapper[30278]: I0318 18:00:31.326709 30278 generic.go:334] "Generic (PLEG): container finished" podID="41191498-89c5-44dc-b648-dbea889c72f5" containerID="952d444a3fc2166b6fd7ae2111af2db0a2310710ae00c917ceccc2b70b6b3ce3" exitCode=1
Mar 18 18:00:31.328959 master-0 kubenswrapper[30278]: I0318 18:00:31.328615 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_98c88ce7-94dd-434c-99fc-96d900d544e6/installer/0.log"
Mar 18 18:00:31.328959 master-0 kubenswrapper[30278]: I0318 18:00:31.328667 30278 generic.go:334] "Generic (PLEG): container finished" podID="98c88ce7-94dd-434c-99fc-96d900d544e6" containerID="f946a82c484d87fe7448697a732facf5002625190cba529f3bfbd4dceece22e3" exitCode=1
Mar 18 18:00:31.330869 master-0 kubenswrapper[30278]: I0318 18:00:31.330518 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-8vmsv_56cde2f7-1742-45d6-aa22-8270cfb424a7/manager/1.log"
Mar 18 18:00:31.331972 master-0 kubenswrapper[30278]: I0318 18:00:31.331771 30278 generic.go:334] "Generic (PLEG): container finished" podID="56cde2f7-1742-45d6-aa22-8270cfb424a7" containerID="c455513aeeb0a865514a01932b50b8b6b2a2bfaa8dc030657e848c60ae487c2b" exitCode=1
Mar 18 18:00:31.335005 master-0 kubenswrapper[30278]: I0318 18:00:31.333227 30278 generic.go:334] "Generic (PLEG): container finished" podID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerID="a1b33ace751148a425bdf00f07398d55bcc2dc83ff83d8278e7851ed219c1db7" exitCode=0
Mar 18 18:00:31.335081 master-0 kubenswrapper[30278]: I0318 18:00:31.335052 30278 generic.go:334] "Generic (PLEG): container finished" podID="fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab" containerID="a8a00810d795e748f7416b26291bd5e824cc9027054e6c1fabd83a4ff999def0" exitCode=0
Mar 18 18:00:31.338509 master-0 kubenswrapper[30278]: I0318 18:00:31.338464 30278 generic.go:334] "Generic (PLEG): container finished" podID="9b424d6c-7440-4c98-ac19-2d0642c696fd" containerID="733c4831624297f5112d8028d0486f0fad40d94494178f2290df8fe70a7c80e2" exitCode=0
Mar 18 18:00:31.342500 master-0 kubenswrapper[30278]: I0318 18:00:31.342473 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-hpsbd_9a240ab7-a1d5-4e9a-96f3-4590681cc7ed/openshift-controller-manager-operator/2.log"
Mar 18 18:00:31.342735 master-0 kubenswrapper[30278]: I0318 18:00:31.342715 30278 generic.go:334] "Generic (PLEG): container finished" podID="9a240ab7-a1d5-4e9a-96f3-4590681cc7ed" containerID="c2fb973641e8d289ba0dd09efd68e97b47576d6bc93a3c1a721a673bea80ce81" exitCode=255
Mar 18 18:00:31.345367 master-0 kubenswrapper[30278]: I0318 18:00:31.345340 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-6qqz4_d26d4515-391e-41a5-8c82-1b2b8a375662/package-server-manager/1.log"
Mar 18 18:00:31.345690 master-0 kubenswrapper[30278]: I0318 18:00:31.345663 30278 generic.go:334] "Generic (PLEG): container finished" podID="d26d4515-391e-41a5-8c82-1b2b8a375662" containerID="2bf18e51a1823185cc3f2ac648f42885a8d2aea94913a831a7d4285f0b01a344" exitCode=1
Mar 18 18:00:31.349256 master-0 kubenswrapper[30278]: I0318 18:00:31.349224 30278 generic.go:334] "Generic (PLEG): container finished" podID="14a0661b-7bde-4e22-a9a9-5e3fb24df77f" containerID="34974a400194e4abf23a570b3bcaf62e9c0cf2c55d12e3ded0eb4a493b533868" exitCode=0
Mar 18 18:00:31.351998 master-0 kubenswrapper[30278]: I0318 18:00:31.351956 30278 generic.go:334] "Generic (PLEG): container finished" podID="f7ff61c7-32d1-4407-a792-8e22bb4d50f9" containerID="26d9bad45253e9ed004980ee45ac455d4c739974d250f32d4e33bfde8ed6ef29" exitCode=0
Mar 18 18:00:31.353142 master-0 kubenswrapper[30278]: I0318 18:00:31.353107 30278 generic.go:334] "Generic (PLEG): container finished" podID="4285e80c-1ff9-42b3-9692-9f2ab6b61916" containerID="7af43e761f47509ec1402b4287569aac08cd400280ac0f2b280a0b47c6c678f0" exitCode=0
Mar 18 18:00:31.353599 master-0 kubenswrapper[30278]: E0318 18:00:31.353574 30278 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 18 18:00:31.354484 master-0 kubenswrapper[30278]: I0318 18:00:31.354452 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/4.log"
Mar 18 18:00:31.354559 master-0 kubenswrapper[30278]: I0318 18:00:31.354491 30278 generic.go:334] "Generic (PLEG): container finished" podID="7d39d93e-9be3-47e1-a44e-be2d18b55446" containerID="2b3a93a1f208538619b8c053a215774b7a5b76ad51695ab4679fe93b9c8aef84" exitCode=1
Mar 18 18:00:31.355895 master-0 kubenswrapper[30278]: I0318 18:00:31.355850 30278 generic.go:334] "Generic (PLEG): container finished" podID="37bbec19-22b8-411c-901b-d89c92b0bd4d" containerID="96795dabdb6bc76b373e901a5376a2ae90d0d629bb5240323bbf35ecdc487386" exitCode=0
Mar 18 18:00:31.589289 master-0 kubenswrapper[30278]: I0318 18:00:31.589220 30278 manager.go:324] Recovery completed
Mar 18 18:00:31.703456 master-0 kubenswrapper[30278]: I0318 18:00:31.703383 30278 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 18 18:00:31.703456 master-0 kubenswrapper[30278]: I0318 18:00:31.703434 30278 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 18 18:00:31.703729 master-0 kubenswrapper[30278]: I0318 18:00:31.703483 30278 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 18:00:31.704054 master-0 kubenswrapper[30278]: I0318 18:00:31.704018 30278 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 18 18:00:31.704104 master-0 kubenswrapper[30278]: I0318 18:00:31.704043 30278 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 18 18:00:31.704104 master-0 kubenswrapper[30278]: I0318 18:00:31.704069 30278 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 18 18:00:31.704323 master-0 kubenswrapper[30278]: I0318 18:00:31.704106 30278 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 18 18:00:31.704323 master-0 kubenswrapper[30278]: I0318 18:00:31.704116 30278 policy_none.go:49] "None policy: Start"
Mar 18 18:00:31.710126 master-0 kubenswrapper[30278]: I0318 18:00:31.710080 30278 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 18 18:00:31.710244 master-0 kubenswrapper[30278]: I0318 18:00:31.710135 30278 state_mem.go:35] "Initializing new in-memory state store"
Mar 18 18:00:31.711739 master-0 kubenswrapper[30278]: I0318 18:00:31.711683 30278 state_mem.go:75] "Updated machine memory state"
Mar 18 18:00:31.711739 master-0 kubenswrapper[30278]: I0318 18:00:31.711730 30278 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 18 18:00:31.728670 master-0 kubenswrapper[30278]: I0318 18:00:31.728624 30278 manager.go:334] "Starting Device Plugin manager"
Mar 18 18:00:31.728952 master-0 kubenswrapper[30278]: I0318 18:00:31.728727 30278 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 18 18:00:31.729049 master-0 kubenswrapper[30278]: I0318 18:00:31.728964 30278 server.go:79] "Starting device plugin registration server"
Mar 18 18:00:31.729570 master-0 kubenswrapper[30278]: I0318 18:00:31.729545 30278 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 18 18:00:31.729663 master-0 kubenswrapper[30278]: I0318 18:00:31.729566 30278 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 18 18:00:31.729841 master-0 kubenswrapper[30278]: I0318 18:00:31.729809 30278 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 18 18:00:31.729947 master-0 kubenswrapper[30278]: I0318 18:00:31.729915 30278 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 18 18:00:31.729947 master-0 kubenswrapper[30278]: I0318 18:00:31.729930 30278 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 18 18:00:31.753780 master-0 kubenswrapper[30278]: I0318 18:00:31.753638 30278 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0"]
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.759942 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"25f0059cb7f28e57d54587af9a075f46b53e453c6a901d45bc7aae8b1f8557d8"}
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760043 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"1ea3dbfa5dfb13332a0f1977477497e5220b4bba3727358399c90d2b8664c6d7"}
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760087 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a2b235f936941b3155c2b3d3485ca14ee2d3465fd9996759d1380c11160d84a"
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760115 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d9e36c9c12a1291e1dc0d36bf35c4d9718af9aa6ca59ee2ad69bf2e6669af26"
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760203 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc6ed26ff47dcb63cc0618959e2aa5d6fdd1facec54c0eb66675504b09f0fb7c"
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760229 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95d9ed6b43dde926cc6bff2e2109470565941e7ec534301d19b34350a3fd9914"
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760303 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="202e717017ea47879d90c4603f14b936f4bf42a19ba2cb4cf9411280f3913d38"
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760340 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e890c0b05b9ab9a66059688757a1f43723c4593388d1175f31db9b7e7ec8883"
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760365 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee60fb39e538f57e3a2c9cf050408fd1ce812a3cd024c1de0ff7127a4236fd69"
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760403 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"43d0194c7af8a79987b694f6624dcbd9737a923184624c98fa52f07e27abb8b3"}
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760422 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"7b9bddc18b37dc8b661d9d8aa6fa9e351c2bd5fe2e18de32692f6b5ce1bb25c8"}
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760441 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"3803a5540326c74452858cec12bf5343a5ecb670acc1d4e7c87a18dad91b712b"}
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760462 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"e03411b7c2bf8367c968ed1adc09d9fecafd420d1122805ea765d466352e23a3"}
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760479 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"d2d8b53aa63600a513849d49d8afb7d6359ec5cfb72d80c1e09ca1dc600d4650"}
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760496 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"60b8beabf9bc2cea64f509c80af659d92f7e928ab7b8915a214c69b2dce558c8"}
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760513 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"25455a2ac49061fc7d9927f513d9b409d2c3568243e18d1a4eb9af39a224b7df"}
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760529 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"808efd19e16a0549495b7fa4df574bf88e4360937fd74bcc189cd80473a41295"}
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760545 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"2b27f565e17a3ee26335a0bdd98708332824c925381f1ed9987f74ef23fd2f1a"}
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760562 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"bf1214a2258760165a58c692fdf834c33da4c7a8a15a2275bd354ac819d9c857"}
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760583 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"8d1735bbfc7c3d66c7f4ca5e55aa86318920c68f2e40962c9c2d2008b6df984d"}
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760600 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"57c8f7a47edecb41fe3286b9e71f767917df948188cdf7bbad415d2bd7f1ab5b"}
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760618 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"f975ed7e1c1dcf64feeba9dd4dfc173ec9be8b509e8d2f868a326c611d5b7d2d"}
Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760662 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"8dcb28e72b5e3d607cb0442eacc9389954c39aee0b6eacf8e715a788f8bfb9f4"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760683 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"29879c8abe23bc57a7aa348868d9ac01b7adc18d9c27f2fd1e733adaceab54a9"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760700 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"043429a2f809c60d137c59f31d4e052f1930753c2d8c68039661e422f3f8def6"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760717 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerDied","Data":"54007053bb13b45932056c940a92e3590d13348f18ea18bf943b7365ae07e843"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760775 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"89705fd182a90dfe140ac5efc8c14b16140f0a05f824bdb1f27db7295abcee76"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760798 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62b095928168d47377d353f0ce39eebc777747ee26de9ec57d0eb0a49ec53d3a" Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760926 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"f887def1d9b97d72f25ddb564fd0ecbae06aba6b64de1338a239aa08a40c032f"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760948 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"6efe9870fc0250f38406137d08a7823a1c0babfd0a7944e64c6366445e97696d"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.760995 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"532613e49ea9e30ac1511410a4da92cfd72901d45720ba547c93524371db0317"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.761012 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"f5d16ced4f31a4b0a79823e1a1297685f8a42b35e0b0e91f27b749a47d590dcc"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.761061 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"f46251b34f5dd065effa27bf2880569964ed0fdb5993d9d07458a0798550963d"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.761210 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"4d1607d2ed493b59fcd5008ae3ef413f5adc41abf3185470578dd70106590036"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.761236 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"796303c5f2a585dc8ab37c0a21b453aa0dd8797dea11dc3eee7c72e5dad9b158"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.761385 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"3c51974ba55ce77de4db6060fda42dd205fc3b6d69ff15656f21b3a7b488ddc3"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.761404 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"5e9de81daca56e7a14e9bb6ed5c647f47dd366c571087c15f6fae5baeebccd1e"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.761421 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerDied","Data":"230861fc75c4cf91f00521990347ee4c6eaab66ee62a9284086ae7fb81bebad6"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.761440 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"522b734ad03d049a879cfa7a8145e3b81a8d9061164b95712992e2f7f7b61d1d"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.761456 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"b6f2e9aac67fef6d9cd60fe1d8d223b7762a7baf5bd08f250b7e213146055132"} Mar 18 18:00:31.764361 master-0 
kubenswrapper[30278]: I0318 18:00:31.761474 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerDied","Data":"243a7398c383ba8c402d23dcf0f7c5b93b0d9dae2f29d0c0170f8b972de06495"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.761492 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerDied","Data":"6007004024fecf1344918d5eba36f91c4644591c32375ce8f9e07fc9beb46c69"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.761517 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"989b2fdc7ef152f9acfe916ef0d60955c7235426f6fe1dd2fc891166e785e105"} Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.761568 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca7a0939c8771a3524a053fbcf05a6e4e340302ea878636e59812ce8a826b33c" Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.761590 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c257b7064ba1ee282a10d14ba9ea68bf5e64596dfd922f601f3ce37e1e2104a5" Mar 18 18:00:31.764361 master-0 kubenswrapper[30278]: I0318 18:00:31.761680 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f34d77ba93703fe1437d1652719d678aefcfb27c7b8ba0e8d8cf97f2d8fb7718" Mar 18 18:00:31.775594 master-0 kubenswrapper[30278]: E0318 18:00:31.775474 30278 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 
18:00:31.775984 master-0 kubenswrapper[30278]: I0318 18:00:31.775775 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f95a076923e4629406022fc1044a23f8f3e37ea1e3db68f6f34125f8c501b177" Mar 18 18:00:31.775984 master-0 kubenswrapper[30278]: E0318 18:00:31.775784 30278 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 18:00:31.776942 master-0 kubenswrapper[30278]: E0318 18:00:31.776890 30278 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 18:00:31.829898 master-0 kubenswrapper[30278]: I0318 18:00:31.829799 30278 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 18:00:31.832193 master-0 kubenswrapper[30278]: I0318 18:00:31.832113 30278 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 18:00:31.832193 master-0 kubenswrapper[30278]: I0318 18:00:31.832190 30278 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 18:00:31.832471 master-0 kubenswrapper[30278]: I0318 18:00:31.832208 30278 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 18:00:31.832471 master-0 kubenswrapper[30278]: I0318 18:00:31.832442 30278 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 18:00:31.844518 master-0 kubenswrapper[30278]: I0318 18:00:31.844430 30278 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 18 18:00:31.844658 master-0 kubenswrapper[30278]: I0318 18:00:31.844535 30278 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 18 18:00:31.865877 
master-0 kubenswrapper[30278]: I0318 18:00:31.865797 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:31.866134 master-0 kubenswrapper[30278]: I0318 18:00:31.865900 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3b3363934623637fdc1d37ff8b16880a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:00:31.866134 master-0 kubenswrapper[30278]: I0318 18:00:31.865954 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.866134 master-0 kubenswrapper[30278]: I0318 18:00:31.865981 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.866322 master-0 kubenswrapper[30278]: I0318 18:00:31.866125 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " 
pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.866322 master-0 kubenswrapper[30278]: I0318 18:00:31.866189 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:31.866322 master-0 kubenswrapper[30278]: I0318 18:00:31.866215 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:31.866322 master-0 kubenswrapper[30278]: I0318 18:00:31.866237 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3b3363934623637fdc1d37ff8b16880a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:00:31.866322 master-0 kubenswrapper[30278]: I0318 18:00:31.866260 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 18:00:31.866322 master-0 kubenswrapper[30278]: I0318 18:00:31.866304 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 18:00:31.866661 master-0 kubenswrapper[30278]: I0318 18:00:31.866327 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:31.866661 master-0 kubenswrapper[30278]: I0318 18:00:31.866384 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:31.866661 master-0 kubenswrapper[30278]: I0318 18:00:31.866432 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:31.866661 master-0 kubenswrapper[30278]: I0318 18:00:31.866509 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 18:00:31.866661 master-0 kubenswrapper[30278]: I0318 18:00:31.866570 30278 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 18:00:31.866661 master-0 kubenswrapper[30278]: I0318 18:00:31.866617 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.867033 master-0 kubenswrapper[30278]: I0318 18:00:31.866745 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.867033 master-0 kubenswrapper[30278]: I0318 18:00:31.866796 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:31.867033 master-0 kubenswrapper[30278]: I0318 18:00:31.866855 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:31.867033 
master-0 kubenswrapper[30278]: I0318 18:00:31.866903 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.873910 master-0 kubenswrapper[30278]: E0318 18:00:31.873853 30278 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.968012 master-0 kubenswrapper[30278]: I0318 18:00:31.967904 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.968012 master-0 kubenswrapper[30278]: I0318 18:00:31.967970 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.968012 master-0 kubenswrapper[30278]: I0318 18:00:31.967998 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:31.968012 master-0 kubenswrapper[30278]: I0318 18:00:31.968018 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: 
\"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:31.968012 master-0 kubenswrapper[30278]: I0318 18:00:31.968040 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968063 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968082 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968103 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968124 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968145 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968166 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968187 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3b3363934623637fdc1d37ff8b16880a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968209 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968228 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod 
\"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968248 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968293 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968315 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968338 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3b3363934623637fdc1d37ff8b16880a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968359 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod 
\"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968381 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968434 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968502 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968532 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968561 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: 
\"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968588 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968621 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968649 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968676 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968704 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: 
\"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968734 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968761 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968789 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968820 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3b3363934623637fdc1d37ff8b16880a\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968846 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: 
\"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968886 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968911 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968939 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968968 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.968998 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"3b3363934623637fdc1d37ff8b16880a\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:00:31.969462 master-0 kubenswrapper[30278]: I0318 18:00:31.969024 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 18:00:31.977231 master-0 kubenswrapper[30278]: I0318 18:00:31.977186 30278 apiserver.go:52] "Watching apiserver" Mar 18 18:00:31.999597 master-0 kubenswrapper[30278]: I0318 18:00:31.998297 30278 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 18:00:32.000547 master-0 kubenswrapper[30278]: I0318 18:00:32.000442 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp","openshift-marketplace/community-operators-8485d","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-service-ca/service-ca-79bc6b8d76-g5brm","assisted-installer/assisted-installer-controller-trlzv","openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x","openshift-controller-manager/controller-manager-f5755b457-f4cbl","openshift-dns/node-resolver-bwcgq","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8","openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl","openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2","openshift-kube-apiserver/installer-2-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4","openshift-kube-scheduler/installer-4-master-0","openshift-monitoring/promethe
us-operator-6c8df6d4b-fshkm","openshift-multus/multus-64tx9","openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr","openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx","openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x","openshift-etcd/installer-1-master-0","openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2","openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc","openshift-machine-config-operator/machine-config-server-mpmxb","openshift-marketplace/certified-operators-vbglp","openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg","openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6","openshift-marketplace/marketplace-operator-89ccd998f-l5gm7","openshift-marketplace/redhat-marketplace-6xmx4","openshift-network-operator/iptables-alerter-f7jp5","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz","openshift-multus/network-metrics-daemon-mfn52","openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2","openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc","openshift-cluster-version/cluster-version-operator-7d58488df-l48xm","openshift-kube-apiserver/installer-3-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt","openshift-machine-config-operator/machine-config-daemon-5l8hh","openshift-apiserver/apiserver-897b458c6-vsss9","openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh","openshift-etcd/etcd-master-0","openshift-multus/multus-additional-cni-plugins-ttbr5","openshift-network-diagnostics/network-check-source-b4bf74f6-nlqpp","openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw","openshift-kube-apiserver/installer-1-master-0","openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r","openshift-cloud-credential
-operator/cloud-credential-operator-744f9dbf77-djgn7","openshift-cluster-node-tuning-operator/tuned-r6tf4","openshift-etcd/installer-2-master-0","openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk","openshift-network-node-identity/network-node-identity-7s68k","openshift-network-operator/network-operator-7bd846bfc4-dxxbl","openshift-ovn-kubernetes/ovnkube-node-5l4qp","openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf","openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd","openshift-ingress/router-default-7dcf5569b5-m5dh4","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-network-diagnostics/network-check-target-ctd49","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz","openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p","openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg","openshift-oauth-apiserver/apiserver-688fbbb854-6n26v","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-scheduler/installer-3-master-0","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4","openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt","openshift-dns/dns-default-lf9xl","openshift-kube-controller-manager/installer-2-master-0","openshift-kube-controller-manager/installer-3-master-0","openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq","openshift-marketplace/redhat-operators-bgdql","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl","openshift-dns-operator/dns-operator-9c5679d8f-7sc7v","openshift-insights/
insights-operator-68bf6ff9d6-hm777","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg"] Mar 18 18:00:32.001190 master-0 kubenswrapper[30278]: I0318 18:00:32.000836 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-trlzv" Mar 18 18:00:32.011960 master-0 kubenswrapper[30278]: I0318 18:00:32.011891 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 18:00:32.015373 master-0 kubenswrapper[30278]: I0318 18:00:32.015307 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 18:00:32.016003 master-0 kubenswrapper[30278]: I0318 18:00:32.015834 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 18:00:32.016480 master-0 kubenswrapper[30278]: I0318 18:00:32.016422 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 18:00:32.016747 master-0 kubenswrapper[30278]: I0318 18:00:32.016669 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 18:00:32.017221 master-0 kubenswrapper[30278]: I0318 18:00:32.017155 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 18:00:32.017726 master-0 kubenswrapper[30278]: I0318 18:00:32.017673 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 18:00:32.017905 master-0 kubenswrapper[30278]: I0318 18:00:32.017865 30278 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 18:00:32.018503 master-0 kubenswrapper[30278]: I0318 18:00:32.018463 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 18:00:32.018777 master-0 kubenswrapper[30278]: I0318 18:00:32.018697 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 18:00:32.021137 master-0 kubenswrapper[30278]: I0318 18:00:32.021089 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 18:00:32.021548 master-0 kubenswrapper[30278]: I0318 18:00:32.021500 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 18:00:32.022348 master-0 kubenswrapper[30278]: I0318 18:00:32.022261 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 18 18:00:32.022546 master-0 kubenswrapper[30278]: I0318 18:00:32.022462 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 18:00:32.022882 master-0 kubenswrapper[30278]: I0318 18:00:32.022843 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 18:00:32.022882 master-0 kubenswrapper[30278]: I0318 18:00:32.022875 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 18:00:32.023025 master-0 kubenswrapper[30278]: I0318 18:00:32.022920 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 18:00:32.023025 master-0 kubenswrapper[30278]: I0318 18:00:32.022956 30278 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 18:00:32.023989 master-0 kubenswrapper[30278]: I0318 18:00:32.023924 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 18:00:32.024244 master-0 kubenswrapper[30278]: I0318 18:00:32.024206 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 18:00:32.024397 master-0 kubenswrapper[30278]: I0318 18:00:32.024321 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 18:00:32.024532 master-0 kubenswrapper[30278]: I0318 18:00:32.024329 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 18:00:32.024532 master-0 kubenswrapper[30278]: I0318 18:00:32.024359 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 18:00:32.025794 master-0 kubenswrapper[30278]: I0318 18:00:32.025742 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 18:00:32.025945 master-0 kubenswrapper[30278]: I0318 18:00:32.024347 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.026127 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.026139 30278 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.026402 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.026633 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.026635 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.026719 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.026806 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.026871 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.026927 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.026998 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.027146 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.027334 30278 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.027487 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.027545 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.027584 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.027630 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.027756 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.027787 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.027798 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.027831 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 18:00:32.028194 master-0 kubenswrapper[30278]: I0318 18:00:32.027549 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 18:00:32.028194 master-0 
kubenswrapper[30278]: I0318 18:00:32.027981 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 18:00:32.035856 master-0 kubenswrapper[30278]: I0318 18:00:32.028827 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 18:00:32.035856 master-0 kubenswrapper[30278]: I0318 18:00:32.028920 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 18:00:32.035856 master-0 kubenswrapper[30278]: I0318 18:00:32.029176 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 18:00:32.035856 master-0 kubenswrapper[30278]: I0318 18:00:32.029188 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 18:00:32.035856 master-0 kubenswrapper[30278]: I0318 18:00:32.029266 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 18:00:32.035856 master-0 kubenswrapper[30278]: I0318 18:00:32.029392 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 18:00:32.035856 master-0 kubenswrapper[30278]: I0318 18:00:32.030037 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 18:00:32.035856 master-0 kubenswrapper[30278]: I0318 18:00:32.030206 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 18:00:32.035856 master-0 kubenswrapper[30278]: I0318 18:00:32.030288 30278 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 18:00:32.035856 master-0 kubenswrapper[30278]: I0318 18:00:32.032452 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 18:00:32.035856 master-0 kubenswrapper[30278]: I0318 18:00:32.033686 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 18:00:32.035856 master-0 kubenswrapper[30278]: I0318 18:00:32.033998 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 18:00:32.035856 master-0 kubenswrapper[30278]: I0318 18:00:32.034538 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 18 18:00:32.035856 master-0 kubenswrapper[30278]: I0318 18:00:32.035523 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 18:00:32.039641 master-0 kubenswrapper[30278]: I0318 18:00:32.039593 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 18:00:32.041383 master-0 kubenswrapper[30278]: I0318 18:00:32.041327 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 18:00:32.041885 master-0 kubenswrapper[30278]: I0318 18:00:32.041601 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 18:00:32.041885 master-0 kubenswrapper[30278]: I0318 18:00:32.041651 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 18:00:32.042128 master-0 kubenswrapper[30278]: I0318 18:00:32.042095 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 18:00:32.042573 master-0 kubenswrapper[30278]: I0318 18:00:32.042543 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 18:00:32.043690 master-0 kubenswrapper[30278]: I0318 18:00:32.043654 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 18:00:32.043951 master-0 kubenswrapper[30278]: I0318 18:00:32.043918 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 18:00:32.043999 master-0 kubenswrapper[30278]: I0318 18:00:32.043947 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 18:00:32.044754 master-0 kubenswrapper[30278]: I0318 18:00:32.044713 30278 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 18:00:32.044843 master-0 kubenswrapper[30278]: I0318 18:00:32.044758 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 18:00:32.050640 master-0 kubenswrapper[30278]: I0318 18:00:32.050592 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 18:00:32.051293 master-0 kubenswrapper[30278]: I0318 18:00:32.051226 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 18:00:32.052665 master-0 kubenswrapper[30278]: I0318 18:00:32.052636 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 18 18:00:32.052717 master-0 kubenswrapper[30278]: I0318 18:00:32.052651 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 18:00:32.052769 master-0 kubenswrapper[30278]: I0318 18:00:32.052740 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 18:00:32.052994 master-0 kubenswrapper[30278]: I0318 18:00:32.052973 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 18:00:32.053038 master-0 kubenswrapper[30278]: I0318 18:00:32.052998 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 18:00:32.053069 master-0 kubenswrapper[30278]: I0318 18:00:32.053032 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 
18:00:32.053127 master-0 kubenswrapper[30278]: I0318 18:00:32.053106 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 18:00:32.053242 master-0 kubenswrapper[30278]: I0318 18:00:32.053220 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 18:00:32.053312 master-0 kubenswrapper[30278]: I0318 18:00:32.053301 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 18:00:32.053437 master-0 kubenswrapper[30278]: I0318 18:00:32.053409 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 18:00:32.053581 master-0 kubenswrapper[30278]: I0318 18:00:32.053560 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 18:00:32.053635 master-0 kubenswrapper[30278]: I0318 18:00:32.053605 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 18:00:32.053670 master-0 kubenswrapper[30278]: I0318 18:00:32.053221 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 18:00:32.053670 master-0 kubenswrapper[30278]: I0318 18:00:32.053662 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 18:00:32.053884 master-0 kubenswrapper[30278]: I0318 18:00:32.053863 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 18:00:32.054117 master-0 kubenswrapper[30278]: I0318 18:00:32.054097 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 18:00:32.054341 master-0 
kubenswrapper[30278]: I0318 18:00:32.054314 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 18 18:00:32.054494 master-0 kubenswrapper[30278]: I0318 18:00:32.053112 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 18 18:00:32.054535 master-0 kubenswrapper[30278]: I0318 18:00:32.054315 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 18 18:00:32.054608 master-0 kubenswrapper[30278]: I0318 18:00:32.054510 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 18 18:00:32.055288 master-0 kubenswrapper[30278]: I0318 18:00:32.055232 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 18:00:32.059997 master-0 kubenswrapper[30278]: I0318 18:00:32.059951 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 18 18:00:32.060835 master-0 kubenswrapper[30278]: I0318 18:00:32.060792 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 18 18:00:32.061060 master-0 kubenswrapper[30278]: I0318 18:00:32.061025 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 18 18:00:32.061148 master-0 kubenswrapper[30278]: I0318 18:00:32.061122 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 18 18:00:32.061206 master-0 kubenswrapper[30278]: I0318 18:00:32.061164 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 18 18:00:32.061259 master-0 kubenswrapper[30278]: I0318 18:00:32.061244 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 18 18:00:32.062954 master-0 kubenswrapper[30278]: I0318 18:00:32.062916 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 18 18:00:32.063669 master-0 kubenswrapper[30278]: I0318 18:00:32.063641 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 18 18:00:32.068641 master-0 kubenswrapper[30278]: I0318 18:00:32.068604 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 18 18:00:32.069642 master-0 kubenswrapper[30278]: I0318 18:00:32.068692 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 18:00:32.069642 master-0 kubenswrapper[30278]: I0318 18:00:32.068736 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tvgq\" (UniqueName: \"kubernetes.io/projected/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-kube-api-access-2tvgq\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j"
Mar 18 18:00:32.069642 master-0 kubenswrapper[30278]: I0318 18:00:32.068937 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b424d6c-7440-4c98-ac19-2d0642c696fd-config\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279"
Mar 18 18:00:32.069642 master-0 kubenswrapper[30278]: I0318 18:00:32.069107 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c087ce06-a16b-41f4-ba93-8fccdee09003-serving-cert\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf"
Mar 18 18:00:32.069642 master-0 kubenswrapper[30278]: I0318 18:00:32.069157 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v"
Mar 18 18:00:32.069642 master-0 kubenswrapper[30278]: I0318 18:00:32.069188 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg"
Mar 18 18:00:32.069642 master-0 kubenswrapper[30278]: I0318 18:00:32.069214 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-metrics-tls\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl"
Mar 18 18:00:32.069642 master-0 kubenswrapper[30278]: I0318 18:00:32.069215 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 18:00:32.069642 master-0 kubenswrapper[30278]: I0318 18:00:32.069397 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"
Mar 18 18:00:32.069642 master-0 kubenswrapper[30278]: I0318 18:00:32.069468 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b424d6c-7440-4c98-ac19-2d0642c696fd-config\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279"
Mar 18 18:00:32.069642 master-0 kubenswrapper[30278]: I0318 18:00:32.069542 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg"
Mar 18 18:00:32.069642 master-0 kubenswrapper[30278]: I0318 18:00:32.069569 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c087ce06-a16b-41f4-ba93-8fccdee09003-serving-cert\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf"
Mar 18 18:00:32.069642 master-0 kubenswrapper[30278]: I0318 18:00:32.069622 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1352cc7-4099-44c5-9c31-8259fb783bc7-metrics-tls\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v"
Mar 18 18:00:32.069642 master-0 kubenswrapper[30278]: I0318 18:00:32.069648 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr"
Mar 18 18:00:32.070529 master-0 kubenswrapper[30278]: I0318 18:00:32.070512 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-metrics-tls\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl"
Mar 18 18:00:32.070633 master-0 kubenswrapper[30278]: I0318 18:00:32.070606 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pp5f\" (UniqueName: \"kubernetes.io/projected/b1352cc7-4099-44c5-9c31-8259fb783bc7-kube-api-access-9pp5f\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v"
Mar 18 18:00:32.070676 master-0 kubenswrapper[30278]: I0318 18:00:32.070642 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-bound-sa-token\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"
Mar 18 18:00:32.070738 master-0 kubenswrapper[30278]: I0318 18:00:32.070711 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"
Mar 18 18:00:32.070738 master-0 kubenswrapper[30278]: I0318 18:00:32.070736 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-client\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"
Mar 18 18:00:32.070841 master-0 kubenswrapper[30278]: I0318 18:00:32.070755 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8"
Mar 18 18:00:32.070972 master-0 kubenswrapper[30278]: I0318 18:00:32.070928 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"
Mar 18 18:00:32.070972 master-0 kubenswrapper[30278]: I0318 18:00:32.070921 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwlxb\" (UniqueName: \"kubernetes.io/projected/37b3753f-bf4f-4a9e-a4a8-d58296bada79-kube-api-access-zwlxb\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"
Mar 18 18:00:32.071032 master-0 kubenswrapper[30278]: I0318 18:00:32.070981 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a3a6c2c-78e7-41f3-acff-20173cbc012a-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4"
Mar 18 18:00:32.071068 master-0 kubenswrapper[30278]: I0318 18:00:32.071028 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f26e239-2988-4faa-bc1d-24b15b95b7f1-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8"
Mar 18 18:00:32.071184 master-0 kubenswrapper[30278]: I0318 18:00:32.071151 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf82n\" (UniqueName: \"kubernetes.io/projected/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-kube-api-access-nf82n\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg"
Mar 18 18:00:32.071238 master-0 kubenswrapper[30278]: I0318 18:00:32.071203 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b424d6c-7440-4c98-ac19-2d0642c696fd-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279"
Mar 18 18:00:32.071238 master-0 kubenswrapper[30278]: I0318 18:00:32.071212 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 18:00:32.071500 master-0 kubenswrapper[30278]: I0318 18:00:32.071451 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e73f2834-c56c-4cef-ac3c-2317e9a4324c-srv-cert\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr"
Mar 18 18:00:32.071599 master-0 kubenswrapper[30278]: I0318 18:00:32.071475 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-789k6\" (UniqueName: \"kubernetes.io/projected/c087ce06-a16b-41f4-ba93-8fccdee09003-kube-api-access-789k6\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf"
Mar 18 18:00:32.071706 master-0 kubenswrapper[30278]: I0318 18:00:32.071214 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p"
Mar 18 18:00:32.071759 master-0 kubenswrapper[30278]: I0318 18:00:32.071716 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 18 18:00:32.071835 master-0 kubenswrapper[30278]: I0318 18:00:32.071694 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cb522b02-0b93-4711-9041-566daa06b95a-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh"
Mar 18 18:00:32.071929 master-0 kubenswrapper[30278]: I0318 18:00:32.071892 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/cb522b02-0b93-4711-9041-566daa06b95a-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh"
Mar 18 18:00:32.072015 master-0 kubenswrapper[30278]: I0318 18:00:32.071983 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"
Mar 18 18:00:32.072106 master-0 kubenswrapper[30278]: I0318 18:00:32.072080 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26575d68-0488-4dfa-a5d0-5016e481dba6-config\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2"
Mar 18 18:00:32.072209 master-0 kubenswrapper[30278]: I0318 18:00:32.072191 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c6694a8-ccd0-491b-9f21-215450f6ce67-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 18:00:32.072312 master-0 kubenswrapper[30278]: I0318 18:00:32.072146 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-client\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"
Mar 18 18:00:32.072383 master-0 kubenswrapper[30278]: I0318 18:00:32.072204 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26575d68-0488-4dfa-a5d0-5016e481dba6-config\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2"
Mar 18 18:00:32.072383 master-0 kubenswrapper[30278]: I0318 18:00:32.072248 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a3a6c2c-78e7-41f3-acff-20173cbc012a-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4"
Mar 18 18:00:32.072525 master-0 kubenswrapper[30278]: I0318 18:00:32.072497 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f26e239-2988-4faa-bc1d-24b15b95b7f1-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8"
Mar 18 18:00:32.072597 master-0 kubenswrapper[30278]: I0318 18:00:32.072505 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7"
Mar 18 18:00:32.072705 master-0 kubenswrapper[30278]: I0318 18:00:32.072683 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg"
Mar 18 18:00:32.072807 master-0 kubenswrapper[30278]: I0318 18:00:32.072789 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sclm5\" (UniqueName: \"kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-kube-api-access-sclm5\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"
Mar 18 18:00:32.073020 master-0 kubenswrapper[30278]: I0318 18:00:32.073000 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg"
Mar 18 18:00:32.073141 master-0 kubenswrapper[30278]: I0318 18:00:32.073122 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-images\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"
Mar 18 18:00:32.073256 master-0 kubenswrapper[30278]: I0318 18:00:32.073238 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-756j8\" (UniqueName: \"kubernetes.io/projected/ce5831a6-5a8d-4cda-9299-5d86437bcab2-kube-api-access-756j8\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7"
Mar 18 18:00:32.073394 master-0 kubenswrapper[30278]: I0318 18:00:32.073363 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg"
Mar 18 18:00:32.073394 master-0 kubenswrapper[30278]: I0318 18:00:32.072798 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7"
Mar 18 18:00:32.073492 master-0 kubenswrapper[30278]: I0318 18:00:32.073437 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-images\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"
Mar 18 18:00:32.073492 master-0 kubenswrapper[30278]: I0318 18:00:32.073459 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"
Mar 18 18:00:32.073599 master-0 kubenswrapper[30278]: I0318 18:00:32.073374 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e64a377-f497-4416-8f22-d5c7f52e0b65-trusted-ca\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"
Mar 18 18:00:32.073682 master-0 kubenswrapper[30278]: I0318 18:00:32.072966 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg"
Mar 18 18:00:32.073764 master-0 kubenswrapper[30278]: I0318 18:00:32.073747 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"
Mar 18 18:00:32.073863 master-0 kubenswrapper[30278]: I0318 18:00:32.073846 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnknt\" (UniqueName: \"kubernetes.io/projected/0100a259-1358-45e8-8191-4e1f9a14ec89-kube-api-access-tnknt\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"
Mar 18 18:00:32.073973 master-0 kubenswrapper[30278]: I0318 18:00:32.073933 30278 scope.go:117] "RemoveContainer" containerID="f887def1d9b97d72f25ddb564fd0ecbae06aba6b64de1338a239aa08a40c032f"
Mar 18 18:00:32.074073 master-0 kubenswrapper[30278]: I0318 18:00:32.073943 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-config\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf"
Mar 18 18:00:32.074177 master-0 kubenswrapper[30278]: I0318 18:00:32.074148 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-config\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf"
Mar 18 18:00:32.074177 master-0 kubenswrapper[30278]: I0318 18:00:32.073941 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"
Mar 18 18:00:32.074328 master-0 kubenswrapper[30278]: I0318 18:00:32.073753 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7e64a377-f497-4416-8f22-d5c7f52e0b65-trusted-ca\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6"
Mar 18 18:00:32.074426 master-0 kubenswrapper[30278]: I0318 18:00:32.074403 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26575d68-0488-4dfa-a5d0-5016e481dba6-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2"
Mar 18 18:00:32.074530 master-0 kubenswrapper[30278]: I0318 18:00:32.074513 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg"
Mar 18 18:00:32.074647 master-0 kubenswrapper[30278]: I0318 18:00:32.074629 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 18:00:32.074797 master-0 kubenswrapper[30278]: I0318 18:00:32.074756 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-host-etc-kube\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl"
Mar 18 18:00:32.074899 master-0 kubenswrapper[30278]: I0318 18:00:32.074865 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7c6694a8-ccd0-491b-9f21-215450f6ce67-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 18:00:32.074899 master-0 kubenswrapper[30278]: I0318 18:00:32.074879 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg"
Mar 18 18:00:32.075001 master-0 kubenswrapper[30278]: I0318 18:00:32.074902 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26575d68-0488-4dfa-a5d0-5016e481dba6-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2"
Mar 18 18:00:32.075570 master-0 kubenswrapper[30278]: I0318 18:00:32.075239 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a3a6c2c-78e7-41f3-acff-20173cbc012a-config\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4"
Mar 18 18:00:32.075570 master-0 kubenswrapper[30278]: I0318 18:00:32.075330 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"
Mar 18 18:00:32.075651 master-0 kubenswrapper[30278]: I0318 18:00:32.075615 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 18 18:00:32.075734 master-0 kubenswrapper[30278]: I0318 18:00:32.075714 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a3a6c2c-78e7-41f3-acff-20173cbc012a-config\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4"
Mar 18 18:00:32.075844 master-0 kubenswrapper[30278]: I0318 18:00:32.075826 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sl7p\" (UniqueName: \"kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-kube-api-access-5sl7p\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8"
Mar 18 18:00:32.075945 master-0 kubenswrapper[30278]: I0318 18:00:32.075928 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-config\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"
Mar 18 18:00:32.076052 master-0 kubenswrapper[30278]: I0318 18:00:32.076031 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8k5q\" (UniqueName: \"kubernetes.io/projected/99e215da-759d-4fff-af65-0fb64245fbd0-kube-api-access-n8k5q\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt"
Mar 18 18:00:32.076155 master-0 kubenswrapper[30278]: I0318 18:00:32.076138 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd"
Mar 18 18:00:32.076326 master-0 kubenswrapper[30278]: I0318 18:00:32.076306 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-config\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd"
Mar 18 18:00:32.076483 master-0 kubenswrapper[30278]: I0318 18:00:32.076464 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b424d6c-7440-4c98-ac19-2d0642c696fd-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279"
Mar 18 18:00:32.076666 master-0 kubenswrapper[30278]: I0318 18:00:32.076634 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-config\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd"
Mar 18 18:00:32.076666 master-0 kubenswrapper[30278]: I0318 18:00:32.076649 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd"
Mar 18 18:00:32.076784 master-0 kubenswrapper[30278]: I0318 18:00:32.076655 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk59q\" (UniqueName: \"kubernetes.io/projected/cb522b02-0b93-4711-9041-566daa06b95a-kube-api-access-fk59q\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh"
Mar 18 18:00:32.076875 master-0 kubenswrapper[30278]: I0318 18:00:32.076850 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-config\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"
Mar 18 18:00:32.076932 master-0 kubenswrapper[30278]: I0318 18:00:32.076804 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b424d6c-7440-4c98-ac19-2d0642c696fd-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279"
Mar 18 18:00:32.077016 master-0 kubenswrapper[30278]: I0318 18:00:32.076997 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrdqg\" (UniqueName: \"kubernetes.io/projected/7c6694a8-ccd0-491b-9f21-215450f6ce67-kube-api-access-mrdqg\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn"
Mar 18 18:00:32.077138 master-0 kubenswrapper[30278]: I0318 18:00:32.077118 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-config\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j"
Mar 18 18:00:32.077245 master-0 kubenswrapper[30278]: I0318 18:00:32.077226 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-ca\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"
Mar 18 18:00:32.077412 master-0 kubenswrapper[30278]: I0318 18:00:32.077383 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-config\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j"
Mar 18 18:00:32.077506 master-0 kubenswrapper[30278]: I0318 18:00:32.077386 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a3a6c2c-78e7-41f3-acff-20173cbc012a-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4"
Mar 18 18:00:32.077621 master-0 kubenswrapper[30278]: I0318 18:00:32.077603 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4"
Mar 18 18:00:32.077712 master-0 kubenswrapper[30278]: I0318 18:00:32.077696 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm8jj\" (UniqueName: \"kubernetes.io/projected/d26d4515-391e-41a5-8c82-1b2b8a375662-kube-api-access-bm8jj\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4"
Mar 18 18:00:32.077790 master-0 kubenswrapper[30278]: I0318 18:00:32.077521 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0100a259-1358-45e8-8191-4e1f9a14ec89-etcd-ca\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x"
Mar 18 18:00:32.077876 master-0 kubenswrapper[30278]: I0318 18:00:32.077859 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl"
Mar 18 18:00:32.078004 master-0 kubenswrapper[30278]: I0318 18:00:32.077988 30278
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-serving-cert\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 18:00:32.078159 master-0 kubenswrapper[30278]: I0318 18:00:32.078125 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/37b3753f-bf4f-4a9e-a4a8-d58296bada79-cert\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 18:00:32.078216 master-0 kubenswrapper[30278]: I0318 18:00:32.078019 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d26d4515-391e-41a5-8c82-1b2b8a375662-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 18:00:32.078308 master-0 kubenswrapper[30278]: I0318 18:00:32.078269 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb522b02-0b93-4711-9041-566daa06b95a-serving-cert\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 18:00:32.078479 master-0 kubenswrapper[30278]: I0318 18:00:32.078460 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26575d68-0488-4dfa-a5d0-5016e481dba6-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: 
\"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" Mar 18 18:00:32.078593 master-0 kubenswrapper[30278]: I0318 18:00:32.078378 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0100a259-1358-45e8-8191-4e1f9a14ec89-serving-cert\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 18:00:32.078647 master-0 kubenswrapper[30278]: I0318 18:00:32.078505 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb522b02-0b93-4711-9041-566daa06b95a-serving-cert\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 18:00:32.078647 master-0 kubenswrapper[30278]: I0318 18:00:32.078575 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 18:00:32.078647 master-0 kubenswrapper[30278]: I0318 18:00:32.078640 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pqww\" (UniqueName: \"kubernetes.io/projected/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-kube-api-access-2pqww\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 18:00:32.078799 master-0 kubenswrapper[30278]: I0318 18:00:32.078665 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 18:00:32.078799 master-0 kubenswrapper[30278]: I0318 18:00:32.078688 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/99e215da-759d-4fff-af65-0fb64245fbd0-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 18:00:32.078799 master-0 kubenswrapper[30278]: I0318 18:00:32.078709 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/99e215da-759d-4fff-af65-0fb64245fbd0-operand-assets\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 18:00:32.078906 master-0 kubenswrapper[30278]: I0318 18:00:32.078842 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 18:00:32.078960 master-0 kubenswrapper[30278]: I0318 18:00:32.078902 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/99e215da-759d-4fff-af65-0fb64245fbd0-operand-assets\") pod 
\"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 18:00:32.078960 master-0 kubenswrapper[30278]: I0318 18:00:32.078909 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clm4b\" (UniqueName: \"kubernetes.io/projected/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-kube-api-access-clm4b\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 18:00:32.079221 master-0 kubenswrapper[30278]: I0318 18:00:32.079199 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce5831a6-5a8d-4cda-9299-5d86437bcab2-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 18:00:32.079339 master-0 kubenswrapper[30278]: I0318 18:00:32.079305 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 18:00:32.079405 master-0 kubenswrapper[30278]: I0318 18:00:32.079348 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-config\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 18:00:32.079405 master-0 kubenswrapper[30278]: I0318 18:00:32.079351 30278 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 18:00:32.079493 master-0 kubenswrapper[30278]: I0318 18:00:32.079383 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5tw2\" (UniqueName: \"kubernetes.io/projected/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-kube-api-access-l5tw2\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 18:00:32.079540 master-0 kubenswrapper[30278]: I0318 18:00:32.079500 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" Mar 18 18:00:32.079614 master-0 kubenswrapper[30278]: I0318 18:00:32.079582 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 18:00:32.079659 master-0 kubenswrapper[30278]: I0318 18:00:32.079585 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 
18:00:32.079688 master-0 kubenswrapper[30278]: I0318 18:00:32.079657 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7e64a377-f497-4416-8f22-d5c7f52e0b65-metrics-tls\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 18:00:32.079688 master-0 kubenswrapper[30278]: I0318 18:00:32.079629 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b3753f-bf4f-4a9e-a4a8-d58296bada79-config\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 18:00:32.079757 master-0 kubenswrapper[30278]: I0318 18:00:32.079702 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwps9\" (UniqueName: \"kubernetes.io/projected/e73f2834-c56c-4cef-ac3c-2317e9a4324c-kube-api-access-qwps9\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 18:00:32.079948 master-0 kubenswrapper[30278]: I0318 18:00:32.079927 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f26e239-2988-4faa-bc1d-24b15b95b7f1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 18:00:32.080173 master-0 kubenswrapper[30278]: I0318 18:00:32.080151 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-serving-cert\") 
pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j" Mar 18 18:00:32.080224 master-0 kubenswrapper[30278]: I0318 18:00:32.080188 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c087ce06-a16b-41f4-ba93-8fccdee09003-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 18:00:32.080379 master-0 kubenswrapper[30278]: I0318 18:00:32.080353 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 18:00:32.080594 master-0 kubenswrapper[30278]: I0318 18:00:32.080570 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 18:00:32.080823 master-0 kubenswrapper[30278]: I0318 18:00:32.080740 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/99e215da-759d-4fff-af65-0fb64245fbd0-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 18:00:32.080918 master-0 kubenswrapper[30278]: I0318 18:00:32.080823 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 18:00:32.080918 master-0 kubenswrapper[30278]: I0318 18:00:32.080876 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 18:00:32.080918 master-0 kubenswrapper[30278]: I0318 18:00:32.080641 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 18:00:32.082823 master-0 kubenswrapper[30278]: I0318 18:00:32.082800 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c6694a8-ccd0-491b-9f21-215450f6ce67-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 18:00:32.083467 master-0 kubenswrapper[30278]: I0318 18:00:32.083444 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 18:00:32.103423 master-0 kubenswrapper[30278]: I0318 18:00:32.103374 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 18:00:32.123383 master-0 kubenswrapper[30278]: I0318 18:00:32.123252 30278 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 18 18:00:32.123515 master-0 kubenswrapper[30278]: I0318 18:00:32.123441 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 18:00:32.142457 master-0 kubenswrapper[30278]: I0318 18:00:32.142413 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 18:00:32.162689 master-0 kubenswrapper[30278]: I0318 18:00:32.162624 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 18:00:32.180920 master-0 kubenswrapper[30278]: I0318 18:00:32.180846 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/56cde2f7-1742-45d6-aa22-8270cfb424a7-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 18:00:32.180920 master-0 kubenswrapper[30278]: I0318 18:00:32.180897 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-etcd-client\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:32.180920 master-0 kubenswrapper[30278]: I0318 18:00:32.180930 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 18:00:32.181304 master-0 kubenswrapper[30278]: I0318 18:00:32.181063 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysctl-conf\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:32.181304 master-0 kubenswrapper[30278]: I0318 18:00:32.181175 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76j8w\" (UniqueName: \"kubernetes.io/projected/9875ed82-813c-483d-8471-8f9b74b774ee-kube-api-access-76j8w\") pod \"network-node-identity-7s68k\" (UID: 
\"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 18:00:32.181304 master-0 kubenswrapper[30278]: I0318 18:00:32.181218 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-images\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 18:00:32.181490 master-0 kubenswrapper[30278]: I0318 18:00:32.181434 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 18:00:32.181490 master-0 kubenswrapper[30278]: I0318 18:00:32.181479 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 18:00:32.181604 master-0 kubenswrapper[30278]: I0318 18:00:32.181510 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4460d3d3-c55f-4f1c-a623-e3feccf937bb-catalog-content\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 18:00:32.181604 master-0 kubenswrapper[30278]: I0318 18:00:32.181538 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9lwsm\" (UniqueName: \"kubernetes.io/projected/994fff04-c1d7-4f10-8d4b-6b49a6934829-kube-api-access-9lwsm\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.181604 master-0 kubenswrapper[30278]: I0318 18:00:32.181577 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 18:00:32.181604 master-0 kubenswrapper[30278]: I0318 18:00:32.181602 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-socket-dir-parent\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.181795 master-0 kubenswrapper[30278]: I0318 18:00:32.181628 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf2qx\" (UniqueName: \"kubernetes.io/projected/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-kube-api-access-rf2qx\") pod \"service-ca-79bc6b8d76-g5brm\" (UID: \"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab\") " pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 18:00:32.181795 master-0 kubenswrapper[30278]: I0318 18:00:32.181655 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/56cde2f7-1742-45d6-aa22-8270cfb424a7-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: 
\"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 18:00:32.181795 master-0 kubenswrapper[30278]: I0318 18:00:32.181723 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 18:00:32.181795 master-0 kubenswrapper[30278]: I0318 18:00:32.181758 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-etcd-serving-ca\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:32.181795 master-0 kubenswrapper[30278]: I0318 18:00:32.181784 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcf459dc-bd30-4143-b5c4-60fd01b46548-proxy-tls\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 18:00:32.181986 master-0 kubenswrapper[30278]: I0318 18:00:32.181811 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04cef0bd-f365-4bf6-864a-1895995015d6-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 18:00:32.181986 master-0 kubenswrapper[30278]: I0318 18:00:32.181835 30278 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-systemd\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:32.181986 master-0 kubenswrapper[30278]: I0318 18:00:32.181862 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427e5ce9-f4b3-4f12-bb77-2b13775aa334-utilities\") pod \"redhat-marketplace-6xmx4\" (UID: \"427e5ce9-f4b3-4f12-bb77-2b13775aa334\") " pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 18:00:32.181986 master-0 kubenswrapper[30278]: I0318 18:00:32.181877 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 18:00:32.181986 master-0 kubenswrapper[30278]: I0318 18:00:32.181885 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-apiservice-cert\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt" Mar 18 18:00:32.182191 master-0 kubenswrapper[30278]: I0318 18:00:32.182047 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-netns\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.182191 
master-0 kubenswrapper[30278]: I0318 18:00:32.182080 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-kubelet\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.182191 master-0 kubenswrapper[30278]: I0318 18:00:32.182086 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4460d3d3-c55f-4f1c-a623-e3feccf937bb-catalog-content\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql"
Mar 18 18:00:32.182495 master-0 kubenswrapper[30278]: I0318 18:00:32.182439 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-serving-cert\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9"
Mar 18 18:00:32.182495 master-0 kubenswrapper[30278]: I0318 18:00:32.182470 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 18 18:00:32.182632 master-0 kubenswrapper[30278]: I0318 18:00:32.182498 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427e5ce9-f4b3-4f12-bb77-2b13775aa334-utilities\") pod \"redhat-marketplace-6xmx4\" (UID: \"427e5ce9-f4b3-4f12-bb77-2b13775aa334\") " pod="openshift-marketplace/redhat-marketplace-6xmx4"
Mar 18 18:00:32.182632 master-0 kubenswrapper[30278]: I0318 18:00:32.182547 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysconfig\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.182743 master-0 kubenswrapper[30278]: I0318 18:00:32.182672 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-encryption-config\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9"
Mar 18 18:00:32.182743 master-0 kubenswrapper[30278]: I0318 18:00:32.182740 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t92bz\" (UniqueName: \"kubernetes.io/projected/e9e04572-1425-440e-9869-6deef05e13e3-kube-api-access-t92bz\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz"
Mar 18 18:00:32.182921 master-0 kubenswrapper[30278]: I0318 18:00:32.182866 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c38c5f03-a753-49f4-ab06-33e75a03bd45-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-d4bmc\" (UID: \"c38c5f03-a753-49f4-ab06-33e75a03bd45\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc"
Mar 18 18:00:32.182988 master-0 kubenswrapper[30278]: I0318 18:00:32.182972 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dt8f\" (UniqueName: \"kubernetes.io/projected/59407fdf-b1e9-4992-a3c8-54b4e26f496c-kube-api-access-9dt8f\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl"
Mar 18 18:00:32.183104 master-0 kubenswrapper[30278]: I0318 18:00:32.183053 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-config\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9"
Mar 18 18:00:32.183210 master-0 kubenswrapper[30278]: I0318 18:00:32.183096 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-var-lock\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 18:00:32.183315 master-0 kubenswrapper[30278]: I0318 18:00:32.183244 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-audit-dir\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v"
Mar 18 18:00:32.183315 master-0 kubenswrapper[30278]: I0318 18:00:32.183304 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-kubelet\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.183452 master-0 kubenswrapper[30278]: I0318 18:00:32.183336 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-k8s-cni-cncf-io\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.183452 master-0 kubenswrapper[30278]: I0318 18:00:32.183374 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf476\" (UniqueName: \"kubernetes.io/projected/de189d27-4c60-49f1-9119-d1fde5c37b1e-kube-api-access-tf476\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"
Mar 18 18:00:32.183620 master-0 kubenswrapper[30278]: I0318 18:00:32.183507 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-system-cni-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 18:00:32.183620 master-0 kubenswrapper[30278]: I0318 18:00:32.183542 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c355c750-ae2f-49fa-9a16-8fb4f688853e-config\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r"
Mar 18 18:00:32.183750 master-0 kubenswrapper[30278]: I0318 18:00:32.183641 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-multus-certs\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.183750 master-0 kubenswrapper[30278]: I0318 18:00:32.183715 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-daemon-config\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.183873 master-0 kubenswrapper[30278]: I0318 18:00:32.183773 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzhsq\" (UniqueName: \"kubernetes.io/projected/e7f76afa-4b23-421c-8451-46323813f06e-kube-api-access-gzhsq\") pod \"multus-admission-controller-58c9f8fc64-9c6bk\" (UID: \"e7f76afa-4b23-421c-8451-46323813f06e\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk"
Mar 18 18:00:32.183873 master-0 kubenswrapper[30278]: I0318 18:00:32.183804 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1db0a246-ca43-4e7c-b09e-e80218ae99b1-serving-cert\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl"
Mar 18 18:00:32.184079 master-0 kubenswrapper[30278]: I0318 18:00:32.184033 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-daemon-config\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.184153 master-0 kubenswrapper[30278]: I0318 18:00:32.184128 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd9sc\" (UniqueName: \"kubernetes.io/projected/b3385316-45f0-46c5-ac82-683168db5878-kube-api-access-wd9sc\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " pod="openshift-machine-config-operator/machine-config-server-mpmxb"
Mar 18 18:00:32.184249 master-0 kubenswrapper[30278]: I0318 18:00:32.184226 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c355c750-ae2f-49fa-9a16-8fb4f688853e-config\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r"
Mar 18 18:00:32.185117 master-0 kubenswrapper[30278]: I0318 18:00:32.184648 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njx6n\" (UniqueName: \"kubernetes.io/projected/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-kube-api-access-njx6n\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl"
Mar 18 18:00:32.185117 master-0 kubenswrapper[30278]: I0318 18:00:32.184696 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.185117 master-0 kubenswrapper[30278]: I0318 18:00:32.184855 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-cnibin\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 18:00:32.185117 master-0 kubenswrapper[30278]: I0318 18:00:32.184940 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd868\" (UniqueName: \"kubernetes.io/projected/1d969530-c138-4fb7-9bfe-0825be66c009-kube-api-access-cd868\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5"
Mar 18 18:00:32.185117 master-0 kubenswrapper[30278]: I0318 18:00:32.184974 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 18:00:32.185117 master-0 kubenswrapper[30278]: I0318 18:00:32.185006 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-config\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 18:00:32.185117 master-0 kubenswrapper[30278]: I0318 18:00:32.185121 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-sys\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.185533 master-0 kubenswrapper[30278]: I0318 18:00:32.185166 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlhls\" (UniqueName: \"kubernetes.io/projected/04cef0bd-f365-4bf6-864a-1895995015d6-kube-api-access-qlhls\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"
Mar 18 18:00:32.185533 master-0 kubenswrapper[30278]: I0318 18:00:32.185191 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-ovnkube-identity-cm\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k"
Mar 18 18:00:32.185533 master-0 kubenswrapper[30278]: I0318 18:00:32.185217 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-client-ca\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl"
Mar 18 18:00:32.185533 master-0 kubenswrapper[30278]: I0318 18:00:32.185243 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-config\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl"
Mar 18 18:00:32.185533 master-0 kubenswrapper[30278]: I0318 18:00:32.185266 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c3267271-e0c5-45d6-980c-d78e4f9eef35-proxy-tls\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps"
Mar 18 18:00:32.185533 master-0 kubenswrapper[30278]: I0318 18:00:32.185309 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6c68\" (UniqueName: \"kubernetes.io/projected/c57f282a-829b-41b2-827a-f4bc598245a2-kube-api-access-d6c68\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4"
Mar 18 18:00:32.185993 master-0 kubenswrapper[30278]: I0318 18:00:32.185584 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.185993 master-0 kubenswrapper[30278]: I0318 18:00:32.185629 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkcx9\" (UniqueName: \"kubernetes.io/projected/7d39d93e-9be3-47e1-a44e-be2d18b55446-kube-api-access-vkcx9\") pod \"csi-snapshot-controller-64854d9cff-vpjmp\" (UID: \"7d39d93e-9be3-47e1-a44e-be2d18b55446\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp"
Mar 18 18:00:32.185993 master-0 kubenswrapper[30278]: I0318 18:00:32.185672 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88hkw\" (UniqueName: \"kubernetes.io/projected/89e6c3d6-7bd5-4df6-90db-3a349f644afb-kube-api-access-88hkw\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq"
Mar 18 18:00:32.185993 master-0 kubenswrapper[30278]: I0318 18:00:32.185917 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 18:00:32.186210 master-0 kubenswrapper[30278]: I0318 18:00:32.186019 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-bin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.186296 master-0 kubenswrapper[30278]: I0318 18:00:32.186169 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl"
Mar 18 18:00:32.186373 master-0 kubenswrapper[30278]: I0318 18:00:32.186324 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-cnibin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.186373 master-0 kubenswrapper[30278]: I0318 18:00:32.186355 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-systemd-units\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.186483 master-0 kubenswrapper[30278]: I0318 18:00:32.186386 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/56cde2f7-1742-45d6-aa22-8270cfb424a7-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv"
Mar 18 18:00:32.186626 master-0 kubenswrapper[30278]: I0318 18:00:32.186589 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx"
Mar 18 18:00:32.186702 master-0 kubenswrapper[30278]: I0318 18:00:32.186640 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-service-ca\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 18:00:32.186702 master-0 kubenswrapper[30278]: I0318 18:00:32.186672 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/822080a5-2926-4a51-866d-86bb0b437da2-tmp\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.186856 master-0 kubenswrapper[30278]: I0318 18:00:32.186739 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 18:00:32.186919 master-0 kubenswrapper[30278]: I0318 18:00:32.186893 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-trusted-ca-bundle\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v"
Mar 18 18:00:32.186977 master-0 kubenswrapper[30278]: I0318 18:00:32.186923 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/822080a5-2926-4a51-866d-86bb0b437da2-tmp\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.186977 master-0 kubenswrapper[30278]: I0318 18:00:32.186938 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsj86\" (UniqueName: \"kubernetes.io/projected/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-kube-api-access-rsj86\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v"
Mar 18 18:00:32.186977 master-0 kubenswrapper[30278]: I0318 18:00:32.186949 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx"
Mar 18 18:00:32.186977 master-0 kubenswrapper[30278]: I0318 18:00:32.186976 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts9b9\" (UniqueName: \"kubernetes.io/projected/fea7b899-fde4-4463-9520-4d433a8ebe21-kube-api-access-ts9b9\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 18:00:32.187145 master-0 kubenswrapper[30278]: I0318 18:00:32.187121 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xvzx\" (UniqueName: \"kubernetes.io/projected/a94f7bff-ad61-4c53-a8eb-000a13f26971-kube-api-access-5xvzx\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"
Mar 18 18:00:32.187206 master-0 kubenswrapper[30278]: I0318 18:00:32.187163 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl"
Mar 18 18:00:32.187344 master-0 kubenswrapper[30278]: I0318 18:00:32.187315 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-proxy-ca-bundles\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl"
Mar 18 18:00:32.187422 master-0 kubenswrapper[30278]: I0318 18:00:32.187365 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/89e6c3d6-7bd5-4df6-90db-3a349f644afb-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq"
Mar 18 18:00:32.187422 master-0 kubenswrapper[30278]: I0318 18:00:32.187401 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/efbcb147-d077-4749-9289-1682daccb657-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c"
Mar 18 18:00:32.187517 master-0 kubenswrapper[30278]: I0318 18:00:32.187428 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgnz6\" (UniqueName: \"kubernetes.io/projected/5a4f94f3-d63a-4869-b723-ae9637610b4b-kube-api-access-hgnz6\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52"
Mar 18 18:00:32.187517 master-0 kubenswrapper[30278]: I0318 18:00:32.187456 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-multus\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.187517 master-0 kubenswrapper[30278]: I0318 18:00:32.187487 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-images\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps"
Mar 18 18:00:32.187641 master-0 kubenswrapper[30278]: I0318 18:00:32.187537 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-stats-auth\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4"
Mar 18 18:00:32.187641 master-0 kubenswrapper[30278]: I0318 18:00:32.187570 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx"
Mar 18 18:00:32.187641 master-0 kubenswrapper[30278]: I0318 18:00:32.187594 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-run\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.187753 master-0 kubenswrapper[30278]: I0318 18:00:32.187643 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f48gg\" (UniqueName: \"kubernetes.io/projected/822080a5-2926-4a51-866d-86bb0b437da2-kube-api-access-f48gg\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.187753 master-0 kubenswrapper[30278]: I0318 18:00:32.187719 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427e5ce9-f4b3-4f12-bb77-2b13775aa334-catalog-content\") pod \"redhat-marketplace-6xmx4\" (UID: \"427e5ce9-f4b3-4f12-bb77-2b13775aa334\") " pod="openshift-marketplace/redhat-marketplace-6xmx4"
Mar 18 18:00:32.187753 master-0 kubenswrapper[30278]: I0318 18:00:32.187752 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9g8f\" (UniqueName: \"kubernetes.io/projected/1db0a246-ca43-4e7c-b09e-e80218ae99b1-kube-api-access-n9g8f\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl"
Mar 18 18:00:32.187881 master-0 kubenswrapper[30278]: I0318 18:00:32.187784 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 18:00:32.187881 master-0 kubenswrapper[30278]: I0318 18:00:32.187812 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/56cde2f7-1742-45d6-aa22-8270cfb424a7-cache\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv"
Mar 18 18:00:32.187881 master-0 kubenswrapper[30278]: I0318 18:00:32.187836 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fglbh\" (UniqueName: \"kubernetes.io/projected/8db04037-c7cc-4246-92c3-6e7985384b14-kube-api-access-fglbh\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 18:00:32.187881 master-0 kubenswrapper[30278]: I0318 18:00:32.187858 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c57f282a-829b-41b2-827a-f4bc598245a2-service-ca-bundle\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4"
Mar 18 18:00:32.187881 master-0 kubenswrapper[30278]: I0318 18:00:32.187880 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-var-lib-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.188073 master-0 kubenswrapper[30278]: I0318 18:00:32.187905 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-script-lib\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.188073 master-0 kubenswrapper[30278]: I0318 18:00:32.187928 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz"
Mar 18 18:00:32.188073 master-0 kubenswrapper[30278]: I0318 18:00:32.187951 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-signing-key\") pod \"service-ca-79bc6b8d76-g5brm\" (UID: \"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab\") " pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm"
Mar 18 18:00:32.188073 master-0 kubenswrapper[30278]: I0318 18:00:32.187972 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 18:00:32.188073 master-0 kubenswrapper[30278]: I0318 18:00:32.187987 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx"
Mar 18 18:00:32.188321 master-0 kubenswrapper[30278]: I0318 18:00:32.188142 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427e5ce9-f4b3-4f12-bb77-2b13775aa334-catalog-content\") pod \"redhat-marketplace-6xmx4\" (UID: \"427e5ce9-f4b3-4f12-bb77-2b13775aa334\") " pod="openshift-marketplace/redhat-marketplace-6xmx4"
Mar 18 18:00:32.188321 master-0 kubenswrapper[30278]: I0318 18:00:32.188152 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/56cde2f7-1742-45d6-aa22-8270cfb424a7-cache\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv"
Mar 18 18:00:32.188321 master-0 kubenswrapper[30278]: I0318 18:00:32.187996 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x47z7\" (UniqueName: \"kubernetes.io/projected/30d77a7c-222e-41c7-8a98-219854aa3da2-kube-api-access-x47z7\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9"
Mar 18 18:00:32.188495 master-0 kubenswrapper[30278]: I0318 18:00:32.188318 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfnqp\" (UniqueName: \"kubernetes.io/projected/c355c750-ae2f-49fa-9a16-8fb4f688853e-kube-api-access-zfnqp\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r"
Mar 18 18:00:32.188495 master-0 kubenswrapper[30278]: I0318 18:00:32.188381 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5jd4\" (UniqueName: \"kubernetes.io/projected/427e5ce9-f4b3-4f12-bb77-2b13775aa334-kube-api-access-z5jd4\") pod \"redhat-marketplace-6xmx4\" (UID: \"427e5ce9-f4b3-4f12-bb77-2b13775aa334\") " pod="openshift-marketplace/redhat-marketplace-6xmx4"
Mar 18 18:00:32.188495 master-0 kubenswrapper[30278]: I0318 18:00:32.188441 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-webhook-cert\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 18:00:32.188661 master-0 kubenswrapper[30278]: I0318 18:00:32.188498 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-ovnkube-identity-cm\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k"
Mar 18 18:00:32.188661 master-0 kubenswrapper[30278]: I0318 18:00:32.188523 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-script-lib\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.188791 master-0 kubenswrapper[30278]: I0318 18:00:32.188761 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc110414-3a6b-474c-bce3-33450cab8fcd-catalog-content\") pod \"certified-operators-vbglp\" (UID: \"dc110414-3a6b-474c-bce3-33450cab8fcd\") " pod="openshift-marketplace/certified-operators-vbglp"
Mar 18 18:00:32.188848 master-0 kubenswrapper[30278]: I0318 18:00:32.188526 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc110414-3a6b-474c-bce3-33450cab8fcd-catalog-content\") pod \"certified-operators-vbglp\" (UID: \"dc110414-3a6b-474c-bce3-33450cab8fcd\") " pod="openshift-marketplace/certified-operators-vbglp"
Mar 18 18:00:32.188940 master-0 kubenswrapper[30278]: I0318 18:00:32.188913 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e9e04572-1425-440e-9869-6deef05e13e3-srv-cert\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz"
Mar 18 18:00:32.189044 master-0 kubenswrapper[30278]: I0318 18:00:32.189010 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.189106 master-0 kubenswrapper[30278]: I0318 18:00:32.189078 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-767c7\" (UniqueName: \"kubernetes.io/projected/e0e04440-c08b-452d-9be6-9f70a4027c92-kube-api-access-767c7\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"
Mar 18 18:00:32.189180 master-0 kubenswrapper[30278]: I0318 18:00:32.189106 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/efbcb147-d077-4749-9289-1682daccb657-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c"
Mar 18 18:00:32.189231 master-0 kubenswrapper[30278]: I0318 18:00:32.189010 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-binary-copy\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 18:00:32.189231 master-0 kubenswrapper[30278]: I0318 18:00:32.189179 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-node-bootstrap-token\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " pod="openshift-machine-config-operator/machine-config-server-mpmxb"
Mar 18 18:00:32.189349 master-0 kubenswrapper[30278]: I0318 18:00:32.189244 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/1d969530-c138-4fb7-9bfe-0825be66c009-iptables-alerter-script\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5"
Mar 18 18:00:32.189349 master-0 kubenswrapper[30278]: I0318 18:00:32.189290 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 18:00:32.189349 master-0 kubenswrapper[30278]: I0318 18:00:32.189321 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-config\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.189349 master-0 kubenswrapper[30278]: I0318 18:00:32.189345 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-var-lib-kubelet\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:32.189516 master-0 kubenswrapper[30278]: I0318 18:00:32.189376 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnl7c\" (UniqueName: \"kubernetes.io/projected/dc110414-3a6b-474c-bce3-33450cab8fcd-kube-api-access-mnl7c\") pod \"certified-operators-vbglp\" (UID: \"dc110414-3a6b-474c-bce3-33450cab8fcd\") " pod="openshift-marketplace/certified-operators-vbglp" Mar 18 18:00:32.189516 master-0 kubenswrapper[30278]: I0318 18:00:32.189402 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-signing-cabundle\") pod \"service-ca-79bc6b8d76-g5brm\" (UID: \"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab\") " pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 18:00:32.189516 master-0 kubenswrapper[30278]: I0318 18:00:32.189431 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s6f5\" 
(UniqueName: \"kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5\") pod \"network-check-target-ctd49\" (UID: \"978dcca6-b396-463f-9614-9e24194a1aaa\") " pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 18:00:32.189516 master-0 kubenswrapper[30278]: I0318 18:00:32.189453 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9875ed82-813c-483d-8471-8f9b74b774ee-webhook-cert\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 18:00:32.189516 master-0 kubenswrapper[30278]: I0318 18:00:32.189488 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7xqg\" (UniqueName: \"kubernetes.io/projected/c3267271-e0c5-45d6-980c-d78e4f9eef35-kube-api-access-z7xqg\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 18:00:32.189516 master-0 kubenswrapper[30278]: I0318 18:00:32.189511 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-modprobe-d\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:32.189768 master-0 kubenswrapper[30278]: I0318 18:00:32.189525 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/1d969530-c138-4fb7-9bfe-0825be66c009-iptables-alerter-script\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 18:00:32.189768 master-0 
kubenswrapper[30278]: I0318 18:00:32.189538 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbctm\" (UniqueName: \"kubernetes.io/projected/56cde2f7-1742-45d6-aa22-8270cfb424a7-kube-api-access-mbctm\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 18:00:32.189768 master-0 kubenswrapper[30278]: I0318 18:00:32.189570 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc27m\" (UniqueName: \"kubernetes.io/projected/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-kube-api-access-fc27m\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 18:00:32.189768 master-0 kubenswrapper[30278]: I0318 18:00:32.189719 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-signing-key\") pod \"service-ca-79bc6b8d76-g5brm\" (UID: \"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab\") " pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 18:00:32.189768 master-0 kubenswrapper[30278]: I0318 18:00:32.189739 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovnkube-config\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.190029 master-0 kubenswrapper[30278]: I0318 18:00:32.189798 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/30d77a7c-222e-41c7-8a98-219854aa3da2-node-pullsecrets\") pod \"apiserver-897b458c6-vsss9\" (UID: 
\"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:32.190029 master-0 kubenswrapper[30278]: I0318 18:00:32.189843 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 18:00:32.190029 master-0 kubenswrapper[30278]: I0318 18:00:32.189921 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-bin\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.190029 master-0 kubenswrapper[30278]: I0318 18:00:32.189978 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-env-overrides\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.190267 master-0 kubenswrapper[30278]: I0318 18:00:32.190067 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-auth-proxy-config\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 18:00:32.190267 master-0 kubenswrapper[30278]: I0318 18:00:32.190099 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/9875ed82-813c-483d-8471-8f9b74b774ee-webhook-cert\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 18:00:32.190267 master-0 kubenswrapper[30278]: I0318 18:00:32.190175 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/994fff04-c1d7-4f10-8d4b-6b49a6934829-env-overrides\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.190267 master-0 kubenswrapper[30278]: I0318 18:00:32.190235 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a4f94f3-d63a-4869-b723-ae9637610b4b-metrics-certs\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 18:00:32.190267 master-0 kubenswrapper[30278]: I0318 18:00:32.190252 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/d4c75bee-d0d2-4261-8f89-8c3375dbd868-snapshots\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 18:00:32.190601 master-0 kubenswrapper[30278]: I0318 18:00:32.190335 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljbl7\" (UniqueName: \"kubernetes.io/projected/7d72bb42-1ee6-4f61-9515-d1c5bafa896f-kube-api-access-ljbl7\") pod \"network-check-source-b4bf74f6-nlqpp\" (UID: \"7d72bb42-1ee6-4f61-9515-d1c5bafa896f\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nlqpp" Mar 18 18:00:32.190601 master-0 kubenswrapper[30278]: I0318 18:00:32.190523 30278 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/d4c75bee-d0d2-4261-8f89-8c3375dbd868-snapshots\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 18:00:32.190740 master-0 kubenswrapper[30278]: I0318 18:00:32.190711 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fcf459dc-bd30-4143-b5c4-60fd01b46548-rootfs\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 18:00:32.190818 master-0 kubenswrapper[30278]: I0318 18:00:32.190774 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 18:00:32.190889 master-0 kubenswrapper[30278]: I0318 18:00:32.190856 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx596\" (UniqueName: \"kubernetes.io/projected/253ec853-f637-4aa4-8e8e-eb655dfccccb-kube-api-access-cx596\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 18:00:32.190949 master-0 kubenswrapper[30278]: I0318 18:00:32.190892 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.190994 master-0 kubenswrapper[30278]: I0318 18:00:32.190960 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-audit-policies\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:32.191043 master-0 kubenswrapper[30278]: I0318 18:00:32.191017 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 18:00:32.191107 master-0 kubenswrapper[30278]: I0318 18:00:32.191045 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-netns\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.191167 master-0 kubenswrapper[30278]: I0318 18:00:32.191126 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 18:00:32.191424 master-0 kubenswrapper[30278]: I0318 
18:00:32.191391 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-config\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 18:00:32.192544 master-0 kubenswrapper[30278]: I0318 18:00:32.192487 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm" Mar 18 18:00:32.192544 master-0 kubenswrapper[30278]: I0318 18:00:32.192298 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fea7b899-fde4-4463-9520-4d433a8ebe21-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 18:00:32.192704 master-0 kubenswrapper[30278]: I0318 18:00:32.192639 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-kubernetes\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:32.192704 master-0 kubenswrapper[30278]: I0318 18:00:32.192691 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbdth\" (UniqueName: \"kubernetes.io/projected/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-kube-api-access-qbdth\") pod 
\"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 18:00:32.192810 master-0 kubenswrapper[30278]: I0318 18:00:32.192727 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 18:00:32.192810 master-0 kubenswrapper[30278]: I0318 18:00:32.192797 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c355c750-ae2f-49fa-9a16-8fb4f688853e-serving-cert\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 18:00:32.192916 master-0 kubenswrapper[30278]: I0318 18:00:32.192837 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-host\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:32.192916 master-0 kubenswrapper[30278]: I0318 18:00:32.192872 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 18:00:32.193042 master-0 kubenswrapper[30278]: 
I0318 18:00:32.192939 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-audit\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:32.193042 master-0 kubenswrapper[30278]: I0318 18:00:32.192974 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-serving-cert\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm" Mar 18 18:00:32.193042 master-0 kubenswrapper[30278]: I0318 18:00:32.193002 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-log-socket\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.193042 master-0 kubenswrapper[30278]: I0318 18:00:32.193030 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-certs\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " pod="openshift-machine-config-operator/machine-config-server-mpmxb" Mar 18 18:00:32.193212 master-0 kubenswrapper[30278]: I0318 18:00:32.193054 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc110414-3a6b-474c-bce3-33450cab8fcd-utilities\") pod \"certified-operators-vbglp\" (UID: \"dc110414-3a6b-474c-bce3-33450cab8fcd\") " pod="openshift-marketplace/certified-operators-vbglp" Mar 18 
18:00:32.193212 master-0 kubenswrapper[30278]: I0318 18:00:32.193084 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/89e6c3d6-7bd5-4df6-90db-3a349f644afb-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" Mar 18 18:00:32.193212 master-0 kubenswrapper[30278]: I0318 18:00:32.193120 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c355c750-ae2f-49fa-9a16-8fb4f688853e-serving-cert\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 18:00:32.193212 master-0 kubenswrapper[30278]: I0318 18:00:32.193133 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb496\" (UniqueName: \"kubernetes.io/projected/92153864-7959-4482-bf24-c8db36435fb5-kube-api-access-sb496\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 18:00:32.193212 master-0 kubenswrapper[30278]: I0318 18:00:32.193160 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-ovn\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.193212 master-0 kubenswrapper[30278]: I0318 18:00:32.193183 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-os-release\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.193562 master-0 kubenswrapper[30278]: I0318 18:00:32.193258 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tskm\" (UniqueName: \"kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 18:00:32.193562 master-0 kubenswrapper[30278]: I0318 18:00:32.193327 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wkqk\" (UniqueName: \"kubernetes.io/projected/efd0d6b1-652c-44b2-b918-5c7ced5d15c3-kube-api-access-5wkqk\") pod \"node-resolver-bwcgq\" (UID: \"efd0d6b1-652c-44b2-b918-5c7ced5d15c3\") " pod="openshift-dns/node-resolver-bwcgq" Mar 18 18:00:32.193562 master-0 kubenswrapper[30278]: I0318 18:00:32.193353 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-hostroot\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.193562 master-0 kubenswrapper[30278]: I0318 18:00:32.193368 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc110414-3a6b-474c-bce3-33450cab8fcd-utilities\") pod \"certified-operators-vbglp\" (UID: \"dc110414-3a6b-474c-bce3-33450cab8fcd\") " pod="openshift-marketplace/certified-operators-vbglp" Mar 18 18:00:32.193562 master-0 kubenswrapper[30278]: I0318 18:00:32.193383 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-image-import-ca\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:32.193562 master-0 kubenswrapper[30278]: I0318 18:00:32.193407 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-node-log\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.193562 master-0 kubenswrapper[30278]: I0318 18:00:32.193435 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/56cde2f7-1742-45d6-aa22-8270cfb424a7-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 18:00:32.193562 master-0 kubenswrapper[30278]: I0318 18:00:32.193460 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30d77a7c-222e-41c7-8a98-219854aa3da2-audit-dir\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:32.193562 master-0 kubenswrapper[30278]: I0318 18:00:32.193488 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz8rf\" (UniqueName: \"kubernetes.io/projected/d4c75bee-d0d2-4261-8f89-8c3375dbd868-kube-api-access-bz8rf\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 18:00:32.193562 master-0 kubenswrapper[30278]: I0318 
18:00:32.193515 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.193578 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovn-node-metrics-cert\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.193650 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysctl-d\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.193675 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7f76afa-4b23-421c-8451-46323813f06e-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-9c6bk\" (UID: \"e7f76afa-4b23-421c-8451-46323813f06e\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.193698 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-slash\") pod \"ovnkube-node-5l4qp\" (UID: 
\"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.193717 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/efd0d6b1-652c-44b2-b918-5c7ced5d15c3-hosts-file\") pod \"node-resolver-bwcgq\" (UID: \"efd0d6b1-652c-44b2-b918-5c7ced5d15c3\") " pod="openshift-dns/node-resolver-bwcgq" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.193737 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-cni-binary-copy\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.193770 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjtg7\" (UniqueName: \"kubernetes.io/projected/489dd872-39c3-4ce2-8dc1-9d0552b88616-kube-api-access-wjtg7\") pod \"community-operators-8485d\" (UID: \"489dd872-39c3-4ce2-8dc1-9d0552b88616\") " pod="openshift-marketplace/community-operators-8485d" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.193796 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-conf-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.193817 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489dd872-39c3-4ce2-8dc1-9d0552b88616-utilities\") pod \"community-operators-8485d\" (UID: 
\"489dd872-39c3-4ce2-8dc1-9d0552b88616\") " pod="openshift-marketplace/community-operators-8485d" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.193868 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.193887 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/994fff04-c1d7-4f10-8d4b-6b49a6934829-ovn-node-metrics-cert\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.193920 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d969530-c138-4fb7-9bfe-0825be66c009-host-slash\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.193956 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-kube-api-access\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.193983 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/59407fdf-b1e9-4992-a3c8-54b4e26f496c-config-volume\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.194009 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8db04037-c7cc-4246-92c3-6e7985384b14-tmpfs\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.194052 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5b0e38f3-3ab5-4519-86a6-68003deb94da-cni-binary-copy\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.194087 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-etcd-client\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.194096 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8db04037-c7cc-4246-92c3-6e7985384b14-tmpfs\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.194109 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-env-overrides\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 18:00:32.194114 master-0 kubenswrapper[30278]: I0318 18:00:32.193986 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/489dd872-39c3-4ce2-8dc1-9d0552b88616-utilities\") pod \"community-operators-8485d\" (UID: \"489dd872-39c3-4ce2-8dc1-9d0552b88616\") " pod="openshift-marketplace/community-operators-8485d" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.194209 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-trusted-ca-bundle\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.194251 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-default-certificate\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.194255 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7b94e08c-7944-445e-bfb7-6c7c14880c65-env-overrides\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.194289 30278 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-etc-kubernetes\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.194453 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-host-etc-kube\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.194484 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c75bee-d0d2-4261-8f89-8c3375dbd868-serving-cert\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.194509 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-netd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.194538 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-config\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 
18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.194828 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-host-etc-kube\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.194940 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzp78\" (UniqueName: \"kubernetes.io/projected/fcf459dc-bd30-4143-b5c4-60fd01b46548-kube-api-access-xzp78\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.194965 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-etcd-serving-ca\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.194987 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4460d3d3-c55f-4f1c-a623-e3feccf937bb-utilities\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.195010 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-systemd\") pod \"ovnkube-node-5l4qp\" (UID: 
\"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.195034 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-env-overrides\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.195058 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489dd872-39c3-4ce2-8dc1-9d0552b88616-catalog-content\") pod \"community-operators-8485d\" (UID: \"489dd872-39c3-4ce2-8dc1-9d0552b88616\") " pod="openshift-marketplace/community-operators-8485d" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.195087 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/253ec853-f637-4aa4-8e8e-eb655dfccccb-serving-cert\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.195113 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmsm4\" (UniqueName: \"kubernetes.io/projected/dba5f8d7-4d25-42b5-9c58-813221bf96bb-kube-api-access-lmsm4\") pod \"csi-snapshot-controller-operator-5f5d689c6b-z9vvz\" (UID: \"dba5f8d7-4d25-42b5-9c58-813221bf96bb\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.195145 30278 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.195178 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8d74\" (UniqueName: \"kubernetes.io/projected/c38c5f03-a753-49f4-ab06-33e75a03bd45-kube-api-access-d8d74\") pod \"cluster-storage-operator-7d87854d6-d4bmc\" (UID: \"c38c5f03-a753-49f4-ab06-33e75a03bd45\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.195203 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-serving-cert\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.195229 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-encryption-config\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.195254 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4zcv\" (UniqueName: \"kubernetes.io/projected/7b94e08c-7944-445e-bfb7-6c7c14880c65-kube-api-access-g4zcv\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 18:00:32.195265 master-0 kubenswrapper[30278]: I0318 18:00:32.195309 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g42g\" (UniqueName: \"kubernetes.io/projected/7047a862-8cbe-46fb-9af3-06ba224cbe26-kube-api-access-4g42g\") pod \"migrator-8487694857-8dsx2\" (UID: \"7047a862-8cbe-46fb-9af3-06ba224cbe26\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195341 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-etc-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195363 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-system-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195390 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcf459dc-bd30-4143-b5c4-60fd01b46548-mcd-auth-proxy-config\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195416 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-client-ca\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195450 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/efbcb147-d077-4749-9289-1682daccb657-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195482 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-metrics-certs\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195519 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqrdl\" (UniqueName: \"kubernetes.io/projected/efbcb147-d077-4749-9289-1682daccb657-kube-api-access-vqrdl\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195548 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-cloud-controller-manager-operator-tls\") pod 
\"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195577 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-lib-modules\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195642 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/822080a5-2926-4a51-866d-86bb0b437da2-etc-tuned\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195669 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e2d0d0d-54ca-475b-be8a-4eb6d4434e74-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-7r9qg\" (UID: \"9e2d0d0d-54ca-475b-be8a-4eb6d4434e74\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195701 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/efbcb147-d077-4749-9289-1682daccb657-cache\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 18:00:32.196580 master-0 
kubenswrapper[30278]: I0318 18:00:32.195727 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a94f7bff-ad61-4c53-a8eb-000a13f26971-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195751 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grnqn\" (UniqueName: \"kubernetes.io/projected/5b0e38f3-3ab5-4519-86a6-68003deb94da-kube-api-access-grnqn\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195776 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-os-release\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195807 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195835 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-fshkm\" 
(UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.196029 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9875ed82-813c-483d-8471-8f9b74b774ee-env-overrides\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.195176 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4460d3d3-c55f-4f1c-a623-e3feccf937bb-utilities\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 18:00:32.196580 master-0 kubenswrapper[30278]: I0318 18:00:32.196386 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/489dd872-39c3-4ce2-8dc1-9d0552b88616-catalog-content\") pod \"community-operators-8485d\" (UID: \"489dd872-39c3-4ce2-8dc1-9d0552b88616\") " pod="openshift-marketplace/community-operators-8485d" Mar 18 18:00:32.197613 master-0 kubenswrapper[30278]: I0318 18:00:32.196625 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/efbcb147-d077-4749-9289-1682daccb657-cache\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 18:00:32.197613 master-0 kubenswrapper[30278]: I0318 18:00:32.196707 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: 
\"kubernetes.io/empty-dir/822080a5-2926-4a51-866d-86bb0b437da2-etc-tuned\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:32.204764 master-0 kubenswrapper[30278]: I0318 18:00:32.204702 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 18:00:32.210001 master-0 kubenswrapper[30278]: I0318 18:00:32.209955 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-signing-cabundle\") pod \"service-ca-79bc6b8d76-g5brm\" (UID: \"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab\") " pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 18:00:32.223228 master-0 kubenswrapper[30278]: I0318 18:00:32.223144 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 18:00:32.244050 master-0 kubenswrapper[30278]: I0318 18:00:32.243975 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 18:00:32.263015 master-0 kubenswrapper[30278]: I0318 18:00:32.262959 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 18:00:32.283341 master-0 kubenswrapper[30278]: I0318 18:00:32.283295 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 18:00:32.294597 master-0 kubenswrapper[30278]: I0318 18:00:32.294199 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-serving-cert\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 
18:00:32.297343 master-0 kubenswrapper[30278]: I0318 18:00:32.297263 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-system-cni-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 18:00:32.297425 master-0 kubenswrapper[30278]: I0318 18:00:32.297340 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-multus-certs\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.297425 master-0 kubenswrapper[30278]: I0318 18:00:32.297415 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-multus-certs\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.297485 master-0 kubenswrapper[30278]: I0318 18:00:32.297423 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-system-cni-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 18:00:32.297568 master-0 kubenswrapper[30278]: I0318 18:00:32.297534 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.297650 master-0 kubenswrapper[30278]: I0318 18:00:32.297616 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 18:00:32.297698 master-0 kubenswrapper[30278]: I0318 18:00:32.297676 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-cnibin\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 18:00:32.297732 master-0 kubenswrapper[30278]: I0318 18:00:32.297702 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.297732 master-0 kubenswrapper[30278]: I0318 18:00:32.297717 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-sys\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:32.297908 master-0 kubenswrapper[30278]: I0318 18:00:32.297829 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-sys\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " 
pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.298009 master-0 kubenswrapper[30278]: I0318 18:00:32.297952 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-cnibin\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 18:00:32.298231 master-0 kubenswrapper[30278]: I0318 18:00:32.298039 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.298231 master-0 kubenswrapper[30278]: I0318 18:00:32.298111 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 18:00:32.298231 master-0 kubenswrapper[30278]: I0318 18:00:32.298182 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.298493 master-0 kubenswrapper[30278]: I0318 18:00:32.298256 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl"
Mar 18 18:00:32.298493 master-0 kubenswrapper[30278]: I0318 18:00:32.298337 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-cnibin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.298493 master-0 kubenswrapper[30278]: I0318 18:00:32.298367 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-bin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.298493 master-0 kubenswrapper[30278]: I0318 18:00:32.298413 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-systemd-units\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.298493 master-0 kubenswrapper[30278]: I0318 18:00:32.298436 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/56cde2f7-1742-45d6-aa22-8270cfb424a7-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv"
Mar 18 18:00:32.298493 master-0 kubenswrapper[30278]: I0318 18:00:32.298486 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl"
Mar 18 18:00:32.298791 master-0 kubenswrapper[30278]: I0318 18:00:32.298490 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-cnibin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.298791 master-0 kubenswrapper[30278]: I0318 18:00:32.298549 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-systemd-units\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.298791 master-0 kubenswrapper[30278]: I0318 18:00:32.298583 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-bin\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.298791 master-0 kubenswrapper[30278]: I0318 18:00:32.298626 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 18:00:32.298791 master-0 kubenswrapper[30278]: I0318 18:00:32.298678 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/56cde2f7-1742-45d6-aa22-8270cfb424a7-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv"
Mar 18 18:00:32.298791 master-0 kubenswrapper[30278]: I0318 18:00:32.298688 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 18:00:32.298791 master-0 kubenswrapper[30278]: I0318 18:00:32.298703 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/efbcb147-d077-4749-9289-1682daccb657-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c"
Mar 18 18:00:32.299202 master-0 kubenswrapper[30278]: I0318 18:00:32.298801 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/efbcb147-d077-4749-9289-1682daccb657-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c"
Mar 18 18:00:32.299202 master-0 kubenswrapper[30278]: I0318 18:00:32.298864 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-multus\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.299202 master-0 kubenswrapper[30278]: I0318 18:00:32.298911 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-run\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.299202 master-0 kubenswrapper[30278]: I0318 18:00:32.298944 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-cni-multus\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.299202 master-0 kubenswrapper[30278]: I0318 18:00:32.298968 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-run\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.299399 master-0 kubenswrapper[30278]: I0318 18:00:32.299216 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 18:00:32.299399 master-0 kubenswrapper[30278]: I0318 18:00:32.299296 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-var-lib-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.299614 master-0 kubenswrapper[30278]: I0318 18:00:32.299468 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.299614 master-0 kubenswrapper[30278]: I0318 18:00:32.299483 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-var-lib-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.299614 master-0 kubenswrapper[30278]: I0318 18:00:32.299543 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/efbcb147-d077-4749-9289-1682daccb657-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c"
Mar 18 18:00:32.299614 master-0 kubenswrapper[30278]: I0318 18:00:32.299555 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.299763 master-0 kubenswrapper[30278]: I0318 18:00:32.299674 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 18:00:32.299763 master-0 kubenswrapper[30278]: I0318 18:00:32.299713 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-var-lib-kubelet\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.299763 master-0 kubenswrapper[30278]: I0318 18:00:32.299741 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/efbcb147-d077-4749-9289-1682daccb657-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c"
Mar 18 18:00:32.299918 master-0 kubenswrapper[30278]: I0318 18:00:32.299826 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/30d77a7c-222e-41c7-8a98-219854aa3da2-node-pullsecrets\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9"
Mar 18 18:00:32.299918 master-0 kubenswrapper[30278]: I0318 18:00:32.299850 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 18:00:32.299918 master-0 kubenswrapper[30278]: I0318 18:00:32.299874 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-bin\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.300052 master-0 kubenswrapper[30278]: I0318 18:00:32.299923 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-modprobe-d\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.300052 master-0 kubenswrapper[30278]: I0318 18:00:32.299929 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-var-lib-kubelet\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.300052 master-0 kubenswrapper[30278]: I0318 18:00:32.300010 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-bin\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.300435 master-0 kubenswrapper[30278]: I0318 18:00:32.300391 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/30d77a7c-222e-41c7-8a98-219854aa3da2-node-pullsecrets\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9"
Mar 18 18:00:32.300533 master-0 kubenswrapper[30278]: I0318 18:00:32.300500 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fcf459dc-bd30-4143-b5c4-60fd01b46548-rootfs\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh"
Mar 18 18:00:32.300611 master-0 kubenswrapper[30278]: I0318 18:00:32.300577 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"
Mar 18 18:00:32.300685 master-0 kubenswrapper[30278]: I0318 18:00:32.300656 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-modprobe-d\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.300719 master-0 kubenswrapper[30278]: I0318 18:00:32.300681 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fcf459dc-bd30-4143-b5c4-60fd01b46548-rootfs\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh"
Mar 18 18:00:32.301309 master-0 kubenswrapper[30278]: I0318 18:00:32.301207 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.301379 master-0 kubenswrapper[30278]: I0318 18:00:32.301324 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.301379 master-0 kubenswrapper[30278]: I0318 18:00:32.301356 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-netns\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.301415 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.301443 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-run-netns\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.301459 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-kubernetes\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.301508 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.301545 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-kubernetes\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.301548 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.301681 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-host\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.301798 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-host\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.301826 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-log-socket\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.301892 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-ovn\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.301912 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-os-release\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.301915 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-log-socket\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.301953 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-ovn\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.302043 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-hostroot\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.302059 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-os-release\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.302107 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-node-log\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.302165 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysctl-d\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.302182 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-hostroot\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.302250 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-node-log\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.302320 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30d77a7c-222e-41c7-8a98-219854aa3da2-audit-dir\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.302260 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysctl-d\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.302426 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-slash\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.302449 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/efd0d6b1-652c-44b2-b918-5c7ced5d15c3-hosts-file\") pod \"node-resolver-bwcgq\" (UID: \"efd0d6b1-652c-44b2-b918-5c7ced5d15c3\") " pod="openshift-dns/node-resolver-bwcgq"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.302488 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-slash\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.302511 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 18:00:32.302671 master-0 kubenswrapper[30278]: I0318 18:00:32.302547 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/30d77a7c-222e-41c7-8a98-219854aa3da2-audit-dir\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9"
Mar 18 18:00:32.303397 master-0 kubenswrapper[30278]: I0318 18:00:32.302767 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d969530-c138-4fb7-9bfe-0825be66c009-host-slash\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5"
Mar 18 18:00:32.303397 master-0 kubenswrapper[30278]: I0318 18:00:32.302778 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 18:00:32.303397 master-0 kubenswrapper[30278]: I0318 18:00:32.302856 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-conf-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.303397 master-0 kubenswrapper[30278]: I0318 18:00:32.302930 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 18 18:00:32.303397 master-0 kubenswrapper[30278]: I0318 18:00:32.303078 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1d969530-c138-4fb7-9bfe-0825be66c009-host-slash\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5"
Mar 18 18:00:32.303397 master-0 kubenswrapper[30278]: I0318 18:00:32.303128 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/efd0d6b1-652c-44b2-b918-5c7ced5d15c3-hosts-file\") pod \"node-resolver-bwcgq\" (UID: \"efd0d6b1-652c-44b2-b918-5c7ced5d15c3\") " pod="openshift-dns/node-resolver-bwcgq"
Mar 18 18:00:32.303397 master-0 kubenswrapper[30278]: I0318 18:00:32.303093 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-conf-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.303397 master-0 kubenswrapper[30278]: I0318 18:00:32.303210 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-netd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.303397 master-0 kubenswrapper[30278]: I0318 18:00:32.303255 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-cni-netd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.303397 master-0 kubenswrapper[30278]: I0318 18:00:32.303267 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-etc-kubernetes\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.303397 master-0 kubenswrapper[30278]: I0318 18:00:32.303398 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-etc-kubernetes\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.303761 master-0 kubenswrapper[30278]: I0318 18:00:32.303434 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-systemd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.303761 master-0 kubenswrapper[30278]: I0318 18:00:32.303475 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-run-systemd\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.303761 master-0 kubenswrapper[30278]: I0318 18:00:32.303547 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-etc-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.303761 master-0 kubenswrapper[30278]: I0318 18:00:32.303643 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-system-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.303761 master-0 kubenswrapper[30278]: I0318 18:00:32.303658 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-etc-openvswitch\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:32.303761 master-0 kubenswrapper[30278]: I0318 18:00:32.303682 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-lib-modules\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.303761 master-0 kubenswrapper[30278]: I0318 18:00:32.303724 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-system-cni-dir\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.304021 master-0 kubenswrapper[30278]: I0318 18:00:32.303768 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-os-release\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 18:00:32.304021 master-0 kubenswrapper[30278]: I0318 18:00:32.303837 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysctl-conf\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.304021 master-0 kubenswrapper[30278]: I0318 18:00:32.303918 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"
Mar 18 18:00:32.304021 master-0 kubenswrapper[30278]: I0318 18:00:32.303963 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fea7b899-fde4-4463-9520-4d433a8ebe21-os-release\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " pod="openshift-multus/multus-additional-cni-plugins-ttbr5"
Mar 18 18:00:32.304021 master-0 kubenswrapper[30278]: I0318 18:00:32.303977 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"
Mar 18 18:00:32.304021 master-0 kubenswrapper[30278]: I0318 18:00:32.303920 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-lib-modules\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.304182 master-0 kubenswrapper[30278]: I0318 18:00:32.304049 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/56cde2f7-1742-45d6-aa22-8270cfb424a7-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv"
Mar 18 18:00:32.304182 master-0 kubenswrapper[30278]: I0318 18:00:32.304080 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-socket-dir-parent\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9"
Mar 18 18:00:32.304240 master-0 kubenswrapper[30278]: I0318 18:00:32.304117 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysctl-conf\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4"
Mar 18 18:00:32.304286 master-0 kubenswrapper[30278]: I0318 18:00:32.304156 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-multus-socket-dir-parent\") pod \"multus-64tx9\" (UID:
\"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.304286 master-0 kubenswrapper[30278]: I0318 18:00:32.304173 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-systemd\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:32.304430 master-0 kubenswrapper[30278]: I0318 18:00:32.304298 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 18:00:32.304430 master-0 kubenswrapper[30278]: I0318 18:00:32.304387 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-netns\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.304430 master-0 kubenswrapper[30278]: I0318 18:00:32.304209 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-systemd\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:32.304507 master-0 kubenswrapper[30278]: I0318 18:00:32.304452 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-netns\") pod \"multus-64tx9\" (UID: 
\"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.304507 master-0 kubenswrapper[30278]: I0318 18:00:32.304184 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/56cde2f7-1742-45d6-aa22-8270cfb424a7-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 18:00:32.304507 master-0 kubenswrapper[30278]: I0318 18:00:32.304494 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-kubelet\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.304590 master-0 kubenswrapper[30278]: I0318 18:00:32.304527 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysconfig\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:32.304590 master-0 kubenswrapper[30278]: I0318 18:00:32.304536 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-var-lib-kubelet\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.304590 master-0 kubenswrapper[30278]: I0318 18:00:32.304584 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-var-lock\") pod \"installer-3-master-0\" (UID: 
\"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 18:00:32.304688 master-0 kubenswrapper[30278]: I0318 18:00:32.304611 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-audit-dir\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:32.304688 master-0 kubenswrapper[30278]: I0318 18:00:32.304631 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-kubelet\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.304688 master-0 kubenswrapper[30278]: I0318 18:00:32.304635 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/822080a5-2926-4a51-866d-86bb0b437da2-etc-sysconfig\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:32.304688 master-0 kubenswrapper[30278]: I0318 18:00:32.304654 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-k8s-cni-cncf-io\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.304688 master-0 kubenswrapper[30278]: I0318 18:00:32.304670 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-audit-dir\") pod \"apiserver-688fbbb854-6n26v\" (UID: 
\"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:32.304860 master-0 kubenswrapper[30278]: I0318 18:00:32.304827 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5b0e38f3-3ab5-4519-86a6-68003deb94da-host-run-k8s-cni-cncf-io\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:32.304860 master-0 kubenswrapper[30278]: I0318 18:00:32.304850 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/994fff04-c1d7-4f10-8d4b-6b49a6934829-host-kubelet\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:32.305004 master-0 kubenswrapper[30278]: I0318 18:00:32.304972 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-var-lock\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 18:00:32.316882 master-0 kubenswrapper[30278]: I0318 18:00:32.316834 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-etcd-client\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:32.323863 master-0 kubenswrapper[30278]: I0318 18:00:32.323809 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 18:00:32.344445 master-0 kubenswrapper[30278]: I0318 18:00:32.344371 30278 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 18:00:32.346364 master-0 kubenswrapper[30278]: I0318 18:00:32.346085 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-etcd-serving-ca\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:32.364214 master-0 kubenswrapper[30278]: I0318 18:00:32.364151 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 18:00:32.372801 master-0 kubenswrapper[30278]: I0318 18:00:32.372754 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log" Mar 18 18:00:32.373563 master-0 kubenswrapper[30278]: I0318 18:00:32.373520 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-config\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:32.375990 master-0 kubenswrapper[30278]: I0318 18:00:32.375898 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"76efa507770b7f3447613d0b53b9d5e24bb29472a2b50d81264800bfd86b0183"} Mar 18 18:00:32.376170 master-0 kubenswrapper[30278]: I0318 18:00:32.376130 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 18:00:32.383000 master-0 kubenswrapper[30278]: I0318 18:00:32.382966 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 18 18:00:32.383355 master-0 kubenswrapper[30278]: I0318 18:00:32.383325 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/30d77a7c-222e-41c7-8a98-219854aa3da2-encryption-config\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:32.385507 master-0 kubenswrapper[30278]: I0318 18:00:32.385469 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 18:00:32.408972 master-0 kubenswrapper[30278]: I0318 18:00:32.408908 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 18:00:32.414729 master-0 kubenswrapper[30278]: I0318 18:00:32.414690 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-trusted-ca-bundle\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:32.423691 master-0 kubenswrapper[30278]: I0318 18:00:32.423634 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 18:00:32.423857 master-0 kubenswrapper[30278]: I0318 18:00:32.423819 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-image-import-ca\") pod \"apiserver-897b458c6-vsss9\" (UID: 
\"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:32.443226 master-0 kubenswrapper[30278]: I0318 18:00:32.443173 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 18:00:32.444288 master-0 kubenswrapper[30278]: I0318 18:00:32.444235 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/30d77a7c-222e-41c7-8a98-219854aa3da2-audit\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:32.463647 master-0 kubenswrapper[30278]: I0318 18:00:32.463586 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 18:00:32.483857 master-0 kubenswrapper[30278]: I0318 18:00:32.483801 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 18 18:00:32.488512 master-0 kubenswrapper[30278]: I0318 18:00:32.488466 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-stats-auth\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 18:00:32.502607 master-0 kubenswrapper[30278]: I0318 18:00:32.502529 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 18:00:32.508549 master-0 kubenswrapper[30278]: I0318 18:00:32.508504 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kubelet-dir\") pod \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " Mar 18 
18:00:32.508616 master-0 kubenswrapper[30278]: I0318 18:00:32.508555 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-var-lock\") pod \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " Mar 18 18:00:32.508649 master-0 kubenswrapper[30278]: I0318 18:00:32.508621 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4285e80c-1ff9-42b3-9692-9f2ab6b61916" (UID: "4285e80c-1ff9-42b3-9692-9f2ab6b61916"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:00:32.508761 master-0 kubenswrapper[30278]: I0318 18:00:32.508740 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-var-lock" (OuterVolumeSpecName: "var-lock") pod "4285e80c-1ff9-42b3-9692-9f2ab6b61916" (UID: "4285e80c-1ff9-42b3-9692-9f2ab6b61916"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:00:32.509680 master-0 kubenswrapper[30278]: I0318 18:00:32.509656 30278 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:00:32.509680 master-0 kubenswrapper[30278]: I0318 18:00:32.509675 30278 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4285e80c-1ff9-42b3-9692-9f2ab6b61916-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 18:00:32.522929 master-0 kubenswrapper[30278]: I0318 18:00:32.522895 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 18 18:00:32.525451 master-0 kubenswrapper[30278]: I0318 18:00:32.525425 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-default-certificate\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 18:00:32.543879 master-0 kubenswrapper[30278]: I0318 18:00:32.543803 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 18:00:32.547518 master-0 kubenswrapper[30278]: I0318 18:00:32.547479 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c57f282a-829b-41b2-827a-f4bc598245a2-metrics-certs\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 18:00:32.563728 master-0 kubenswrapper[30278]: I0318 18:00:32.563675 30278 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"openshift-service-ca.crt" Mar 18 18:00:32.582942 master-0 kubenswrapper[30278]: I0318 18:00:32.582863 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 18:00:32.589380 master-0 kubenswrapper[30278]: I0318 18:00:32.589260 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c57f282a-829b-41b2-827a-f4bc598245a2-service-ca-bundle\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 18:00:32.625783 master-0 kubenswrapper[30278]: I0318 18:00:32.625659 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 18:00:32.643398 master-0 kubenswrapper[30278]: I0318 18:00:32.643339 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 18:00:32.648477 master-0 kubenswrapper[30278]: I0318 18:00:32.648436 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/59407fdf-b1e9-4992-a3c8-54b4e26f496c-metrics-tls\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 18:00:32.666229 master-0 kubenswrapper[30278]: I0318 18:00:32.666158 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 18:00:32.675314 master-0 kubenswrapper[30278]: I0318 18:00:32.675255 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59407fdf-b1e9-4992-a3c8-54b4e26f496c-config-volume\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 18:00:32.682530 master-0 kubenswrapper[30278]: 
I0318 18:00:32.682468 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 18:00:32.703703 master-0 kubenswrapper[30278]: I0318 18:00:32.703638 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 18:00:32.713234 master-0 kubenswrapper[30278]: I0318 18:00:32.713184 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-audit-policies\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:32.723319 master-0 kubenswrapper[30278]: I0318 18:00:32.723261 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 18:00:32.727028 master-0 kubenswrapper[30278]: I0318 18:00:32.726982 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-encryption-config\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:32.743053 master-0 kubenswrapper[30278]: I0318 18:00:32.742999 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 18:00:32.745253 master-0 kubenswrapper[30278]: I0318 18:00:32.745215 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-etcd-client\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:32.763093 master-0 kubenswrapper[30278]: I0318 18:00:32.763043 30278 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 18:00:32.767403 master-0 kubenswrapper[30278]: I0318 18:00:32.767375 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-trusted-ca-bundle\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:32.782566 master-0 kubenswrapper[30278]: I0318 18:00:32.782522 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 18:00:32.784798 master-0 kubenswrapper[30278]: I0318 18:00:32.784765 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:00:32.787165 master-0 kubenswrapper[30278]: I0318 18:00:32.787133 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-serving-cert\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:32.790771 master-0 kubenswrapper[30278]: I0318 18:00:32.790722 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:00:32.807500 master-0 kubenswrapper[30278]: I0318 18:00:32.807456 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 18:00:32.823721 master-0 kubenswrapper[30278]: I0318 18:00:32.823669 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 18:00:32.833133 master-0 
kubenswrapper[30278]: I0318 18:00:32.833086 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-etcd-serving-ca\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:32.844487 master-0 kubenswrapper[30278]: I0318 18:00:32.844411 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 18:00:32.876817 master-0 kubenswrapper[30278]: I0318 18:00:32.876716 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 18 18:00:32.886886 master-0 kubenswrapper[30278]: I0318 18:00:32.886828 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 18 18:00:32.903318 master-0 kubenswrapper[30278]: I0318 18:00:32.903229 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 18 18:00:32.909586 master-0 kubenswrapper[30278]: I0318 18:00:32.909537 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/efbcb147-d077-4749-9289-1682daccb657-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 18:00:32.923372 master-0 kubenswrapper[30278]: I0318 18:00:32.923340 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 18:00:32.924510 master-0 kubenswrapper[30278]: I0318 18:00:32.924476 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/56cde2f7-1742-45d6-aa22-8270cfb424a7-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 18:00:32.943534 master-0 kubenswrapper[30278]: I0318 18:00:32.943478 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 18:00:32.967900 master-0 kubenswrapper[30278]: I0318 18:00:32.967843 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 18 18:00:32.983195 master-0 kubenswrapper[30278]: I0318 18:00:32.983145 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 18:00:32.992823 master-0 kubenswrapper[30278]: I0318 18:00:32.992778 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/56cde2f7-1742-45d6-aa22-8270cfb424a7-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: \"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 18:00:33.009761 master-0 kubenswrapper[30278]: I0318 18:00:33.009700 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 18:00:33.025002 master-0 kubenswrapper[30278]: I0318 18:00:33.024960 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 18:00:33.034373 master-0 kubenswrapper[30278]: I0318 18:00:33.034324 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1db0a246-ca43-4e7c-b09e-e80218ae99b1-serving-cert\") pod \"controller-manager-f5755b457-f4cbl\" (UID: 
\"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 18:00:33.042950 master-0 kubenswrapper[30278]: I0318 18:00:33.042917 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 18:00:33.061256 master-0 kubenswrapper[30278]: I0318 18:00:33.061210 30278 request.go:700] Waited for 1.015667772s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0 Mar 18 18:00:33.063461 master-0 kubenswrapper[30278]: I0318 18:00:33.063423 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 18:00:33.066744 master-0 kubenswrapper[30278]: I0318 18:00:33.066698 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-config\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 18:00:33.084407 master-0 kubenswrapper[30278]: I0318 18:00:33.084354 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 18:00:33.087570 master-0 kubenswrapper[30278]: I0318 18:00:33.087526 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-client-ca\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 18:00:33.103189 master-0 kubenswrapper[30278]: I0318 18:00:33.103124 30278 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 18:00:33.106399 master-0 kubenswrapper[30278]: I0318 18:00:33.106344 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-client-ca\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 18:00:33.123020 master-0 kubenswrapper[30278]: I0318 18:00:33.122934 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 18:00:33.157631 master-0 kubenswrapper[30278]: I0318 18:00:33.157504 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 18:00:33.158236 master-0 kubenswrapper[30278]: I0318 18:00:33.158174 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-proxy-ca-bundles\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 18:00:33.163875 master-0 kubenswrapper[30278]: I0318 18:00:33.163836 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 18:00:33.173197 master-0 kubenswrapper[30278]: I0318 18:00:33.173137 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-config\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 
18:00:33.181510 master-0 kubenswrapper[30278]: E0318 18:00:33.181454 30278 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.181758 master-0 kubenswrapper[30278]: E0318 18:00:33.181574 30278 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.181758 master-0 kubenswrapper[30278]: E0318 18:00:33.181621 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-auth-proxy-config podName:0751c002-fe0e-4f13-bb9c-9accd8ca0df3 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.681583579 +0000 UTC m=+2.848768374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7dff898856-kfzkl" (UID: "0751c002-fe0e-4f13-bb9c-9accd8ca0df3") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.182151 master-0 kubenswrapper[30278]: E0318 18:00:33.182105 30278 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.182312 master-0 kubenswrapper[30278]: E0318 18:00:33.182214 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-apiservice-cert podName:8db04037-c7cc-4246-92c3-6e7985384b14 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.682188805 +0000 UTC m=+2.849373620 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-apiservice-cert") pod "packageserver-b8b994c95-kglwt" (UID: "8db04037-c7cc-4246-92c3-6e7985384b14") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.183332 master-0 kubenswrapper[30278]: E0318 18:00:33.183250 30278 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.183478 master-0 kubenswrapper[30278]: E0318 18:00:33.183377 30278 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.183478 master-0 kubenswrapper[30278]: E0318 18:00:33.183415 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-images podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.683376468 +0000 UTC m=+2.850561133 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-images") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.183478 master-0 kubenswrapper[30278]: E0318 18:00:33.183453 30278 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.183478 master-0 kubenswrapper[30278]: E0318 18:00:33.183476 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcf459dc-bd30-4143-b5c4-60fd01b46548-proxy-tls podName:fcf459dc-bd30-4143-b5c4-60fd01b46548 nodeName:}" failed. 
No retries permitted until 2026-03-18 18:00:33.68344401 +0000 UTC m=+2.850628875 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/fcf459dc-bd30-4143-b5c4-60fd01b46548-proxy-tls") pod "machine-config-daemon-5l8hh" (UID: "fcf459dc-bd30-4143-b5c4-60fd01b46548") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.183881 master-0 kubenswrapper[30278]: E0318 18:00:33.183516 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04cef0bd-f365-4bf6-864a-1895995015d6-cco-trusted-ca podName:04cef0bd-f365-4bf6-864a-1895995015d6 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.683491051 +0000 UTC m=+2.850675676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/04cef0bd-f365-4bf6-864a-1895995015d6-cco-trusted-ca") pod "cloud-credential-operator-744f9dbf77-djgn7" (UID: "04cef0bd-f365-4bf6-864a-1895995015d6") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.183881 master-0 kubenswrapper[30278]: E0318 18:00:33.183556 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c38c5f03-a753-49f4-ab06-33e75a03bd45-cluster-storage-operator-serving-cert podName:c38c5f03-a753-49f4-ab06-33e75a03bd45 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.683537822 +0000 UTC m=+2.850722667 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/c38c5f03-a753-49f4-ab06-33e75a03bd45-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-7d87854d6-d4bmc" (UID: "c38c5f03-a753-49f4-ab06-33e75a03bd45") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.183881 master-0 kubenswrapper[30278]: I0318 18:00:33.183693 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 18:00:33.185464 master-0 kubenswrapper[30278]: E0318 18:00:33.185417 30278 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.185667 master-0 kubenswrapper[30278]: E0318 18:00:33.185533 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-config podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.685504375 +0000 UTC m=+2.852689180 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-config") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.185667 master-0 kubenswrapper[30278]: E0318 18:00:33.185657 30278 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.185890 master-0 kubenswrapper[30278]: E0318 18:00:33.185742 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3267271-e0c5-45d6-980c-d78e4f9eef35-proxy-tls podName:c3267271-e0c5-45d6-980c-d78e4f9eef35 nodeName:}" failed. 
No retries permitted until 2026-03-18 18:00:33.685722451 +0000 UTC m=+2.852907076 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c3267271-e0c5-45d6-980c-d78e4f9eef35-proxy-tls") pod "machine-config-operator-84d549f6d5-b5lps" (UID: "c3267271-e0c5-45d6-980c-d78e4f9eef35") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.187704 master-0 kubenswrapper[30278]: E0318 18:00:33.187649 30278 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.187831 master-0 kubenswrapper[30278]: E0318 18:00:33.187776 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-service-ca podName:fdab27a1-1d7a-4dc5-b828-eba3f57592dd nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.687725956 +0000 UTC m=+2.854910771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-service-ca") pod "cluster-version-operator-7d58488df-l48xm" (UID: "fdab27a1-1d7a-4dc5-b828-eba3f57592dd") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.187951 master-0 kubenswrapper[30278]: E0318 18:00:33.187917 30278 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.188043 master-0 kubenswrapper[30278]: E0318 18:00:33.187996 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89e6c3d6-7bd5-4df6-90db-3a349f644afb-proxy-tls podName:89e6c3d6-7bd5-4df6-90db-3a349f644afb nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.687978173 +0000 UTC m=+2.855162818 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/89e6c3d6-7bd5-4df6-90db-3a349f644afb-proxy-tls") pod "machine-config-controller-b4f87c5b9-m84zq" (UID: "89e6c3d6-7bd5-4df6-90db-3a349f644afb") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.189100 master-0 kubenswrapper[30278]: E0318 18:00:33.189065 30278 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.189195 master-0 kubenswrapper[30278]: E0318 18:00:33.189134 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-images podName:c3267271-e0c5-45d6-980c-d78e4f9eef35 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.689117335 +0000 UTC m=+2.856302160 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-images") pod "machine-config-operator-84d549f6d5-b5lps" (UID: "c3267271-e0c5-45d6-980c-d78e4f9eef35") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.189195 master-0 kubenswrapper[30278]: E0318 18:00:33.189169 30278 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.189351 master-0 kubenswrapper[30278]: E0318 18:00:33.189221 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-webhook-cert podName:8db04037-c7cc-4246-92c3-6e7985384b14 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.689202287 +0000 UTC m=+2.856387182 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-webhook-cert") pod "packageserver-b8b994c95-kglwt" (UID: "8db04037-c7cc-4246-92c3-6e7985384b14") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.190470 master-0 kubenswrapper[30278]: E0318 18:00:33.190423 30278 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.190568 master-0 kubenswrapper[30278]: E0318 18:00:33.190493 30278 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.190568 master-0 kubenswrapper[30278]: E0318 18:00:33.190522 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-node-bootstrap-token podName:b3385316-45f0-46c5-ac82-683168db5878 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.690500542 +0000 UTC m=+2.857685167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-node-bootstrap-token") pod "machine-config-server-mpmxb" (UID: "b3385316-45f0-46c5-ac82-683168db5878") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.190703 master-0 kubenswrapper[30278]: E0318 18:00:33.190584 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-auth-proxy-config podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.690559373 +0000 UTC m=+2.857744168 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-auth-proxy-config") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.191609 master-0 kubenswrapper[30278]: E0318 18:00:33.191562 30278 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.191720 master-0 kubenswrapper[30278]: E0318 18:00:33.191672 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-auth-proxy-config podName:c3267271-e0c5-45d6-980c-d78e4f9eef35 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.691646294 +0000 UTC m=+2.858831069 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-auth-proxy-config") pod "machine-config-operator-84d549f6d5-b5lps" (UID: "c3267271-e0c5-45d6-980c-d78e4f9eef35") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.193789 master-0 kubenswrapper[30278]: E0318 18:00:33.193665 30278 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.193789 master-0 kubenswrapper[30278]: E0318 18:00:33.193784 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-serving-cert podName:fdab27a1-1d7a-4dc5-b828-eba3f57592dd nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.69374851 +0000 UTC m=+2.860933155 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-serving-cert") pod "cluster-version-operator-7d58488df-l48xm" (UID: "fdab27a1-1d7a-4dc5-b828-eba3f57592dd") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.195439 master-0 kubenswrapper[30278]: E0318 18:00:33.195391 30278 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.195559 master-0 kubenswrapper[30278]: E0318 18:00:33.195433 30278 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.195559 master-0 kubenswrapper[30278]: E0318 18:00:33.195473 30278 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.195559 master-0 kubenswrapper[30278]: E0318 18:00:33.195510 30278 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.195559 master-0 kubenswrapper[30278]: E0318 18:00:33.195522 30278 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.195559 master-0 kubenswrapper[30278]: E0318 18:00:33.195487 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4c75bee-d0d2-4261-8f89-8c3375dbd868-serving-cert podName:d4c75bee-d0d2-4261-8f89-8c3375dbd868 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.695464778 +0000 UTC m=+2.862649403 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d4c75bee-d0d2-4261-8f89-8c3375dbd868-serving-cert") pod "insights-operator-68bf6ff9d6-hm777" (UID: "d4c75bee-d0d2-4261-8f89-8c3375dbd868") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.195845 master-0 kubenswrapper[30278]: E0318 18:00:33.195569 30278 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.195845 master-0 kubenswrapper[30278]: E0318 18:00:33.195607 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-images podName:0751c002-fe0e-4f13-bb9c-9accd8ca0df3 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.695577201 +0000 UTC m=+2.862762026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-images") pod "cluster-cloud-controller-manager-operator-7dff898856-kfzkl" (UID: "0751c002-fe0e-4f13-bb9c-9accd8ca0df3") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.195845 master-0 kubenswrapper[30278]: E0318 18:00:33.195651 30278 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.195845 master-0 kubenswrapper[30278]: E0318 18:00:33.195670 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-config podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.695645702 +0000 UTC m=+2.862830467 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-config") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.195845 master-0 kubenswrapper[30278]: E0318 18:00:33.195720 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7f76afa-4b23-421c-8451-46323813f06e-webhook-certs podName:e7f76afa-4b23-421c-8451-46323813f06e nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.695702734 +0000 UTC m=+2.862887589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e7f76afa-4b23-421c-8451-46323813f06e-webhook-certs") pod "multus-admission-controller-58c9f8fc64-9c6bk" (UID: "e7f76afa-4b23-421c-8451-46323813f06e") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.195845 master-0 kubenswrapper[30278]: E0318 18:00:33.195764 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89e6c3d6-7bd5-4df6-90db-3a349f644afb-mcc-auth-proxy-config podName:89e6c3d6-7bd5-4df6-90db-3a349f644afb nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.695740065 +0000 UTC m=+2.862924670 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/89e6c3d6-7bd5-4df6-90db-3a349f644afb-mcc-auth-proxy-config") pod "machine-config-controller-b4f87c5b9-m84zq" (UID: "89e6c3d6-7bd5-4df6-90db-3a349f644afb") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.195845 master-0 kubenswrapper[30278]: E0318 18:00:33.195791 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-kube-rbac-proxy-config podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.695780946 +0000 UTC m=+2.862965551 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.195845 master-0 kubenswrapper[30278]: E0318 18:00:33.195816 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-certs podName:b3385316-45f0-46c5-ac82-683168db5878 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.695802837 +0000 UTC m=+2.862987442 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-certs") pod "machine-config-server-mpmxb" (UID: "b3385316-45f0-46c5-ac82-683168db5878") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.197080 master-0 kubenswrapper[30278]: E0318 18:00:33.196997 30278 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.197166 master-0 kubenswrapper[30278]: E0318 18:00:33.197093 30278 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.197166 master-0 kubenswrapper[30278]: E0318 18:00:33.197128 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-cloud-controller-manager-operator-tls podName:0751c002-fe0e-4f13-bb9c-9accd8ca0df3 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.697090961 +0000 UTC m=+2.864275716 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7dff898856-kfzkl" (UID: "0751c002-fe0e-4f13-bb9c-9accd8ca0df3") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.197331 master-0 kubenswrapper[30278]: E0318 18:00:33.197181 30278 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.197331 master-0 kubenswrapper[30278]: E0318 18:00:33.197182 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fcf459dc-bd30-4143-b5c4-60fd01b46548-mcd-auth-proxy-config podName:fcf459dc-bd30-4143-b5c4-60fd01b46548 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.697160064 +0000 UTC m=+2.864344879 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/fcf459dc-bd30-4143-b5c4-60fd01b46548-mcd-auth-proxy-config") pod "machine-config-daemon-5l8hh" (UID: "fcf459dc-bd30-4143-b5c4-60fd01b46548") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.197331 master-0 kubenswrapper[30278]: E0318 18:00:33.197241 30278 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.197331 master-0 kubenswrapper[30278]: E0318 18:00:33.197304 30278 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.197331 master-0 kubenswrapper[30278]: E0318 18:00:33.197245 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-metrics-client-ca podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.697225676 +0000 UTC m=+2.864410311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-metrics-client-ca") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.197626 master-0 kubenswrapper[30278]: E0318 18:00:33.197364 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-service-ca-bundle podName:d4c75bee-d0d2-4261-8f89-8c3375dbd868 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.697347599 +0000 UTC m=+2.864532224 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-service-ca-bundle") pod "insights-operator-68bf6ff9d6-hm777" (UID: "d4c75bee-d0d2-4261-8f89-8c3375dbd868") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.197626 master-0 kubenswrapper[30278]: E0318 18:00:33.197394 30278 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.197626 master-0 kubenswrapper[30278]: E0318 18:00:33.197423 30278 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.197626 master-0 kubenswrapper[30278]: E0318 18:00:33.197428 30278 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:33.197626 master-0 kubenswrapper[30278]: E0318 18:00:33.197399 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a94f7bff-ad61-4c53-a8eb-000a13f26971-auth-proxy-config podName:a94f7bff-ad61-4c53-a8eb-000a13f26971 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.69738108 +0000 UTC m=+2.864565705 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/a94f7bff-ad61-4c53-a8eb-000a13f26971-auth-proxy-config") pod "cluster-autoscaler-operator-866dc4744-l6hpt" (UID: "a94f7bff-ad61-4c53-a8eb-000a13f26971") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.197626 master-0 kubenswrapper[30278]: E0318 18:00:33.197537 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-trusted-ca-bundle podName:d4c75bee-d0d2-4261-8f89-8c3375dbd868 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.697519294 +0000 UTC m=+2.864704059 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-trusted-ca-bundle") pod "insights-operator-68bf6ff9d6-hm777" (UID: "d4c75bee-d0d2-4261-8f89-8c3375dbd868") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:33.197626 master-0 kubenswrapper[30278]: E0318 18:00:33.197565 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e2d0d0d-54ca-475b-be8a-4eb6d4434e74-tls-certificates podName:9e2d0d0d-54ca-475b-be8a-4eb6d4434e74 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.697550085 +0000 UTC m=+2.864734940 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e2d0d0d-54ca-475b-be8a-4eb6d4434e74-tls-certificates") pod "prometheus-operator-admission-webhook-69c6b55594-7r9qg" (UID: "9e2d0d0d-54ca-475b-be8a-4eb6d4434e74") : failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.197626 master-0 kubenswrapper[30278]: E0318 18:00:33.197586 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/253ec853-f637-4aa4-8e8e-eb655dfccccb-serving-cert podName:253ec853-f637-4aa4-8e8e-eb655dfccccb nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.697576265 +0000 UTC m=+2.864761100 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/253ec853-f637-4aa4-8e8e-eb655dfccccb-serving-cert") pod "route-controller-manager-57dbfd879f-44tfw" (UID: "253ec853-f637-4aa4-8e8e-eb655dfccccb") : failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.203095 master-0 kubenswrapper[30278]: I0318 18:00:33.203043 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-tns2v"
Mar 18 18:00:33.223306 master-0 kubenswrapper[30278]: I0318 18:00:33.223206 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 18 18:00:33.244118 master-0 kubenswrapper[30278]: I0318 18:00:33.244058 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 18 18:00:33.264017 master-0 kubenswrapper[30278]: I0318 18:00:33.263959 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 18:00:33.283181 master-0 kubenswrapper[30278]: I0318 18:00:33.283135 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 18:00:33.298570 master-0 kubenswrapper[30278]: E0318 18:00:33.298513 30278 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.298763 master-0 kubenswrapper[30278]: E0318 18:00:33.298612 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.798590971 +0000 UTC m=+2.965775576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.299599 master-0 kubenswrapper[30278]: E0318 18:00:33.299569 30278 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.299669 master-0 kubenswrapper[30278]: E0318 18:00:33.299624 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls podName:92153864-7959-4482-bf24-c8db36435fb5 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.799613119 +0000 UTC m=+2.966797704 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls") pod "machine-approver-5c6485487f-z74t2" (UID: "92153864-7959-4482-bf24-c8db36435fb5") : failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.301621 master-0 kubenswrapper[30278]: E0318 18:00:33.301595 30278 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.301690 master-0 kubenswrapper[30278]: E0318 18:00:33.301645 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert podName:04cef0bd-f365-4bf6-864a-1895995015d6 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.801634304 +0000 UTC m=+2.968818899 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-djgn7" (UID: "04cef0bd-f365-4bf6-864a-1895995015d6") : failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.301828 master-0 kubenswrapper[30278]: E0318 18:00:33.301763 30278 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.301959 master-0 kubenswrapper[30278]: E0318 18:00:33.301930 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls podName:e0e04440-c08b-452d-9be6-9f70a4027c92 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.801898831 +0000 UTC m=+2.969083536 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-xnx8x" (UID: "e0e04440-c08b-452d-9be6-9f70a4027c92") : failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.303041 master-0 kubenswrapper[30278]: I0318 18:00:33.302985 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 18 18:00:33.305148 master-0 kubenswrapper[30278]: E0318 18:00:33.305111 30278 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.305231 master-0 kubenswrapper[30278]: E0318 18:00:33.305152 30278 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.305231 master-0 kubenswrapper[30278]: E0318 18:00:33.305183 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert podName:a94f7bff-ad61-4c53-a8eb-000a13f26971 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.80516318 +0000 UTC m=+2.972347775 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert") pod "cluster-autoscaler-operator-866dc4744-l6hpt" (UID: "a94f7bff-ad61-4c53-a8eb-000a13f26971") : failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.305231 master-0 kubenswrapper[30278]: E0318 18:00:33.305183 30278 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.305231 master-0 kubenswrapper[30278]: E0318 18:00:33.305216 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls podName:de189d27-4c60-49f1-9119-d1fde5c37b1e nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.805198711 +0000 UTC m=+2.972383536 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-zdqtc" (UID: "de189d27-4c60-49f1-9119-d1fde5c37b1e") : failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.305471 master-0 kubenswrapper[30278]: E0318 18:00:33.305256 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls podName:2d21e77e-8b61-4f03-8f17-941b7a1d8b1d nodeName:}" failed. No retries permitted until 2026-03-18 18:00:33.805236622 +0000 UTC m=+2.972421237 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-6x52p" (UID: "2d21e77e-8b61-4f03-8f17-941b7a1d8b1d") : failed to sync secret cache: timed out waiting for the condition
Mar 18 18:00:33.322949 master-0 kubenswrapper[30278]: I0318 18:00:33.322900 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-zxhl4"
Mar 18 18:00:33.347241 master-0 kubenswrapper[30278]: I0318 18:00:33.347166 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 18 18:00:33.364074 master-0 kubenswrapper[30278]: I0318 18:00:33.364020 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-22mk8"
Mar 18 18:00:33.382399 master-0 kubenswrapper[30278]: I0318 18:00:33.382207 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 18:00:33.382689 master-0 kubenswrapper[30278]: I0318 18:00:33.382561 30278 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 18:00:33.384496 master-0 kubenswrapper[30278]: I0318 18:00:33.384452 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-6fg48"
Mar 18 18:00:33.402995 master-0 kubenswrapper[30278]: I0318 18:00:33.402948 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kdvf8"
Mar 18 18:00:33.423930 master-0 kubenswrapper[30278]: I0318 18:00:33.423795 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-btlbk"
Mar 18 18:00:33.464990 master-0 kubenswrapper[30278]: I0318 18:00:33.464936 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-rgwwd"
Mar 18 18:00:33.467143 master-0 kubenswrapper[30278]: I0318 18:00:33.467089 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tvgq\" (UniqueName: \"kubernetes.io/projected/0b9ff55a-73fb-473f-b406-1f8b6cffdb89-kube-api-access-2tvgq\") pod \"openshift-apiserver-operator-d65958b8-t266j\" (UID: \"0b9ff55a-73fb-473f-b406-1f8b6cffdb89\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-t266j"
Mar 18 18:00:33.484156 master-0 kubenswrapper[30278]: I0318 18:00:33.484092 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 18 18:00:33.505159 master-0 kubenswrapper[30278]: I0318 18:00:33.505109 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-clcfd"
Mar 18 18:00:33.523580 master-0 kubenswrapper[30278]: I0318 18:00:33.523520 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 18 18:00:33.543661 master-0 kubenswrapper[30278]: I0318 18:00:33.543605 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 18 18:00:33.563058 master-0 kubenswrapper[30278]: I0318 18:00:33.562977 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 18 18:00:33.584777 master-0 kubenswrapper[30278]: I0318 18:00:33.584713 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 18 18:00:33.603597 master-0 kubenswrapper[30278]: I0318 18:00:33.603522 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 18 18:00:33.623210 master-0 kubenswrapper[30278]: I0318 18:00:33.623150 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-ksrlj"
Mar 18 18:00:33.644298 master-0 kubenswrapper[30278]: I0318 18:00:33.644231 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 18 18:00:33.664986 master-0 kubenswrapper[30278]: I0318 18:00:33.664898 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 18 18:00:33.683884 master-0 kubenswrapper[30278]: I0318 18:00:33.683713 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 18 18:00:33.729071 master-0 kubenswrapper[30278]: I0318 18:00:33.729015 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8"
Mar 18 18:00:33.735402 master-0 kubenswrapper[30278]: I0318 18:00:33.735358 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcf459dc-bd30-4143-b5c4-60fd01b46548-mcd-auth-proxy-config\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh"
Mar 18 18:00:33.735528 master-0 kubenswrapper[30278]: I0318 18:00:33.735406 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl"
Mar 18 18:00:33.735528 master-0 kubenswrapper[30278]: I0318 18:00:33.735434 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e2d0d0d-54ca-475b-be8a-4eb6d4434e74-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-7r9qg\" (UID: \"9e2d0d0d-54ca-475b-be8a-4eb6d4434e74\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg"
Mar 18 18:00:33.735528 master-0 kubenswrapper[30278]: I0318 18:00:33.735456 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a94f7bff-ad61-4c53-a8eb-000a13f26971-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"
Mar 18 18:00:33.735528 master-0 kubenswrapper[30278]: I0318 18:00:33.735506 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777"
Mar 18 18:00:33.735715 master-0 kubenswrapper[30278]: I0318 18:00:33.735655 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 18:00:33.735757 master-0 kubenswrapper[30278]: I0318 18:00:33.735734 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl"
Mar 18 18:00:33.735798 master-0 kubenswrapper[30278]: I0318 18:00:33.735672 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcf459dc-bd30-4143-b5c4-60fd01b46548-mcd-auth-proxy-config\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh"
Mar 18 18:00:33.735851 master-0 kubenswrapper[30278]: I0318 18:00:33.735823 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-images\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p"
Mar 18 18:00:33.735893 master-0 kubenswrapper[30278]: I0318 18:00:33.735861 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e2d0d0d-54ca-475b-be8a-4eb6d4434e74-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-7r9qg\" (UID: \"9e2d0d0d-54ca-475b-be8a-4eb6d4434e74\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg"
Mar 18 18:00:33.735982 master-0 kubenswrapper[30278]: I0318 18:00:33.735947 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcf459dc-bd30-4143-b5c4-60fd01b46548-proxy-tls\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh"
Mar 18 18:00:33.736033 master-0 kubenswrapper[30278]: I0318 18:00:33.736006 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04cef0bd-f365-4bf6-864a-1895995015d6-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"
Mar 18 18:00:33.736080 master-0 kubenswrapper[30278]: I0318 18:00:33.736062 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-apiservice-cert\") pod
\"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 18:00:33.736148 master-0 kubenswrapper[30278]: I0318 18:00:33.736125 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c38c5f03-a753-49f4-ab06-33e75a03bd45-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-d4bmc\" (UID: \"c38c5f03-a753-49f4-ab06-33e75a03bd45\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc"
Mar 18 18:00:33.736384 master-0 kubenswrapper[30278]: I0318 18:00:33.736351 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-config\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 18:00:33.736384 master-0 kubenswrapper[30278]: I0318 18:00:33.736371 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-apiservice-cert\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 18:00:33.736500 master-0 kubenswrapper[30278]: I0318 18:00:33.736479 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c3267271-e0c5-45d6-980c-d78e4f9eef35-proxy-tls\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps"
Mar 18 18:00:33.736590 master-0 kubenswrapper[30278]: I0318 18:00:33.736565 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-service-ca\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 18:00:33.736688 master-0 kubenswrapper[30278]: I0318 18:00:33.736668 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/89e6c3d6-7bd5-4df6-90db-3a349f644afb-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq"
Mar 18 18:00:33.736732 master-0 kubenswrapper[30278]: I0318 18:00:33.736710 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c3267271-e0c5-45d6-980c-d78e4f9eef35-proxy-tls\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps"
Mar 18 18:00:33.736772 master-0 kubenswrapper[30278]: I0318 18:00:33.736738 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/c38c5f03-a753-49f4-ab06-33e75a03bd45-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-d4bmc\" (UID: \"c38c5f03-a753-49f4-ab06-33e75a03bd45\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc"
Mar 18 18:00:33.736815 master-0 kubenswrapper[30278]: I0318 18:00:33.736769 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-images\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps"
Mar 18 18:00:33.736912 master-0 kubenswrapper[30278]: I0318 18:00:33.736891 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-service-ca\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 18:00:33.737098 master-0 kubenswrapper[30278]: I0318 18:00:33.737053 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-webhook-cert\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 18:00:33.737192 master-0 kubenswrapper[30278]: I0318 18:00:33.737165 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-node-bootstrap-token\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " pod="openshift-machine-config-operator/machine-config-server-mpmxb"
Mar 18 18:00:33.737294 master-0 kubenswrapper[30278]: I0318 18:00:33.737173 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-images\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps"
Mar 18 18:00:33.737340 master-0 kubenswrapper[30278]: I0318 18:00:33.737285 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8db04037-c7cc-4246-92c3-6e7985384b14-webhook-cert\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt"
Mar 18 18:00:33.737453 master-0 kubenswrapper[30278]: I0318 18:00:33.737428 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-auth-proxy-config\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 18:00:33.737605 master-0 kubenswrapper[30278]: I0318 18:00:33.737572 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps"
Mar 18 18:00:33.737649 master-0 kubenswrapper[30278]: I0318 18:00:33.737636 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 18:00:33.737707 master-0 kubenswrapper[30278]: I0318 18:00:33.737687 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-serving-cert\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 18:00:33.737749 master-0 kubenswrapper[30278]: I0318 18:00:33.737734 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-certs\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " pod="openshift-machine-config-operator/machine-config-server-mpmxb"
Mar 18 18:00:33.737794 master-0 kubenswrapper[30278]: I0318 18:00:33.737773 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c3267271-e0c5-45d6-980c-d78e4f9eef35-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps"
Mar 18 18:00:33.737897 master-0 kubenswrapper[30278]: I0318 18:00:33.737874 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/89e6c3d6-7bd5-4df6-90db-3a349f644afb-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq"
Mar 18 18:00:33.737974 master-0 kubenswrapper[30278]: I0318 18:00:33.737953 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl"
Mar 18 18:00:33.738019
master-0 kubenswrapper[30278]: I0318 18:00:33.737981 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7f76afa-4b23-421c-8451-46323813f06e-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-9c6bk\" (UID: \"e7f76afa-4b23-421c-8451-46323813f06e\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk"
Mar 18 18:00:33.738073 master-0 kubenswrapper[30278]: I0318 18:00:33.738033 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c75bee-d0d2-4261-8f89-8c3375dbd868-serving-cert\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777"
Mar 18 18:00:33.738073 master-0 kubenswrapper[30278]: I0318 18:00:33.738055 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-config\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p"
Mar 18 18:00:33.738149 master-0 kubenswrapper[30278]: I0318 18:00:33.738083 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/253ec853-f637-4aa4-8e8e-eb655dfccccb-serving-cert\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw"
Mar 18 18:00:33.738149 master-0 kubenswrapper[30278]: I0318 18:00:33.738111 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777"
Mar 18 18:00:33.738149 master-0 kubenswrapper[30278]: I0318 18:00:33.738107 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-serving-cert\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm"
Mar 18 18:00:33.738361 master-0 kubenswrapper[30278]: I0318 18:00:33.738325 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/89e6c3d6-7bd5-4df6-90db-3a349f644afb-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq"
Mar 18 18:00:33.738574 master-0 kubenswrapper[30278]: I0318 18:00:33.738542 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/253ec853-f637-4aa4-8e8e-eb655dfccccb-serving-cert\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw"
Mar 18 18:00:33.743863 master-0 kubenswrapper[30278]: I0318 18:00:33.743804 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-4fc8r"
Mar 18 18:00:33.746350 master-0 kubenswrapper[30278]: I0318 18:00:33.746255 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9b424d6c-7440-4c98-ac19-2d0642c696fd-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-qk279\" (UID: \"9b424d6c-7440-4c98-ac19-2d0642c696fd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-qk279"
Mar 18 18:00:33.784388 master-0 kubenswrapper[30278]: I0318 18:00:33.784266 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 18 18:00:33.787339 master-0 kubenswrapper[30278]: I0318 18:00:33.787237 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pp5f\" (UniqueName: \"kubernetes.io/projected/b1352cc7-4099-44c5-9c31-8259fb783bc7-kube-api-access-9pp5f\") pod \"dns-operator-9c5679d8f-7sc7v\" (UID: \"b1352cc7-4099-44c5-9c31-8259fb783bc7\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-7sc7v"
Mar 18 18:00:33.804600 master-0 kubenswrapper[30278]: I0318 18:00:33.804517 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 18 18:00:33.824736 master-0 kubenswrapper[30278]: I0318 18:00:33.824663 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 18 18:00:33.839798 master-0 kubenswrapper[30278]: I0318 18:00:33.839685 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"
Mar 18 18:00:33.839936 master-0 kubenswrapper[30278]: I0318 18:00:33.839859 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"
Mar 18 18:00:33.840082 master-0 kubenswrapper[30278]: I0318 18:00:33.840048 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p"
Mar 18 18:00:33.840428 master-0 kubenswrapper[30278]: I0318 18:00:33.840357 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"
Mar 18 18:00:33.840903 master-0 kubenswrapper[30278]: I0318 18:00:33.840821 30278 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 18 18:00:33.840994 master-0 kubenswrapper[30278]: I0318 18:00:33.840894 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2"
Mar 18 18:00:33.841356 master-0 kubenswrapper[30278]: I0318 18:00:33.841304 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"
Mar 18 18:00:33.841508 master-0 kubenswrapper[30278]: I0318 18:00:33.841462 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"
Mar 18 18:00:33.845539 master-0 kubenswrapper[30278]: I0318 18:00:33.844112 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 18 18:00:33.846683 master-0 kubenswrapper[30278]: I0318 18:00:33.846623 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName:
\"kubernetes.io/secret/92153864-7959-4482-bf24-c8db36435fb5-machine-approver-tls\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 18:00:33.847190 master-0 kubenswrapper[30278]: I0318 18:00:33.847109 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-config\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 18:00:33.848640 master-0 kubenswrapper[30278]: I0318 18:00:33.848578 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0e04440-c08b-452d-9be6-9f70a4027c92-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 18:00:33.851444 master-0 kubenswrapper[30278]: I0318 18:00:33.851389 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:33.851782 master-0 kubenswrapper[30278]: I0318 18:00:33.851723 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/de189d27-4c60-49f1-9119-d1fde5c37b1e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 18:00:33.863361 master-0 kubenswrapper[30278]: I0318 18:00:33.863250 30278 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 18:00:33.868511 master-0 kubenswrapper[30278]: I0318 18:00:33.868452 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92153864-7959-4482-bf24-c8db36435fb5-auth-proxy-config\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 18:00:33.883967 master-0 kubenswrapper[30278]: I0318 18:00:33.883887 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-gxxlp" Mar 18 18:00:33.927362 master-0 kubenswrapper[30278]: I0318 18:00:33.927261 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwlxb\" (UniqueName: \"kubernetes.io/projected/37b3753f-bf4f-4a9e-a4a8-d58296bada79-kube-api-access-zwlxb\") pod \"cluster-baremetal-operator-6f69995874-dh5zl\" (UID: \"37b3753f-bf4f-4a9e-a4a8-d58296bada79\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-dh5zl" Mar 18 18:00:33.940949 master-0 kubenswrapper[30278]: I0318 18:00:33.940773 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-789k6\" (UniqueName: \"kubernetes.io/projected/c087ce06-a16b-41f4-ba93-8fccdee09003-kube-api-access-789k6\") pod \"authentication-operator-5885bfd7f4-8sxdf\" (UID: \"c087ce06-a16b-41f4-ba93-8fccdee09003\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-8sxdf" Mar 18 18:00:33.943728 master-0 kubenswrapper[30278]: I0318 18:00:33.943663 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 18:00:33.984410 master-0 kubenswrapper[30278]: I0318 18:00:33.984350 30278 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-dockercfg-rl6dv" Mar 18 18:00:33.987655 master-0 kubenswrapper[30278]: I0318 18:00:33.987614 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-bound-sa-token\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 18:00:34.004442 master-0 kubenswrapper[30278]: I0318 18:00:34.004388 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 18:00:34.015018 master-0 kubenswrapper[30278]: I0318 18:00:34.014947 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 18:00:34.023233 master-0 kubenswrapper[30278]: I0318 18:00:34.023151 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 18:00:34.029630 master-0 kubenswrapper[30278]: I0318 18:00:34.029588 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-config\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 18:00:34.040044 master-0 kubenswrapper[30278]: I0318 18:00:34.039979 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:34.046839 master-0 kubenswrapper[30278]: I0318 
18:00:34.046785 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 18:00:34.046978 master-0 kubenswrapper[30278]: I0318 18:00:34.046937 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:34.056921 master-0 kubenswrapper[30278]: I0318 18:00:34.056862 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-images\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 18:00:34.061556 master-0 kubenswrapper[30278]: I0318 18:00:34.061504 30278 request.go:700] Waited for 1.989523881s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/serviceaccounts/kube-storage-version-migrator-operator/token Mar 18 18:00:34.077851 master-0 kubenswrapper[30278]: I0318 18:00:34.077799 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf82n\" (UniqueName: \"kubernetes.io/projected/f7ff61c7-32d1-4407-a792-8e22bb4d50f9-kube-api-access-nf82n\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg\" (UID: \"f7ff61c7-32d1-4407-a792-8e22bb4d50f9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-xwwcg" Mar 18 18:00:34.083452 master-0 kubenswrapper[30278]: I0318 18:00:34.083402 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 18 18:00:34.090492 master-0 kubenswrapper[30278]: I0318 18:00:34.090433 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 18:00:34.103118 master-0 kubenswrapper[30278]: I0318 18:00:34.103050 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 18 18:00:34.122834 master-0 kubenswrapper[30278]: I0318 18:00:34.122791 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-bnhc4" Mar 18 18:00:34.151037 master-0 kubenswrapper[30278]: I0318 18:00:34.150969 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 18 18:00:34.156672 master-0 kubenswrapper[30278]: I0318 18:00:34.156630 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4c75bee-d0d2-4261-8f89-8c3375dbd868-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 18:00:34.164238 master-0 kubenswrapper[30278]: I0318 18:00:34.164166 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 18 18:00:34.170437 master-0 kubenswrapper[30278]: I0318 18:00:34.169817 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c75bee-d0d2-4261-8f89-8c3375dbd868-serving-cert\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 18:00:34.184828 master-0 kubenswrapper[30278]: I0318 18:00:34.184790 30278 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-insights"/"kube-root-ca.crt" Mar 18 18:00:34.192511 master-0 kubenswrapper[30278]: I0318 18:00:34.192409 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 18 18:00:34.215919 master-0 kubenswrapper[30278]: I0318 18:00:34.215873 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sclm5\" (UniqueName: \"kubernetes.io/projected/7e64a377-f497-4416-8f22-d5c7f52e0b65-kube-api-access-sclm5\") pod \"ingress-operator-66b84d69b-qb7n6\" (UID: \"7e64a377-f497-4416-8f22-d5c7f52e0b65\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" Mar 18 18:00:34.242440 master-0 kubenswrapper[30278]: I0318 18:00:34.242376 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-756j8\" (UniqueName: \"kubernetes.io/projected/ce5831a6-5a8d-4cda-9299-5d86437bcab2-kube-api-access-756j8\") pod \"marketplace-operator-89ccd998f-l5gm7\" (UID: \"ce5831a6-5a8d-4cda-9299-5d86437bcab2\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 18:00:34.243346 master-0 kubenswrapper[30278]: I0318 18:00:34.243317 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-rqcfx" Mar 18 18:00:34.263178 master-0 kubenswrapper[30278]: I0318 18:00:34.263128 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 18 18:00:34.277505 master-0 kubenswrapper[30278]: I0318 18:00:34.277443 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a94f7bff-ad61-4c53-a8eb-000a13f26971-cert\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 18:00:34.283120 master-0 kubenswrapper[30278]: I0318 
18:00:34.283091 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 18 18:00:34.286503 master-0 kubenswrapper[30278]: I0318 18:00:34.286470 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a94f7bff-ad61-4c53-a8eb-000a13f26971-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 18:00:34.323629 master-0 kubenswrapper[30278]: I0318 18:00:34.323536 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-4fdq4" Mar 18 18:00:34.328138 master-0 kubenswrapper[30278]: I0318 18:00:34.328068 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnknt\" (UniqueName: \"kubernetes.io/projected/0100a259-1358-45e8-8191-4e1f9a14ec89-kube-api-access-tnknt\") pod \"etcd-operator-8544cbcf9c-rws9x\" (UID: \"0100a259-1358-45e8-8191-4e1f9a14ec89\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-rws9x" Mar 18 18:00:34.343379 master-0 kubenswrapper[30278]: I0318 18:00:34.343330 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 18 18:00:34.371213 master-0 kubenswrapper[30278]: I0318 18:00:34.371128 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 18:00:34.377006 master-0 kubenswrapper[30278]: I0318 18:00:34.376952 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04cef0bd-f365-4bf6-864a-1895995015d6-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: 
\"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 18:00:34.385555 master-0 kubenswrapper[30278]: I0318 18:00:34.385513 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 18 18:00:34.390270 master-0 kubenswrapper[30278]: I0318 18:00:34.390225 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:00:34.390440 master-0 kubenswrapper[30278]: I0318 18:00:34.390239 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:00:34.396864 master-0 kubenswrapper[30278]: I0318 18:00:34.396808 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/04cef0bd-f365-4bf6-864a-1895995015d6-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 18:00:34.397452 master-0 kubenswrapper[30278]: I0318 18:00:34.397404 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:34.403214 master-0 kubenswrapper[30278]: I0318 18:00:34.403160 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 18 18:00:34.435181 master-0 kubenswrapper[30278]: I0318 18:00:34.435124 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sl7p\" (UniqueName: \"kubernetes.io/projected/6f26e239-2988-4faa-bc1d-24b15b95b7f1-kube-api-access-5sl7p\") pod \"cluster-image-registry-operator-5549dc66cb-ljrq8\" (UID: \"6f26e239-2988-4faa-bc1d-24b15b95b7f1\") " 
pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-ljrq8" Mar 18 18:00:34.444424 master-0 kubenswrapper[30278]: I0318 18:00:34.444257 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 18:00:34.446692 master-0 kubenswrapper[30278]: I0318 18:00:34.446636 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcf459dc-bd30-4143-b5c4-60fd01b46548-proxy-tls\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 18:00:34.463642 master-0 kubenswrapper[30278]: I0318 18:00:34.463574 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-cqcns" Mar 18 18:00:34.506919 master-0 kubenswrapper[30278]: I0318 18:00:34.506829 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8k5q\" (UniqueName: \"kubernetes.io/projected/99e215da-759d-4fff-af65-0fb64245fbd0-kube-api-access-n8k5q\") pod \"cluster-olm-operator-67dcd4998-lljnt\" (UID: \"99e215da-759d-4fff-af65-0fb64245fbd0\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-lljnt" Mar 18 18:00:34.525903 master-0 kubenswrapper[30278]: I0318 18:00:34.525838 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk59q\" (UniqueName: \"kubernetes.io/projected/cb522b02-0b93-4711-9041-566daa06b95a-kube-api-access-fk59q\") pod \"openshift-config-operator-95bf4f4d-q27fh\" (UID: \"cb522b02-0b93-4711-9041-566daa06b95a\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 18:00:34.548085 master-0 kubenswrapper[30278]: I0318 18:00:34.548020 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrdqg\" (UniqueName: 
\"kubernetes.io/projected/7c6694a8-ccd0-491b-9f21-215450f6ce67-kube-api-access-mrdqg\") pod \"cluster-node-tuning-operator-598fbc5f8f-7qwxn\" (UID: \"7c6694a8-ccd0-491b-9f21-215450f6ce67\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-7qwxn" Mar 18 18:00:34.567590 master-0 kubenswrapper[30278]: I0318 18:00:34.567516 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a3a6c2c-78e7-41f3-acff-20173cbc012a-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-wlfj4\" (UID: \"3a3a6c2c-78e7-41f3-acff-20173cbc012a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-wlfj4" Mar 18 18:00:34.583474 master-0 kubenswrapper[30278]: I0318 18:00:34.583383 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-bwq44" Mar 18 18:00:34.587337 master-0 kubenswrapper[30278]: I0318 18:00:34.587225 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm8jj\" (UniqueName: \"kubernetes.io/projected/d26d4515-391e-41a5-8c82-1b2b8a375662-kube-api-access-bm8jj\") pod \"package-server-manager-7b95f86987-6qqz4\" (UID: \"d26d4515-391e-41a5-8c82-1b2b8a375662\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4" Mar 18 18:00:34.604641 master-0 kubenswrapper[30278]: I0318 18:00:34.604585 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 18 18:00:34.608242 master-0 kubenswrapper[30278]: I0318 18:00:34.608173 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-node-bootstrap-token\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " 
pod="openshift-machine-config-operator/machine-config-server-mpmxb" Mar 18 18:00:34.623128 master-0 kubenswrapper[30278]: I0318 18:00:34.623073 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 18 18:00:34.628749 master-0 kubenswrapper[30278]: I0318 18:00:34.628680 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b3385316-45f0-46c5-ac82-683168db5878-certs\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " pod="openshift-machine-config-operator/machine-config-server-mpmxb" Mar 18 18:00:34.660446 master-0 kubenswrapper[30278]: I0318 18:00:34.660369 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26575d68-0488-4dfa-a5d0-5016e481dba6-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-p72m2\" (UID: \"26575d68-0488-4dfa-a5d0-5016e481dba6\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-p72m2" Mar 18 18:00:34.685439 master-0 kubenswrapper[30278]: I0318 18:00:34.685380 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pqww\" (UniqueName: \"kubernetes.io/projected/14a0661b-7bde-4e22-a9a9-5e3fb24df77f-kube-api-access-2pqww\") pod \"network-operator-7bd846bfc4-dxxbl\" (UID: \"14a0661b-7bde-4e22-a9a9-5e3fb24df77f\") " pod="openshift-network-operator/network-operator-7bd846bfc4-dxxbl" Mar 18 18:00:34.696878 master-0 kubenswrapper[30278]: I0318 18:00:34.696720 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clm4b\" (UniqueName: \"kubernetes.io/projected/8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311-kube-api-access-clm4b\") pod \"cluster-monitoring-operator-58845fbb57-vjrjg\" (UID: \"8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311\") " 
pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-vjrjg" Mar 18 18:00:34.706757 master-0 kubenswrapper[30278]: I0318 18:00:34.706678 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 18:00:34.711924 master-0 kubenswrapper[30278]: I0318 18:00:34.711861 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-q27fh" Mar 18 18:00:34.722679 master-0 kubenswrapper[30278]: I0318 18:00:34.722609 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5tw2\" (UniqueName: \"kubernetes.io/projected/9a240ab7-a1d5-4e9a-96f3-4590681cc7ed-kube-api-access-l5tw2\") pod \"openshift-controller-manager-operator-8c94f4649-hpsbd\" (UID: \"9a240ab7-a1d5-4e9a-96f3-4590681cc7ed\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-hpsbd" Mar 18 18:00:34.723629 master-0 kubenswrapper[30278]: I0318 18:00:34.723449 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-npx6j" Mar 18 18:00:34.736561 master-0 kubenswrapper[30278]: E0318 18:00:34.736011 30278 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:34.736561 master-0 kubenswrapper[30278]: E0318 18:00:34.736028 30278 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:34.736561 master-0 kubenswrapper[30278]: E0318 18:00:34.736081 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-cloud-controller-manager-operator-tls 
podName:0751c002-fe0e-4f13-bb9c-9accd8ca0df3 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:35.736065661 +0000 UTC m=+4.903250256 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7dff898856-kfzkl" (UID: "0751c002-fe0e-4f13-bb9c-9accd8ca0df3") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:34.736561 master-0 kubenswrapper[30278]: E0318 18:00:34.736097 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-auth-proxy-config podName:0751c002-fe0e-4f13-bb9c-9accd8ca0df3 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:35.736090102 +0000 UTC m=+4.903274697 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7dff898856-kfzkl" (UID: "0751c002-fe0e-4f13-bb9c-9accd8ca0df3") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:34.736561 master-0 kubenswrapper[30278]: E0318 18:00:34.736102 30278 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:34.736561 master-0 kubenswrapper[30278]: E0318 18:00:34.736138 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-metrics-client-ca podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:35.736124183 +0000 UTC m=+4.903308788 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-metrics-client-ca") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:34.736914 master-0 kubenswrapper[30278]: E0318 18:00:34.736860 30278 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:34.736914 master-0 kubenswrapper[30278]: E0318 18:00:34.736895 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89e6c3d6-7bd5-4df6-90db-3a349f644afb-proxy-tls podName:89e6c3d6-7bd5-4df6-90db-3a349f644afb nodeName:}" failed. No retries permitted until 2026-03-18 18:00:35.736886753 +0000 UTC m=+4.904071348 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/89e6c3d6-7bd5-4df6-90db-3a349f644afb-proxy-tls") pod "machine-config-controller-b4f87c5b9-m84zq" (UID: "89e6c3d6-7bd5-4df6-90db-3a349f644afb") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:34.738791 master-0 kubenswrapper[30278]: E0318 18:00:34.738695 30278 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:34.738791 master-0 kubenswrapper[30278]: E0318 18:00:34.738732 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7f76afa-4b23-421c-8451-46323813f06e-webhook-certs podName:e7f76afa-4b23-421c-8451-46323813f06e nodeName:}" failed. No retries permitted until 2026-03-18 18:00:35.738724714 +0000 UTC m=+4.905909309 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/e7f76afa-4b23-421c-8451-46323813f06e-webhook-certs") pod "multus-admission-controller-58c9f8fc64-9c6bk" (UID: "e7f76afa-4b23-421c-8451-46323813f06e") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:34.738791 master-0 kubenswrapper[30278]: E0318 18:00:34.738759 30278 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:34.738917 master-0 kubenswrapper[30278]: E0318 18:00:34.738767 30278 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:34.738917 master-0 kubenswrapper[30278]: E0318 18:00:34.738779 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-images podName:0751c002-fe0e-4f13-bb9c-9accd8ca0df3 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:35.738773495 +0000 UTC m=+4.905958090 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-images") pod "cluster-cloud-controller-manager-operator-7dff898856-kfzkl" (UID: "0751c002-fe0e-4f13-bb9c-9accd8ca0df3") : failed to sync configmap cache: timed out waiting for the condition Mar 18 18:00:34.738977 master-0 kubenswrapper[30278]: E0318 18:00:34.738941 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-kube-rbac-proxy-config podName:9c0dbd44-7669-41d6-bf1b-d8c1343c9d98 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:35.738909979 +0000 UTC m=+4.906094614 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-6c8df6d4b-fshkm" (UID: "9c0dbd44-7669-41d6-bf1b-d8c1343c9d98") : failed to sync secret cache: timed out waiting for the condition Mar 18 18:00:34.743471 master-0 kubenswrapper[30278]: I0318 18:00:34.743252 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 18 18:00:34.774126 master-0 kubenswrapper[30278]: I0318 18:00:34.765378 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-kcjlz" Mar 18 18:00:34.803515 master-0 kubenswrapper[30278]: I0318 18:00:34.803411 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 18 18:00:34.808818 master-0 kubenswrapper[30278]: I0318 18:00:34.808746 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwps9\" (UniqueName: \"kubernetes.io/projected/e73f2834-c56c-4cef-ac3c-2317e9a4324c-kube-api-access-qwps9\") pod \"olm-operator-5c9796789-6hngr\" (UID: \"e73f2834-c56c-4cef-ac3c-2317e9a4324c\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr" Mar 18 18:00:34.823588 master-0 kubenswrapper[30278]: I0318 18:00:34.823486 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 18:00:34.835178 master-0 kubenswrapper[30278]: I0318 18:00:34.835114 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " 
pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 18:00:34.843727 master-0 kubenswrapper[30278]: I0318 18:00:34.843664 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 18 18:00:34.864624 master-0 kubenswrapper[30278]: I0318 18:00:34.864136 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-2mk4r" Mar 18 18:00:34.883615 master-0 kubenswrapper[30278]: I0318 18:00:34.883556 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 18:00:34.904064 master-0 kubenswrapper[30278]: I0318 18:00:34.904032 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 18:00:34.924255 master-0 kubenswrapper[30278]: I0318 18:00:34.924221 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 18:00:34.943853 master-0 kubenswrapper[30278]: I0318 18:00:34.943803 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 18:00:34.963325 master-0 kubenswrapper[30278]: I0318 18:00:34.963202 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 18:00:34.984888 master-0 kubenswrapper[30278]: I0318 18:00:34.984861 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-r9bww" Mar 18 18:00:35.002803 master-0 kubenswrapper[30278]: I0318 18:00:35.002748 30278 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 18:00:35.047042 master-0 kubenswrapper[30278]: I0318 18:00:35.046967 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76j8w\" (UniqueName: \"kubernetes.io/projected/9875ed82-813c-483d-8471-8f9b74b774ee-kube-api-access-76j8w\") pod \"network-node-identity-7s68k\" (UID: \"9875ed82-813c-483d-8471-8f9b74b774ee\") " pod="openshift-network-node-identity/network-node-identity-7s68k" Mar 18 18:00:35.061903 master-0 kubenswrapper[30278]: I0318 18:00:35.061836 30278 request.go:700] Waited for 2.879485673s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-node/token Mar 18 18:00:35.067466 master-0 kubenswrapper[30278]: I0318 18:00:35.067379 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf2qx\" (UniqueName: \"kubernetes.io/projected/fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab-kube-api-access-rf2qx\") pod \"service-ca-79bc6b8d76-g5brm\" (UID: \"fd4c81e2-699b-4fdf-ac7d-1607cde6a8ab\") " pod="openshift-service-ca/service-ca-79bc6b8d76-g5brm" Mar 18 18:00:35.087030 master-0 kubenswrapper[30278]: I0318 18:00:35.086957 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lwsm\" (UniqueName: \"kubernetes.io/projected/994fff04-c1d7-4f10-8d4b-6b49a6934829-kube-api-access-9lwsm\") pod \"ovnkube-node-5l4qp\" (UID: \"994fff04-c1d7-4f10-8d4b-6b49a6934829\") " pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:35.106833 master-0 kubenswrapper[30278]: I0318 18:00:35.106759 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t92bz\" (UniqueName: \"kubernetes.io/projected/e9e04572-1425-440e-9869-6deef05e13e3-kube-api-access-t92bz\") pod \"catalog-operator-68f85b4d6c-qpgfz\" (UID: \"e9e04572-1425-440e-9869-6deef05e13e3\") 
" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz" Mar 18 18:00:35.126402 master-0 kubenswrapper[30278]: I0318 18:00:35.126367 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dt8f\" (UniqueName: \"kubernetes.io/projected/59407fdf-b1e9-4992-a3c8-54b4e26f496c-kube-api-access-9dt8f\") pod \"dns-default-lf9xl\" (UID: \"59407fdf-b1e9-4992-a3c8-54b4e26f496c\") " pod="openshift-dns/dns-default-lf9xl" Mar 18 18:00:35.146484 master-0 kubenswrapper[30278]: I0318 18:00:35.146402 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf476\" (UniqueName: \"kubernetes.io/projected/de189d27-4c60-49f1-9119-d1fde5c37b1e-kube-api-access-tf476\") pod \"control-plane-machine-set-operator-6f97756bc8-zdqtc\" (UID: \"de189d27-4c60-49f1-9119-d1fde5c37b1e\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 18:00:35.164901 master-0 kubenswrapper[30278]: I0318 18:00:35.164816 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzhsq\" (UniqueName: \"kubernetes.io/projected/e7f76afa-4b23-421c-8451-46323813f06e-kube-api-access-gzhsq\") pod \"multus-admission-controller-58c9f8fc64-9c6bk\" (UID: \"e7f76afa-4b23-421c-8451-46323813f06e\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" Mar 18 18:00:35.187672 master-0 kubenswrapper[30278]: I0318 18:00:35.187618 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd9sc\" (UniqueName: \"kubernetes.io/projected/b3385316-45f0-46c5-ac82-683168db5878-kube-api-access-wd9sc\") pod \"machine-config-server-mpmxb\" (UID: \"b3385316-45f0-46c5-ac82-683168db5878\") " pod="openshift-machine-config-operator/machine-config-server-mpmxb" Mar 18 18:00:35.206145 master-0 kubenswrapper[30278]: I0318 18:00:35.206066 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njx6n\" 
(UniqueName: \"kubernetes.io/projected/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-kube-api-access-njx6n\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 18:00:35.226444 master-0 kubenswrapper[30278]: I0318 18:00:35.226263 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd868\" (UniqueName: \"kubernetes.io/projected/1d969530-c138-4fb7-9bfe-0825be66c009-kube-api-access-cd868\") pod \"iptables-alerter-f7jp5\" (UID: \"1d969530-c138-4fb7-9bfe-0825be66c009\") " pod="openshift-network-operator/iptables-alerter-f7jp5" Mar 18 18:00:35.245519 master-0 kubenswrapper[30278]: I0318 18:00:35.245458 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlhls\" (UniqueName: \"kubernetes.io/projected/04cef0bd-f365-4bf6-864a-1895995015d6-kube-api-access-qlhls\") pod \"cloud-credential-operator-744f9dbf77-djgn7\" (UID: \"04cef0bd-f365-4bf6-864a-1895995015d6\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 18:00:35.268242 master-0 kubenswrapper[30278]: I0318 18:00:35.268152 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6c68\" (UniqueName: \"kubernetes.io/projected/c57f282a-829b-41b2-827a-f4bc598245a2-kube-api-access-d6c68\") pod \"router-default-7dcf5569b5-m5dh4\" (UID: \"c57f282a-829b-41b2-827a-f4bc598245a2\") " pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 18:00:35.286943 master-0 kubenswrapper[30278]: I0318 18:00:35.286894 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkcx9\" (UniqueName: \"kubernetes.io/projected/7d39d93e-9be3-47e1-a44e-be2d18b55446-kube-api-access-vkcx9\") pod \"csi-snapshot-controller-64854d9cff-vpjmp\" (UID: 
\"7d39d93e-9be3-47e1-a44e-be2d18b55446\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-vpjmp" Mar 18 18:00:35.309134 master-0 kubenswrapper[30278]: I0318 18:00:35.309041 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88hkw\" (UniqueName: \"kubernetes.io/projected/89e6c3d6-7bd5-4df6-90db-3a349f644afb-kube-api-access-88hkw\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" Mar 18 18:00:35.319805 master-0 kubenswrapper[30278]: I0318 18:00:35.319734 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsj86\" (UniqueName: \"kubernetes.io/projected/43fab0f2-5cfd-4b5e-a632-728fd5b960fd-kube-api-access-rsj86\") pod \"apiserver-688fbbb854-6n26v\" (UID: \"43fab0f2-5cfd-4b5e-a632-728fd5b960fd\") " pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:35.333135 master-0 kubenswrapper[30278]: I0318 18:00:35.333071 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 18:00:35.333306 master-0 kubenswrapper[30278]: I0318 18:00:35.333165 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-lf9xl" Mar 18 18:00:35.334072 master-0 kubenswrapper[30278]: I0318 18:00:35.334035 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-lf9xl" Mar 18 18:00:35.343222 master-0 kubenswrapper[30278]: I0318 18:00:35.343166 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts9b9\" (UniqueName: \"kubernetes.io/projected/fea7b899-fde4-4463-9520-4d433a8ebe21-kube-api-access-ts9b9\") pod \"multus-additional-cni-plugins-ttbr5\" (UID: \"fea7b899-fde4-4463-9520-4d433a8ebe21\") " 
pod="openshift-multus/multus-additional-cni-plugins-ttbr5" Mar 18 18:00:35.370934 master-0 kubenswrapper[30278]: I0318 18:00:35.370862 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" Mar 18 18:00:35.372106 master-0 kubenswrapper[30278]: I0318 18:00:35.371622 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xvzx\" (UniqueName: \"kubernetes.io/projected/a94f7bff-ad61-4c53-a8eb-000a13f26971-kube-api-access-5xvzx\") pod \"cluster-autoscaler-operator-866dc4744-l6hpt\" (UID: \"a94f7bff-ad61-4c53-a8eb-000a13f26971\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 18:00:35.374619 master-0 kubenswrapper[30278]: I0318 18:00:35.374003 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" Mar 18 18:00:35.376932 master-0 kubenswrapper[30278]: I0318 18:00:35.376876 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" Mar 18 18:00:35.395251 master-0 kubenswrapper[30278]: I0318 18:00:35.395191 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgnz6\" (UniqueName: \"kubernetes.io/projected/5a4f94f3-d63a-4869-b723-ae9637610b4b-kube-api-access-hgnz6\") pod \"network-metrics-daemon-mfn52\" (UID: \"5a4f94f3-d63a-4869-b723-ae9637610b4b\") " pod="openshift-multus/network-metrics-daemon-mfn52" Mar 18 18:00:35.400128 master-0 kubenswrapper[30278]: I0318 18:00:35.400061 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:00:35.412757 master-0 kubenswrapper[30278]: I0318 18:00:35.412701 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9g8f\" (UniqueName: \"kubernetes.io/projected/1db0a246-ca43-4e7c-b09e-e80218ae99b1-kube-api-access-n9g8f\") pod \"controller-manager-f5755b457-f4cbl\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") " pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 18:00:35.419775 master-0 kubenswrapper[30278]: I0318 18:00:35.419732 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fglbh\" (UniqueName: \"kubernetes.io/projected/8db04037-c7cc-4246-92c3-6e7985384b14-kube-api-access-fglbh\") pod \"packageserver-b8b994c95-kglwt\" (UID: \"8db04037-c7cc-4246-92c3-6e7985384b14\") " pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt" Mar 18 18:00:35.439697 master-0 kubenswrapper[30278]: I0318 18:00:35.439597 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfnqp\" (UniqueName: \"kubernetes.io/projected/c355c750-ae2f-49fa-9a16-8fb4f688853e-kube-api-access-zfnqp\") pod \"service-ca-operator-b865698dc-5zj8r\" (UID: \"c355c750-ae2f-49fa-9a16-8fb4f688853e\") " 
pod="openshift-service-ca-operator/service-ca-operator-b865698dc-5zj8r" Mar 18 18:00:35.473232 master-0 kubenswrapper[30278]: I0318 18:00:35.471858 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:00:35.473232 master-0 kubenswrapper[30278]: I0318 18:00:35.472044 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:00:35.480078 master-0 kubenswrapper[30278]: I0318 18:00:35.479252 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5jd4\" (UniqueName: \"kubernetes.io/projected/427e5ce9-f4b3-4f12-bb77-2b13775aa334-kube-api-access-z5jd4\") pod \"redhat-marketplace-6xmx4\" (UID: \"427e5ce9-f4b3-4f12-bb77-2b13775aa334\") " pod="openshift-marketplace/redhat-marketplace-6xmx4" Mar 18 18:00:35.481149 master-0 kubenswrapper[30278]: I0318 18:00:35.480943 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:00:35.484933 master-0 kubenswrapper[30278]: I0318 18:00:35.484900 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f48gg\" (UniqueName: \"kubernetes.io/projected/822080a5-2926-4a51-866d-86bb0b437da2-kube-api-access-f48gg\") pod \"tuned-r6tf4\" (UID: \"822080a5-2926-4a51-866d-86bb0b437da2\") " pod="openshift-cluster-node-tuning-operator/tuned-r6tf4" Mar 18 18:00:35.501861 master-0 kubenswrapper[30278]: I0318 18:00:35.501791 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x47z7\" (UniqueName: \"kubernetes.io/projected/30d77a7c-222e-41c7-8a98-219854aa3da2-kube-api-access-x47z7\") pod \"apiserver-897b458c6-vsss9\" (UID: \"30d77a7c-222e-41c7-8a98-219854aa3da2\") " pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:35.520902 master-0 kubenswrapper[30278]: I0318 18:00:35.520696 30278 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-767c7\" (UniqueName: \"kubernetes.io/projected/e0e04440-c08b-452d-9be6-9f70a4027c92-kube-api-access-767c7\") pod \"cluster-samples-operator-85f7577d78-xnx8x\" (UID: \"e0e04440-c08b-452d-9be6-9f70a4027c92\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 18:00:35.541358 master-0 kubenswrapper[30278]: I0318 18:00:35.540685 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnl7c\" (UniqueName: \"kubernetes.io/projected/dc110414-3a6b-474c-bce3-33450cab8fcd-kube-api-access-mnl7c\") pod \"certified-operators-vbglp\" (UID: \"dc110414-3a6b-474c-bce3-33450cab8fcd\") " pod="openshift-marketplace/certified-operators-vbglp" Mar 18 18:00:35.559861 master-0 kubenswrapper[30278]: I0318 18:00:35.559801 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc27m\" (UniqueName: \"kubernetes.io/projected/2d21e77e-8b61-4f03-8f17-941b7a1d8b1d-kube-api-access-fc27m\") pod \"machine-api-operator-6fbb6cf6f9-6x52p\" (UID: \"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 18:00:35.585160 master-0 kubenswrapper[30278]: I0318 18:00:35.584708 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s6f5\" (UniqueName: \"kubernetes.io/projected/978dcca6-b396-463f-9614-9e24194a1aaa-kube-api-access-5s6f5\") pod \"network-check-target-ctd49\" (UID: \"978dcca6-b396-463f-9614-9e24194a1aaa\") " pod="openshift-network-diagnostics/network-check-target-ctd49" Mar 18 18:00:35.596953 master-0 kubenswrapper[30278]: I0318 18:00:35.596894 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbctm\" (UniqueName: \"kubernetes.io/projected/56cde2f7-1742-45d6-aa22-8270cfb424a7-kube-api-access-mbctm\") pod \"catalogd-controller-manager-6864dc98f7-8vmsv\" (UID: 
\"56cde2f7-1742-45d6-aa22-8270cfb424a7\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 18:00:35.620478 master-0 kubenswrapper[30278]: I0318 18:00:35.620420 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7xqg\" (UniqueName: \"kubernetes.io/projected/c3267271-e0c5-45d6-980c-d78e4f9eef35-kube-api-access-z7xqg\") pod \"machine-config-operator-84d549f6d5-b5lps\" (UID: \"c3267271-e0c5-45d6-980c-d78e4f9eef35\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-b5lps" Mar 18 18:00:35.644704 master-0 kubenswrapper[30278]: I0318 18:00:35.642299 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljbl7\" (UniqueName: \"kubernetes.io/projected/7d72bb42-1ee6-4f61-9515-d1c5bafa896f-kube-api-access-ljbl7\") pod \"network-check-source-b4bf74f6-nlqpp\" (UID: \"7d72bb42-1ee6-4f61-9515-d1c5bafa896f\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nlqpp" Mar 18 18:00:35.659629 master-0 kubenswrapper[30278]: I0318 18:00:35.659561 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx596\" (UniqueName: \"kubernetes.io/projected/253ec853-f637-4aa4-8e8e-eb655dfccccb-kube-api-access-cx596\") pod \"route-controller-manager-57dbfd879f-44tfw\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") " pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 18:00:35.671828 master-0 kubenswrapper[30278]: I0318 18:00:35.671765 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" Mar 18 18:00:35.673837 master-0 kubenswrapper[30278]: I0318 18:00:35.673788 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" Mar 18 18:00:35.684406 master-0 kubenswrapper[30278]: I0318 18:00:35.681397 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbdth\" (UniqueName: \"kubernetes.io/projected/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-kube-api-access-qbdth\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 18:00:35.709134 master-0 kubenswrapper[30278]: I0318 18:00:35.701802 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tskm\" (UniqueName: \"kubernetes.io/projected/4460d3d3-c55f-4f1c-a623-e3feccf937bb-kube-api-access-2tskm\") pod \"redhat-operators-bgdql\" (UID: \"4460d3d3-c55f-4f1c-a623-e3feccf937bb\") " pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 18:00:35.718178 master-0 kubenswrapper[30278]: I0318 18:00:35.718124 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb496\" (UniqueName: \"kubernetes.io/projected/92153864-7959-4482-bf24-c8db36435fb5-kube-api-access-sb496\") pod \"machine-approver-5c6485487f-z74t2\" (UID: \"92153864-7959-4482-bf24-c8db36435fb5\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 18:00:35.753774 master-0 kubenswrapper[30278]: I0318 18:00:35.753625 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wkqk\" (UniqueName: \"kubernetes.io/projected/efd0d6b1-652c-44b2-b918-5c7ced5d15c3-kube-api-access-5wkqk\") pod \"node-resolver-bwcgq\" (UID: \"efd0d6b1-652c-44b2-b918-5c7ced5d15c3\") " pod="openshift-dns/node-resolver-bwcgq" Mar 18 18:00:35.758523 master-0 kubenswrapper[30278]: I0318 18:00:35.758476 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz8rf\" (UniqueName: 
\"kubernetes.io/projected/d4c75bee-d0d2-4261-8f89-8c3375dbd868-kube-api-access-bz8rf\") pod \"insights-operator-68bf6ff9d6-hm777\" (UID: \"d4c75bee-d0d2-4261-8f89-8c3375dbd868\") " pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" Mar 18 18:00:35.767757 master-0 kubenswrapper[30278]: I0318 18:00:35.767715 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7"] Mar 18 18:00:35.784889 master-0 kubenswrapper[30278]: I0318 18:00:35.784812 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjtg7\" (UniqueName: \"kubernetes.io/projected/489dd872-39c3-4ce2-8dc1-9d0552b88616-kube-api-access-wjtg7\") pod \"community-operators-8485d\" (UID: \"489dd872-39c3-4ce2-8dc1-9d0552b88616\") " pod="openshift-marketplace/community-operators-8485d" Mar 18 18:00:35.787375 master-0 kubenswrapper[30278]: I0318 18:00:35.787316 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/89e6c3d6-7bd5-4df6-90db-3a349f644afb-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: \"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" Mar 18 18:00:35.787699 master-0 kubenswrapper[30278]: I0318 18:00:35.787636 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 18:00:35.787834 master-0 kubenswrapper[30278]: W0318 18:00:35.787690 30278 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04cef0bd_f365_4bf6_864a_1895995015d6.slice/crio-d8aee1d0c35cacddb409a79f79ff907fbe1e637517fb56328d18b8c854c91621 WatchSource:0}: Error finding container d8aee1d0c35cacddb409a79f79ff907fbe1e637517fb56328d18b8c854c91621: Status 404 returned error can't find the container with id d8aee1d0c35cacddb409a79f79ff907fbe1e637517fb56328d18b8c854c91621 Mar 18 18:00:35.787834 master-0 kubenswrapper[30278]: I0318 18:00:35.787756 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 18:00:35.787834 master-0 kubenswrapper[30278]: I0318 18:00:35.787794 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7f76afa-4b23-421c-8451-46323813f06e-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-9c6bk\" (UID: \"e7f76afa-4b23-421c-8451-46323813f06e\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" Mar 18 18:00:35.788136 master-0 kubenswrapper[30278]: I0318 18:00:35.787949 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 18:00:35.788136 master-0 kubenswrapper[30278]: I0318 18:00:35.787963 30278 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 18:00:35.788136 master-0 kubenswrapper[30278]: I0318 18:00:35.787995 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 18:00:35.788136 master-0 kubenswrapper[30278]: I0318 18:00:35.788032 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 18:00:35.788617 master-0 kubenswrapper[30278]: I0318 18:00:35.788306 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e7f76afa-4b23-421c-8451-46323813f06e-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-9c6bk\" (UID: \"e7f76afa-4b23-421c-8451-46323813f06e\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-9c6bk" Mar 18 18:00:35.788617 master-0 kubenswrapper[30278]: I0318 18:00:35.788548 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-auth-proxy-config\") pod 
\"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 18:00:35.788809 master-0 kubenswrapper[30278]: I0318 18:00:35.788683 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 18:00:35.788915 master-0 kubenswrapper[30278]: I0318 18:00:35.788810 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/0751c002-fe0e-4f13-bb9c-9accd8ca0df3-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-kfzkl\" (UID: \"0751c002-fe0e-4f13-bb9c-9accd8ca0df3\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-kfzkl" Mar 18 18:00:35.789065 master-0 kubenswrapper[30278]: I0318 18:00:35.789025 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0dbd44-7669-41d6-bf1b-d8c1343c9d98-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-fshkm\" (UID: \"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 18:00:35.789595 master-0 kubenswrapper[30278]: I0318 18:00:35.789545 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/89e6c3d6-7bd5-4df6-90db-3a349f644afb-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-m84zq\" (UID: 
\"89e6c3d6-7bd5-4df6-90db-3a349f644afb\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-m84zq" Mar 18 18:00:35.801543 master-0 kubenswrapper[30278]: I0318 18:00:35.801479 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdab27a1-1d7a-4dc5-b828-eba3f57592dd-kube-api-access\") pod \"cluster-version-operator-7d58488df-l48xm\" (UID: \"fdab27a1-1d7a-4dc5-b828-eba3f57592dd\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-l48xm" Mar 18 18:00:35.813023 master-0 kubenswrapper[30278]: I0318 18:00:35.812736 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt"] Mar 18 18:00:35.829304 master-0 kubenswrapper[30278]: I0318 18:00:35.829218 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzp78\" (UniqueName: \"kubernetes.io/projected/fcf459dc-bd30-4143-b5c4-60fd01b46548-kube-api-access-xzp78\") pod \"machine-config-daemon-5l8hh\" (UID: \"fcf459dc-bd30-4143-b5c4-60fd01b46548\") " pod="openshift-machine-config-operator/machine-config-daemon-5l8hh" Mar 18 18:00:35.841793 master-0 kubenswrapper[30278]: I0318 18:00:35.841738 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmsm4\" (UniqueName: \"kubernetes.io/projected/dba5f8d7-4d25-42b5-9c58-813221bf96bb-kube-api-access-lmsm4\") pod \"csi-snapshot-controller-operator-5f5d689c6b-z9vvz\" (UID: \"dba5f8d7-4d25-42b5-9c58-813221bf96bb\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-z9vvz" Mar 18 18:00:35.871964 master-0 kubenswrapper[30278]: I0318 18:00:35.862625 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8d74\" (UniqueName: \"kubernetes.io/projected/c38c5f03-a753-49f4-ab06-33e75a03bd45-kube-api-access-d8d74\") pod 
\"cluster-storage-operator-7d87854d6-d4bmc\" (UID: \"c38c5f03-a753-49f4-ab06-33e75a03bd45\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-d4bmc" Mar 18 18:00:35.871964 master-0 kubenswrapper[30278]: I0318 18:00:35.867086 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 18:00:35.889729 master-0 kubenswrapper[30278]: I0318 18:00:35.889621 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4zcv\" (UniqueName: \"kubernetes.io/projected/7b94e08c-7944-445e-bfb7-6c7c14880c65-kube-api-access-g4zcv\") pod \"ovnkube-control-plane-57f769d897-m82wx\" (UID: \"7b94e08c-7944-445e-bfb7-6c7c14880c65\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-m82wx" Mar 18 18:00:35.890542 master-0 kubenswrapper[30278]: I0318 18:00:35.890465 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 18:00:35.902205 master-0 kubenswrapper[30278]: I0318 18:00:35.900776 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g42g\" (UniqueName: \"kubernetes.io/projected/7047a862-8cbe-46fb-9af3-06ba224cbe26-kube-api-access-4g42g\") pod \"migrator-8487694857-8dsx2\" (UID: \"7047a862-8cbe-46fb-9af3-06ba224cbe26\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-8dsx2" Mar 18 18:00:35.904883 master-0 kubenswrapper[30278]: I0318 18:00:35.903797 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc"] Mar 18 18:00:35.915786 master-0 kubenswrapper[30278]: I0318 18:00:35.915695 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqrdl\" (UniqueName: \"kubernetes.io/projected/efbcb147-d077-4749-9289-1682daccb657-kube-api-access-vqrdl\") pod 
\"operator-controller-controller-manager-57777556ff-bk26c\" (UID: \"efbcb147-d077-4749-9289-1682daccb657\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 18:00:35.919090 master-0 kubenswrapper[30278]: W0318 18:00:35.918231 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde189d27_4c60_49f1_9119_d1fde5c37b1e.slice/crio-09ff9c4131e0661c434552b1cc4986239e8587762c91d4e02a0528d7f71cce02 WatchSource:0}: Error finding container 09ff9c4131e0661c434552b1cc4986239e8587762c91d4e02a0528d7f71cce02: Status 404 returned error can't find the container with id 09ff9c4131e0661c434552b1cc4986239e8587762c91d4e02a0528d7f71cce02 Mar 18 18:00:35.922098 master-0 kubenswrapper[30278]: I0318 18:00:35.921928 30278 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 18:00:35.935864 master-0 kubenswrapper[30278]: I0318 18:00:35.935819 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grnqn\" (UniqueName: \"kubernetes.io/projected/5b0e38f3-3ab5-4519-86a6-68003deb94da-kube-api-access-grnqn\") pod \"multus-64tx9\" (UID: \"5b0e38f3-3ab5-4519-86a6-68003deb94da\") " pod="openshift-multus/multus-64tx9" Mar 18 18:00:35.954955 master-0 kubenswrapper[30278]: E0318 18:00:35.954912 30278 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 18:00:35.954955 master-0 kubenswrapper[30278]: E0318 18:00:35.954950 30278 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 18:00:35.955228 master-0 kubenswrapper[30278]: E0318 18:00:35.955012 30278 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access podName:4285e80c-1ff9-42b3-9692-9f2ab6b61916 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:36.454993498 +0000 UTC m=+5.622178083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access") pod "installer-3-master-0" (UID: "4285e80c-1ff9-42b3-9692-9f2ab6b61916") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 18:00:35.972679 master-0 kubenswrapper[30278]: I0318 18:00:35.972628 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" Mar 18 18:00:35.972890 master-0 kubenswrapper[30278]: E0318 18:00:35.972793 30278 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 18:00:35.981338 master-0 kubenswrapper[30278]: I0318 18:00:35.981248 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" Mar 18 18:00:35.993393 master-0 kubenswrapper[30278]: E0318 18:00:35.993145 30278 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:00:36.048370 master-0 kubenswrapper[30278]: I0318 18:00:36.048309 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:36.157105 master-0 kubenswrapper[30278]: I0318 18:00:36.157047 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x"] Mar 18 18:00:36.207004 master-0 kubenswrapper[30278]: I0318 18:00:36.206965 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vbglp" Mar 18 18:00:36.217524 master-0 kubenswrapper[30278]: I0318 18:00:36.217103 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p"] Mar 18 18:00:36.245397 master-0 kubenswrapper[30278]: W0318 18:00:36.245354 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d21e77e_8b61_4f03_8f17_941b7a1d8b1d.slice/crio-3fd9da261da4460b610d4e7ecbbabe48d473cc9137c01fa2243f1c1d96fedcdf WatchSource:0}: Error finding container 3fd9da261da4460b610d4e7ecbbabe48d473cc9137c01fa2243f1c1d96fedcdf: Status 404 returned error can't find the container with id 3fd9da261da4460b610d4e7ecbbabe48d473cc9137c01fa2243f1c1d96fedcdf Mar 18 18:00:36.338521 master-0 kubenswrapper[30278]: I0318 18:00:36.338443 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 18:00:36.415738 master-0 
kubenswrapper[30278]: I0318 18:00:36.415546 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm"] Mar 18 18:00:36.416079 master-0 kubenswrapper[30278]: I0318 18:00:36.415902 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" event={"ID":"de189d27-4c60-49f1-9119-d1fde5c37b1e","Type":"ContainerStarted","Data":"09ff9c4131e0661c434552b1cc4986239e8587762c91d4e02a0528d7f71cce02"} Mar 18 18:00:36.417994 master-0 kubenswrapper[30278]: I0318 18:00:36.417931 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" event={"ID":"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d","Type":"ContainerStarted","Data":"f2eefc339406afa0fc4c22326ea4d35a139e5ceaa9260e55e1fb7278564c5117"} Mar 18 18:00:36.417994 master-0 kubenswrapper[30278]: I0318 18:00:36.417960 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" event={"ID":"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d","Type":"ContainerStarted","Data":"3fd9da261da4460b610d4e7ecbbabe48d473cc9137c01fa2243f1c1d96fedcdf"} Mar 18 18:00:36.420127 master-0 kubenswrapper[30278]: I0318 18:00:36.420021 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" event={"ID":"04cef0bd-f365-4bf6-864a-1895995015d6","Type":"ContainerStarted","Data":"3bad85335013c5e5047acc8f551c4bf30e43c0b9bdfe646251716f979269ac65"} Mar 18 18:00:36.420127 master-0 kubenswrapper[30278]: I0318 18:00:36.420101 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" event={"ID":"04cef0bd-f365-4bf6-864a-1895995015d6","Type":"ContainerStarted","Data":"d8aee1d0c35cacddb409a79f79ff907fbe1e637517fb56328d18b8c854c91621"} Mar 18 18:00:36.421874 master-0 
kubenswrapper[30278]: I0318 18:00:36.421658 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" event={"ID":"e0e04440-c08b-452d-9be6-9f70a4027c92","Type":"ContainerStarted","Data":"560026f62736ca3d6b49b1c4c1c3542a17b5dbb589715f7a17263a5e021d2ad2"} Mar 18 18:00:36.423208 master-0 kubenswrapper[30278]: I0318 18:00:36.423132 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" event={"ID":"a94f7bff-ad61-4c53-a8eb-000a13f26971","Type":"ContainerStarted","Data":"e2c4d882124749eb933977e7e11e7b8dbc5be7aa02682d53dbbb4d0f0e78816f"} Mar 18 18:00:36.423208 master-0 kubenswrapper[30278]: I0318 18:00:36.423175 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" event={"ID":"a94f7bff-ad61-4c53-a8eb-000a13f26971","Type":"ContainerStarted","Data":"ada9cd0ea818b69ccd397f69149a438984174fc298ab0868b797867330e9f291"} Mar 18 18:00:36.423404 master-0 kubenswrapper[30278]: W0318 18:00:36.423311 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c0dbd44_7669_41d6_bf1b_d8c1343c9d98.slice/crio-a98db32b2b8dd1924c892e3d5121c548a2f74179c414ca9976e0775f42c63cf4 WatchSource:0}: Error finding container a98db32b2b8dd1924c892e3d5121c548a2f74179c414ca9976e0775f42c63cf4: Status 404 returned error can't find the container with id a98db32b2b8dd1924c892e3d5121c548a2f74179c414ca9976e0775f42c63cf4 Mar 18 18:00:36.426440 master-0 kubenswrapper[30278]: I0318 18:00:36.425650 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" event={"ID":"92153864-7959-4482-bf24-c8db36435fb5","Type":"ContainerStarted","Data":"4f0cb3badd76679b66eccf8fccfb2a6e1e3421348b63ff35d3dbcc849dc29068"} Mar 18 18:00:36.426440 master-0 
kubenswrapper[30278]: I0318 18:00:36.425728 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" event={"ID":"92153864-7959-4482-bf24-c8db36435fb5","Type":"ContainerStarted","Data":"cf8bd1306403ebd6e7a02c6130432fb61e62742c44916b59c4ffb76556e65c96"} Mar 18 18:00:36.426440 master-0 kubenswrapper[30278]: I0318 18:00:36.426043 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:00:36.426440 master-0 kubenswrapper[30278]: I0318 18:00:36.426184 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:00:36.512791 master-0 kubenswrapper[30278]: I0318 18:00:36.512604 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 18:00:36.513080 master-0 kubenswrapper[30278]: E0318 18:00:36.512856 30278 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 18:00:36.513080 master-0 kubenswrapper[30278]: E0318 18:00:36.512914 30278 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 18:00:36.513080 master-0 kubenswrapper[30278]: E0318 18:00:36.513013 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access podName:4285e80c-1ff9-42b3-9692-9f2ab6b61916 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:37.512980891 +0000 UTC m=+6.680165496 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access") pod "installer-3-master-0" (UID: "4285e80c-1ff9-42b3-9692-9f2ab6b61916") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 18:00:36.565971 master-0 kubenswrapper[30278]: I0318 18:00:36.565901 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 18:00:36.572295 master-0 kubenswrapper[30278]: I0318 18:00:36.572222 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 18:00:37.333096 master-0 kubenswrapper[30278]: I0318 18:00:37.333025 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=6.333005195 podStartE2EDuration="6.333005195s" podCreationTimestamp="2026-03-18 18:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:00:37.332712117 +0000 UTC m=+6.499896722" watchObservedRunningTime="2026-03-18 18:00:37.333005195 +0000 UTC m=+6.500189790" Mar 18 18:00:37.369369 master-0 kubenswrapper[30278]: I0318 18:00:37.369330 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:37.374544 master-0 kubenswrapper[30278]: I0318 18:00:37.374434 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:37.439420 master-0 kubenswrapper[30278]: I0318 18:00:37.439321 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" 
event={"ID":"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98","Type":"ContainerStarted","Data":"a98db32b2b8dd1924c892e3d5121c548a2f74179c414ca9976e0775f42c63cf4"} Mar 18 18:00:37.463979 master-0 kubenswrapper[30278]: I0318 18:00:37.463907 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:37.530242 master-0 kubenswrapper[30278]: I0318 18:00:37.530187 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 18:00:37.530538 master-0 kubenswrapper[30278]: E0318 18:00:37.530510 30278 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 18:00:37.530587 master-0 kubenswrapper[30278]: E0318 18:00:37.530549 30278 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 18:00:37.530621 master-0 kubenswrapper[30278]: E0318 18:00:37.530601 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access podName:4285e80c-1ff9-42b3-9692-9f2ab6b61916 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:39.530582345 +0000 UTC m=+8.697767150 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access") pod "installer-3-master-0" (UID: "4285e80c-1ff9-42b3-9692-9f2ab6b61916") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 18:00:38.003255 master-0 kubenswrapper[30278]: I0318 18:00:38.003132 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 18:00:38.003475 master-0 kubenswrapper[30278]: I0318 18:00:38.003333 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:00:38.007486 master-0 kubenswrapper[30278]: I0318 18:00:38.007451 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 18:00:38.133462 master-0 kubenswrapper[30278]: I0318 18:00:38.133388 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=7.133368542 podStartE2EDuration="7.133368542s" podCreationTimestamp="2026-03-18 18:00:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:00:38.133058243 +0000 UTC m=+7.300242848" watchObservedRunningTime="2026-03-18 18:00:38.133368542 +0000 UTC m=+7.300553137" Mar 18 18:00:38.140389 master-0 kubenswrapper[30278]: I0318 18:00:38.140353 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt" Mar 18 18:00:38.146173 master-0 kubenswrapper[30278]: I0318 18:00:38.146136 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-b8b994c95-kglwt" Mar 18 18:00:38.167183 master-0 kubenswrapper[30278]: I0318 18:00:38.167127 30278 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 18 18:00:38.185556 master-0 kubenswrapper[30278]: I0318 18:00:38.185512 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 18 18:00:38.428262 master-0 kubenswrapper[30278]: I0318 18:00:38.425387 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg" Mar 18 18:00:38.443254 master-0 kubenswrapper[30278]: I0318 18:00:38.443189 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:38.445364 master-0 kubenswrapper[30278]: I0318 18:00:38.445322 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-7r9qg" Mar 18 18:00:38.465164 master-0 kubenswrapper[30278]: I0318 18:00:38.465104 30278 generic.go:334] "Generic (PLEG): container finished" podID="d4c75bee-d0d2-4261-8f89-8c3375dbd868" containerID="350645ba3bc2c5d9132063ea0cd6e79ddd087baff486b5e73a7bad9c73b8c8c7" exitCode=0 Mar 18 18:00:38.465370 master-0 kubenswrapper[30278]: I0318 18:00:38.465193 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" event={"ID":"d4c75bee-d0d2-4261-8f89-8c3375dbd868","Type":"ContainerDied","Data":"350645ba3bc2c5d9132063ea0cd6e79ddd087baff486b5e73a7bad9c73b8c8c7"} Mar 18 18:00:38.465972 master-0 kubenswrapper[30278]: I0318 18:00:38.465941 30278 scope.go:117] "RemoveContainer" containerID="350645ba3bc2c5d9132063ea0cd6e79ddd087baff486b5e73a7bad9c73b8c8c7" Mar 18 18:00:38.479985 master-0 kubenswrapper[30278]: I0318 18:00:38.479924 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 18 18:00:38.487017 master-0 kubenswrapper[30278]: I0318 
18:00:38.486984 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp" Mar 18 18:00:38.920087 master-0 kubenswrapper[30278]: I0318 18:00:38.919975 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 18:00:38.923471 master-0 kubenswrapper[30278]: I0318 18:00:38.920147 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:00:38.923471 master-0 kubenswrapper[30278]: I0318 18:00:38.922623 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7dcf5569b5-m5dh4" Mar 18 18:00:39.062912 master-0 kubenswrapper[30278]: I0318 18:00:39.062869 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 18:00:39.106326 master-0 kubenswrapper[30278]: I0318 18:00:39.106220 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 18:00:39.312526 master-0 kubenswrapper[30278]: I0318 18:00:39.309852 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:00:39.331574 master-0 kubenswrapper[30278]: I0318 18:00:39.331499 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 18:00:39.384385 master-0 kubenswrapper[30278]: I0318 18:00:39.378918 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bgdql" Mar 18 18:00:39.474104 master-0 kubenswrapper[30278]: I0318 18:00:39.474067 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/5.log" Mar 18 
18:00:39.475112 master-0 kubenswrapper[30278]: I0318 18:00:39.475076 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8485d" Mar 18 18:00:39.476561 master-0 kubenswrapper[30278]: I0318 18:00:39.475487 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/4.log" Mar 18 18:00:39.476978 master-0 kubenswrapper[30278]: I0318 18:00:39.476930 30278 generic.go:334] "Generic (PLEG): container finished" podID="7e64a377-f497-4416-8f22-d5c7f52e0b65" containerID="029fdec7254f162c629eedb8568b32645f8d7d59c5b8e802c4b2084d177c4d77" exitCode=1 Mar 18 18:00:39.477102 master-0 kubenswrapper[30278]: I0318 18:00:39.477062 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" event={"ID":"7e64a377-f497-4416-8f22-d5c7f52e0b65","Type":"ContainerDied","Data":"029fdec7254f162c629eedb8568b32645f8d7d59c5b8e802c4b2084d177c4d77"} Mar 18 18:00:39.477162 master-0 kubenswrapper[30278]: I0318 18:00:39.477108 30278 scope.go:117] "RemoveContainer" containerID="af159afaa033efb036b878b04bdffa8fd814f7fc2cf559b2f4b190fa136e0905" Mar 18 18:00:39.477512 master-0 kubenswrapper[30278]: I0318 18:00:39.477484 30278 scope.go:117] "RemoveContainer" containerID="029fdec7254f162c629eedb8568b32645f8d7d59c5b8e802c4b2084d177c4d77" Mar 18 18:00:39.477655 master-0 kubenswrapper[30278]: I0318 18:00:39.477634 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:00:39.477655 master-0 kubenswrapper[30278]: I0318 18:00:39.477651 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:00:39.551653 master-0 kubenswrapper[30278]: I0318 18:00:39.551553 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8485d" Mar 18 18:00:39.580596 
master-0 kubenswrapper[30278]: I0318 18:00:39.576830 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 18:00:39.580596 master-0 kubenswrapper[30278]: E0318 18:00:39.577485 30278 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 18:00:39.580596 master-0 kubenswrapper[30278]: E0318 18:00:39.577506 30278 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 18:00:39.580596 master-0 kubenswrapper[30278]: E0318 18:00:39.577563 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access podName:4285e80c-1ff9-42b3-9692-9f2ab6b61916 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:43.577530954 +0000 UTC m=+12.744715549 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access") pod "installer-3-master-0" (UID: "4285e80c-1ff9-42b3-9692-9f2ab6b61916") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 18:00:40.059531 master-0 kubenswrapper[30278]: I0318 18:00:40.059472 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 18:00:40.065367 master-0 kubenswrapper[30278]: I0318 18:00:40.063353 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-8vmsv" Mar 18 18:00:40.236654 master-0 kubenswrapper[30278]: I0318 18:00:40.236495 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:40.242602 master-0 kubenswrapper[30278]: I0318 18:00:40.242560 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-688fbbb854-6n26v" Mar 18 18:00:40.475436 master-0 kubenswrapper[30278]: I0318 18:00:40.475376 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:00:40.801943 master-0 kubenswrapper[30278]: I0318 18:00:40.800924 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:00:40.805068 master-0 kubenswrapper[30278]: I0318 18:00:40.805044 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:00:40.981028 master-0 kubenswrapper[30278]: I0318 18:00:40.980968 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-vbglp" Mar 18 18:00:41.065861 master-0 kubenswrapper[30278]: I0318 18:00:41.065419 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:41.071914 master-0 kubenswrapper[30278]: I0318 18:00:41.071847 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-897b458c6-vsss9" Mar 18 18:00:41.338481 master-0 kubenswrapper[30278]: I0318 18:00:41.338305 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 18:00:41.341427 master-0 kubenswrapper[30278]: I0318 18:00:41.341404 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-bk26c" Mar 18 18:00:41.524134 master-0 kubenswrapper[30278]: I0318 18:00:41.524063 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:00:42.562777 master-0 kubenswrapper[30278]: I0318 18:00:42.561372 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" event={"ID":"d4c75bee-d0d2-4261-8f89-8c3375dbd868","Type":"ContainerStarted","Data":"9890e276619ebeef2fdfb1c8e386ea0f74ad0cc5d40e53b9f1ccd6d8646d8339"} Mar 18 18:00:42.588165 master-0 kubenswrapper[30278]: I0318 18:00:42.587908 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/5.log" Mar 18 18:00:42.592410 master-0 kubenswrapper[30278]: I0318 18:00:42.591693 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-qb7n6" 
event={"ID":"7e64a377-f497-4416-8f22-d5c7f52e0b65","Type":"ContainerStarted","Data":"a1dae1437d022d2a0e617e2ff61a4bdb4abbc546289d23e71375f04fe1056243"}
Mar 18 18:00:42.990678 master-0 kubenswrapper[30278]: I0318 18:00:42.990619 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-ctd49"
Mar 18 18:00:42.999346 master-0 kubenswrapper[30278]: I0318 18:00:42.996691 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-ctd49"
Mar 18 18:00:43.140561 master-0 kubenswrapper[30278]: I0318 18:00:43.140423 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:43.140800 master-0 kubenswrapper[30278]: I0318 18:00:43.140652 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 18:00:43.140800 master-0 kubenswrapper[30278]: I0318 18:00:43.140662 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 18:00:43.171413 master-0 kubenswrapper[30278]: I0318 18:00:43.170419 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:43.354572 master-0 kubenswrapper[30278]: I0318 18:00:43.354520 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6xmx4"
Mar 18 18:00:43.442308 master-0 kubenswrapper[30278]: I0318 18:00:43.440558 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6xmx4"
Mar 18 18:00:43.584007 master-0 kubenswrapper[30278]: I0318 18:00:43.583945 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 18:00:43.584529 master-0 kubenswrapper[30278]: E0318 18:00:43.584179 30278 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 18:00:43.584529 master-0 kubenswrapper[30278]: E0318 18:00:43.584213 30278 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 18:00:43.584529 master-0 kubenswrapper[30278]: E0318 18:00:43.584312 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access podName:4285e80c-1ff9-42b3-9692-9f2ab6b61916 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:51.584256173 +0000 UTC m=+20.751440758 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access") pod "installer-3-master-0" (UID: "4285e80c-1ff9-42b3-9692-9f2ab6b61916") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 18:00:43.648133 master-0 kubenswrapper[30278]: I0318 18:00:43.648080 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" event={"ID":"e0e04440-c08b-452d-9be6-9f70a4027c92","Type":"ContainerStarted","Data":"86291daa9e2be18b99b7ec40fc92b85d7cb257ac1af78ac2e5ca324d8dc670a1"}
Mar 18 18:00:43.648133 master-0 kubenswrapper[30278]: I0318 18:00:43.648134 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xnx8x" event={"ID":"e0e04440-c08b-452d-9be6-9f70a4027c92","Type":"ContainerStarted","Data":"b7c8290de94d331041012e88757b8a123265ae8f856fb4642bbca2ff40f00d22"}
Mar 18 18:00:43.664599 master-0 kubenswrapper[30278]: I0318 18:00:43.664425 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-l6hpt" event={"ID":"a94f7bff-ad61-4c53-a8eb-000a13f26971","Type":"ContainerStarted","Data":"39b94a0a131e47a5cb50ae1c3f9b172ddf4fccb580319c8cd637b305d3f0ae4d"}
Mar 18 18:00:43.676957 master-0 kubenswrapper[30278]: I0318 18:00:43.676881 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" event={"ID":"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98","Type":"ContainerStarted","Data":"3e6f55b7943db520bc7b3f27e9df76459a43c27e18e2883ef001e96ebddd1ad5"}
Mar 18 18:00:43.676957 master-0 kubenswrapper[30278]: I0318 18:00:43.676958 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-fshkm" event={"ID":"9c0dbd44-7669-41d6-bf1b-d8c1343c9d98","Type":"ContainerStarted","Data":"a7245082726bb1b819d635f400af5fb625028bd030721e12e099fc44c0b8c051"}
Mar 18 18:00:43.698871 master-0 kubenswrapper[30278]: I0318 18:00:43.696320 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-z74t2" event={"ID":"92153864-7959-4482-bf24-c8db36435fb5","Type":"ContainerStarted","Data":"bd54ebb87ba3034a82e0f6ee8668308984e5db734cfae0473446dc28b371e23d"}
Mar 18 18:00:43.707961 master-0 kubenswrapper[30278]: I0318 18:00:43.707897 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-zdqtc" event={"ID":"de189d27-4c60-49f1-9119-d1fde5c37b1e","Type":"ContainerStarted","Data":"4c9e3a3e844c376c725b998a0a35e65213001108da8278ec7f426182f906b310"}
Mar 18 18:00:43.708953 master-0 kubenswrapper[30278]: I0318 18:00:43.708882 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 18:00:43.947127 master-0 kubenswrapper[30278]: I0318 18:00:43.947040 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8485d"
Mar 18 18:00:43.993762 master-0 kubenswrapper[30278]: I0318 18:00:43.993666 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8485d"
Mar 18 18:00:44.038373 master-0 kubenswrapper[30278]: I0318 18:00:44.037951 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz"
Mar 18 18:00:44.046261 master-0 kubenswrapper[30278]: I0318 18:00:44.046189 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-qpgfz"
Mar 18 18:00:44.190031 master-0 kubenswrapper[30278]: I0318 18:00:44.189959 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr"
Mar 18 18:00:44.198691 master-0 kubenswrapper[30278]: I0318 18:00:44.195890 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-6hngr"
Mar 18 18:00:44.320582 master-0 kubenswrapper[30278]: I0318 18:00:44.315844 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4"
Mar 18 18:00:44.321705 master-0 kubenswrapper[30278]: I0318 18:00:44.321662 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-6qqz4"
Mar 18 18:00:44.828736 master-0 kubenswrapper[30278]: I0318 18:00:44.828671 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6xmx4"
Mar 18 18:00:44.873002 master-0 kubenswrapper[30278]: I0318 18:00:44.872944 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6xmx4"
Mar 18 18:00:45.165748 master-0 kubenswrapper[30278]: I0318 18:00:45.165614 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl"
Mar 18 18:00:45.171311 master-0 kubenswrapper[30278]: I0318 18:00:45.170995 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl"
Mar 18 18:00:45.620946 master-0 kubenswrapper[30278]: I0318 18:00:45.620896 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t"]
Mar 18 18:00:45.621196 master-0 kubenswrapper[30278]: E0318 18:00:45.621161 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler"
Mar 18 18:00:45.621196 master-0 kubenswrapper[30278]: I0318 18:00:45.621179 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler"
Mar 18 18:00:45.621797 master-0 kubenswrapper[30278]: E0318 18:00:45.621189 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41191498-89c5-44dc-b648-dbea889c72f5" containerName="installer"
Mar 18 18:00:45.621797 master-0 kubenswrapper[30278]: I0318 18:00:45.621796 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="41191498-89c5-44dc-b648-dbea889c72f5" containerName="installer"
Mar 18 18:00:45.621876 master-0 kubenswrapper[30278]: E0318 18:00:45.621808 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9655d59-a594-499f-b474-dfc870239174" containerName="installer"
Mar 18 18:00:45.621876 master-0 kubenswrapper[30278]: I0318 18:00:45.621815 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9655d59-a594-499f-b474-dfc870239174" containerName="installer"
Mar 18 18:00:45.621876 master-0 kubenswrapper[30278]: E0318 18:00:45.621830 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver"
Mar 18 18:00:45.621876 master-0 kubenswrapper[30278]: I0318 18:00:45.621836 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver"
Mar 18 18:00:45.621876 master-0 kubenswrapper[30278]: E0318 18:00:45.621846 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a709ef9-91c0-4193-acb4-0594d02f554c" containerName="installer"
Mar 18 18:00:45.621876 master-0 kubenswrapper[30278]: I0318 18:00:45.621852 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a709ef9-91c0-4193-acb4-0594d02f554c" containerName="installer"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: E0318 18:00:45.621884 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be6633f4-7370-49b8-a607-6a3fa52a098e" containerName="assisted-installer-controller"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: I0318 18:00:45.621894 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="be6633f4-7370-49b8-a607-6a3fa52a098e" containerName="assisted-installer-controller"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: E0318 18:00:45.621904 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9d8bd7-68a0-458f-9d25-f600932e303c" containerName="installer"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: I0318 18:00:45.621916 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9d8bd7-68a0-458f-9d25-f600932e303c" containerName="installer"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: E0318 18:00:45.621926 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: I0318 18:00:45.621932 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: E0318 18:00:45.621944 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: I0318 18:00:45.621950 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: E0318 18:00:45.621958 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4285e80c-1ff9-42b3-9692-9f2ab6b61916" containerName="installer"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: I0318 18:00:45.621964 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="4285e80c-1ff9-42b3-9692-9f2ab6b61916" containerName="installer"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: E0318 18:00:45.621971 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e216493-e343-4c59-a3c1-5aad5edd67e2" containerName="installer"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: I0318 18:00:45.621976 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e216493-e343-4c59-a3c1-5aad5edd67e2" containerName="installer"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: E0318 18:00:45.621987 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37bbec19-22b8-411c-901b-d89c92b0bd4d" containerName="installer"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: I0318 18:00:45.621992 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="37bbec19-22b8-411c-901b-d89c92b0bd4d" containerName="installer"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: E0318 18:00:45.621999 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98c88ce7-94dd-434c-99fc-96d900d544e6" containerName="installer"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: I0318 18:00:45.622005 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="98c88ce7-94dd-434c-99fc-96d900d544e6" containerName="installer"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: E0318 18:00:45.622012 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08451d5b-cf84-45a1-a16d-7ce10a83a6e7" containerName="installer"
Mar 18 18:00:45.622051 master-0 kubenswrapper[30278]: I0318 18:00:45.622018 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="08451d5b-cf84-45a1-a16d-7ce10a83a6e7" containerName="installer"
Mar 18 18:00:45.622535 master-0 kubenswrapper[30278]: I0318 18:00:45.622122 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="be6633f4-7370-49b8-a607-6a3fa52a098e" containerName="assisted-installer-controller"
Mar 18 18:00:45.622535 master-0 kubenswrapper[30278]: I0318 18:00:45.622138 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="08451d5b-cf84-45a1-a16d-7ce10a83a6e7" containerName="installer"
Mar 18 18:00:45.622535 master-0 kubenswrapper[30278]: I0318 18:00:45.622146 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9655d59-a594-499f-b474-dfc870239174" containerName="installer"
Mar 18 18:00:45.622535 master-0 kubenswrapper[30278]: I0318 18:00:45.622154 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 18 18:00:45.622535 master-0 kubenswrapper[30278]: I0318 18:00:45.622163 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a709ef9-91c0-4193-acb4-0594d02f554c" containerName="installer"
Mar 18 18:00:45.622535 master-0 kubenswrapper[30278]: I0318 18:00:45.622172 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9d8bd7-68a0-458f-9d25-f600932e303c" containerName="installer"
Mar 18 18:00:45.622535 master-0 kubenswrapper[30278]: I0318 18:00:45.622180 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="41191498-89c5-44dc-b648-dbea889c72f5" containerName="installer"
Mar 18 18:00:45.622535 master-0 kubenswrapper[30278]: I0318 18:00:45.622190 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="4285e80c-1ff9-42b3-9692-9f2ab6b61916" containerName="installer"
Mar 18 18:00:45.622535 master-0 kubenswrapper[30278]: I0318 18:00:45.622197 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="98c88ce7-94dd-434c-99fc-96d900d544e6" containerName="installer"
Mar 18 18:00:45.622535 master-0 kubenswrapper[30278]: I0318 18:00:45.622206 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="37bbec19-22b8-411c-901b-d89c92b0bd4d" containerName="installer"
Mar 18 18:00:45.622535 master-0 kubenswrapper[30278]: I0318 18:00:45.622216 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler"
Mar 18 18:00:45.622535 master-0 kubenswrapper[30278]: I0318 18:00:45.622223 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e216493-e343-4c59-a3c1-5aad5edd67e2" containerName="installer"
Mar 18 18:00:45.622535 master-0 kubenswrapper[30278]: I0318 18:00:45.622234 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver"
Mar 18 18:00:45.622535 master-0 kubenswrapper[30278]: I0318 18:00:45.622240 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 18 18:00:45.624586 master-0 kubenswrapper[30278]: I0318 18:00:45.624562 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t"
Mar 18 18:00:45.631565 master-0 kubenswrapper[30278]: I0318 18:00:45.631171 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 18 18:00:45.631565 master-0 kubenswrapper[30278]: I0318 18:00:45.631361 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-5g5z8"
Mar 18 18:00:45.631783 master-0 kubenswrapper[30278]: I0318 18:00:45.631741 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 18 18:00:45.636190 master-0 kubenswrapper[30278]: I0318 18:00:45.636152 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"]
Mar 18 18:00:45.640283 master-0 kubenswrapper[30278]: I0318 18:00:45.637196 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-v28rj"]
Mar 18 18:00:45.640283 master-0 kubenswrapper[30278]: I0318 18:00:45.638038 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.640283 master-0 kubenswrapper[30278]: I0318 18:00:45.638366 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"
Mar 18 18:00:45.640807 master-0 kubenswrapper[30278]: I0318 18:00:45.640787 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 18 18:00:45.641045 master-0 kubenswrapper[30278]: I0318 18:00:45.641032 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 18 18:00:45.641313 master-0 kubenswrapper[30278]: I0318 18:00:45.641300 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 18 18:00:45.641527 master-0 kubenswrapper[30278]: I0318 18:00:45.641487 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-wh6dt"
Mar 18 18:00:45.641527 master-0 kubenswrapper[30278]: I0318 18:00:45.641500 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 18 18:00:45.641683 master-0 kubenswrapper[30278]: I0318 18:00:45.641593 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-ncdpm"
Mar 18 18:00:45.641763 master-0 kubenswrapper[30278]: I0318 18:00:45.641748 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 18 18:00:45.657288 master-0 kubenswrapper[30278]: I0318 18:00:45.656053 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t"]
Mar 18 18:00:45.657288 master-0 kubenswrapper[30278]: I0318 18:00:45.657122 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"]
Mar 18 18:00:45.663819 master-0 kubenswrapper[30278]: I0318 18:00:45.663775 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wvwp\" (UniqueName: \"kubernetes.io/projected/2ee860d7-4262-43d7-aeb2-b77040a69133-kube-api-access-4wvwp\") pod \"openshift-state-metrics-5dc6c74576-smd8t\" (UID: \"2ee860d7-4262-43d7-aeb2-b77040a69133\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t"
Mar 18 18:00:45.663819 master-0 kubenswrapper[30278]: I0318 18:00:45.663829 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2ee860d7-4262-43d7-aeb2-b77040a69133-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-smd8t\" (UID: \"2ee860d7-4262-43d7-aeb2-b77040a69133\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t"
Mar 18 18:00:45.664020 master-0 kubenswrapper[30278]: I0318 18:00:45.663860 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1674d0a4-8c16-4535-ac1e-e3220ef50e57-node-exporter-textfile\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.664020 master-0 kubenswrapper[30278]: I0318 18:00:45.663884 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkgqn\" (UniqueName: \"kubernetes.io/projected/1674d0a4-8c16-4535-ac1e-e3220ef50e57-kube-api-access-bkgqn\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.664020 master-0 kubenswrapper[30278]: I0318 18:00:45.663903 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1674d0a4-8c16-4535-ac1e-e3220ef50e57-sys\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.664020 master-0 kubenswrapper[30278]: I0318 18:00:45.663925 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ee860d7-4262-43d7-aeb2-b77040a69133-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-smd8t\" (UID: \"2ee860d7-4262-43d7-aeb2-b77040a69133\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t"
Mar 18 18:00:45.664020 master-0 kubenswrapper[30278]: I0318 18:00:45.663947 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wghlc\" (UniqueName: \"kubernetes.io/projected/5876677a-9e8a-4625-af71-833b259a1596-kube-api-access-wghlc\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"
Mar 18 18:00:45.664020 master-0 kubenswrapper[30278]: I0318 18:00:45.663966 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5876677a-9e8a-4625-af71-833b259a1596-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"
Mar 18 18:00:45.664020 master-0 kubenswrapper[30278]: I0318 18:00:45.663988 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1674d0a4-8c16-4535-ac1e-e3220ef50e57-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.664020 master-0 kubenswrapper[30278]: I0318 18:00:45.664007 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/5876677a-9e8a-4625-af71-833b259a1596-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"
Mar 18 18:00:45.664305 master-0 kubenswrapper[30278]: I0318 18:00:45.664044 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1674d0a4-8c16-4535-ac1e-e3220ef50e57-root\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.664305 master-0 kubenswrapper[30278]: I0318 18:00:45.664063 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1674d0a4-8c16-4535-ac1e-e3220ef50e57-node-exporter-wtmp\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.664305 master-0 kubenswrapper[30278]: I0318 18:00:45.664295 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2ee860d7-4262-43d7-aeb2-b77040a69133-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-smd8t\" (UID: \"2ee860d7-4262-43d7-aeb2-b77040a69133\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t"
Mar 18 18:00:45.672730 master-0 kubenswrapper[30278]: I0318 18:00:45.664319 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5876677a-9e8a-4625-af71-833b259a1596-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"
Mar 18 18:00:45.672730 master-0 kubenswrapper[30278]: I0318 18:00:45.664338 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1674d0a4-8c16-4535-ac1e-e3220ef50e57-node-exporter-tls\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.672730 master-0 kubenswrapper[30278]: I0318 18:00:45.664475 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5876677a-9e8a-4625-af71-833b259a1596-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"
Mar 18 18:00:45.672730 master-0 kubenswrapper[30278]: I0318 18:00:45.664545 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1674d0a4-8c16-4535-ac1e-e3220ef50e57-metrics-client-ca\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.672730 master-0 kubenswrapper[30278]: I0318 18:00:45.664571 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/5876677a-9e8a-4625-af71-833b259a1596-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.765983 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1674d0a4-8c16-4535-ac1e-e3220ef50e57-root\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766038 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1674d0a4-8c16-4535-ac1e-e3220ef50e57-node-exporter-wtmp\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766080 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2ee860d7-4262-43d7-aeb2-b77040a69133-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-smd8t\" (UID: \"2ee860d7-4262-43d7-aeb2-b77040a69133\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766104 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5876677a-9e8a-4625-af71-833b259a1596-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766126 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1674d0a4-8c16-4535-ac1e-e3220ef50e57-node-exporter-tls\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766146 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5876677a-9e8a-4625-af71-833b259a1596-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766162 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1674d0a4-8c16-4535-ac1e-e3220ef50e57-metrics-client-ca\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766178 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/5876677a-9e8a-4625-af71-833b259a1596-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766197 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wvwp\" (UniqueName: \"kubernetes.io/projected/2ee860d7-4262-43d7-aeb2-b77040a69133-kube-api-access-4wvwp\") pod \"openshift-state-metrics-5dc6c74576-smd8t\" (UID: \"2ee860d7-4262-43d7-aeb2-b77040a69133\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766266 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2ee860d7-4262-43d7-aeb2-b77040a69133-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-smd8t\" (UID: \"2ee860d7-4262-43d7-aeb2-b77040a69133\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766322 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1674d0a4-8c16-4535-ac1e-e3220ef50e57-node-exporter-textfile\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766362 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkgqn\" (UniqueName: \"kubernetes.io/projected/1674d0a4-8c16-4535-ac1e-e3220ef50e57-kube-api-access-bkgqn\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766383 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1674d0a4-8c16-4535-ac1e-e3220ef50e57-sys\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766404 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ee860d7-4262-43d7-aeb2-b77040a69133-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-smd8t\" (UID: \"2ee860d7-4262-43d7-aeb2-b77040a69133\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766420 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wghlc\" (UniqueName: \"kubernetes.io/projected/5876677a-9e8a-4625-af71-833b259a1596-kube-api-access-wghlc\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766440 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5876677a-9e8a-4625-af71-833b259a1596-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766458 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1674d0a4-8c16-4535-ac1e-e3220ef50e57-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.766481 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/5876677a-9e8a-4625-af71-833b259a1596-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.767332 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/5876677a-9e8a-4625-af71-833b259a1596-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"
Mar 18 18:00:45.767430 master-0 kubenswrapper[30278]: I0318 18:00:45.767379 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/1674d0a4-8c16-4535-ac1e-e3220ef50e57-root\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.768144 master-0 kubenswrapper[30278]: I0318 18:00:45.767503 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/1674d0a4-8c16-4535-ac1e-e3220ef50e57-node-exporter-wtmp\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj"
Mar 18 18:00:45.768144 master-0 kubenswrapper[30278]: I0318 18:00:45.768115 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2ee860d7-4262-43d7-aeb2-b77040a69133-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-smd8t\" (UID: \"2ee860d7-4262-43d7-aeb2-b77040a69133\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t"
Mar 18 18:00:45.771209 master-0 kubenswrapper[30278]: I0318 18:00:45.768764 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5876677a-9e8a-4625-af71-833b259a1596-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"
Mar 18 18:00:45.771209 master-0 kubenswrapper[30278]: E0318 18:00:45.770324 30278 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found
Mar 18 18:00:45.771209 master-0 kubenswrapper[30278]: E0318 18:00:45.770393 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5876677a-9e8a-4625-af71-833b259a1596-kube-state-metrics-tls podName:5876677a-9e8a-4625-af71-833b259a1596 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:46.270375679 +0000 UTC m=+15.437560264 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/5876677a-9e8a-4625-af71-833b259a1596-kube-state-metrics-tls") pod "kube-state-metrics-7bbc969446-72wb5" (UID: "5876677a-9e8a-4625-af71-833b259a1596") : secret "kube-state-metrics-tls" not found Mar 18 18:00:45.771501 master-0 kubenswrapper[30278]: I0318 18:00:45.771224 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1674d0a4-8c16-4535-ac1e-e3220ef50e57-metrics-client-ca\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj" Mar 18 18:00:45.773960 master-0 kubenswrapper[30278]: I0318 18:00:45.771615 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/5876677a-9e8a-4625-af71-833b259a1596-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5" Mar 18 18:00:45.773960 master-0 kubenswrapper[30278]: I0318 18:00:45.773906 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1674d0a4-8c16-4535-ac1e-e3220ef50e57-sys\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj" Mar 18 18:00:45.775723 master-0 kubenswrapper[30278]: I0318 18:00:45.774325 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/1674d0a4-8c16-4535-ac1e-e3220ef50e57-node-exporter-textfile\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj" Mar 18 18:00:45.781778 master-0 kubenswrapper[30278]: I0318 18:00:45.780978 30278 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1674d0a4-8c16-4535-ac1e-e3220ef50e57-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj" Mar 18 18:00:45.781778 master-0 kubenswrapper[30278]: I0318 18:00:45.781058 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5876677a-9e8a-4625-af71-833b259a1596-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5" Mar 18 18:00:45.787290 master-0 kubenswrapper[30278]: I0318 18:00:45.782801 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/1674d0a4-8c16-4535-ac1e-e3220ef50e57-node-exporter-tls\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj" Mar 18 18:00:45.787290 master-0 kubenswrapper[30278]: I0318 18:00:45.782871 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2ee860d7-4262-43d7-aeb2-b77040a69133-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-smd8t\" (UID: \"2ee860d7-4262-43d7-aeb2-b77040a69133\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t" Mar 18 18:00:45.787290 master-0 kubenswrapper[30278]: I0318 18:00:45.784243 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ee860d7-4262-43d7-aeb2-b77040a69133-openshift-state-metrics-tls\") pod 
\"openshift-state-metrics-5dc6c74576-smd8t\" (UID: \"2ee860d7-4262-43d7-aeb2-b77040a69133\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t" Mar 18 18:00:45.806633 master-0 kubenswrapper[30278]: I0318 18:00:45.805178 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wvwp\" (UniqueName: \"kubernetes.io/projected/2ee860d7-4262-43d7-aeb2-b77040a69133-kube-api-access-4wvwp\") pod \"openshift-state-metrics-5dc6c74576-smd8t\" (UID: \"2ee860d7-4262-43d7-aeb2-b77040a69133\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t" Mar 18 18:00:45.811375 master-0 kubenswrapper[30278]: I0318 18:00:45.806047 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkgqn\" (UniqueName: \"kubernetes.io/projected/1674d0a4-8c16-4535-ac1e-e3220ef50e57-kube-api-access-bkgqn\") pod \"node-exporter-v28rj\" (UID: \"1674d0a4-8c16-4535-ac1e-e3220ef50e57\") " pod="openshift-monitoring/node-exporter-v28rj" Mar 18 18:00:45.822612 master-0 kubenswrapper[30278]: I0318 18:00:45.822552 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wghlc\" (UniqueName: \"kubernetes.io/projected/5876677a-9e8a-4625-af71-833b259a1596-kube-api-access-wghlc\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5" Mar 18 18:00:45.974445 master-0 kubenswrapper[30278]: I0318 18:00:45.966326 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t" Mar 18 18:00:45.985129 master-0 kubenswrapper[30278]: I0318 18:00:45.985084 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-v28rj" Mar 18 18:00:46.260155 master-0 kubenswrapper[30278]: I0318 18:00:46.260057 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vbglp" Mar 18 18:00:46.278299 master-0 kubenswrapper[30278]: I0318 18:00:46.278219 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5876677a-9e8a-4625-af71-833b259a1596-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5" Mar 18 18:00:46.283302 master-0 kubenswrapper[30278]: I0318 18:00:46.282124 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5876677a-9e8a-4625-af71-833b259a1596-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-72wb5\" (UID: \"5876677a-9e8a-4625-af71-833b259a1596\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5" Mar 18 18:00:46.306835 master-0 kubenswrapper[30278]: I0318 18:00:46.306517 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5" Mar 18 18:00:46.324308 master-0 kubenswrapper[30278]: I0318 18:00:46.324188 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vbglp" Mar 18 18:00:49.107297 master-0 kubenswrapper[30278]: I0318 18:00:49.107026 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 18:00:49.115558 master-0 kubenswrapper[30278]: I0318 18:00:49.114896 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.140303 master-0 kubenswrapper[30278]: I0318 18:00:49.139498 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-2pg6x" Mar 18 18:00:49.140303 master-0 kubenswrapper[30278]: I0318 18:00:49.140020 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 18 18:00:49.140303 master-0 kubenswrapper[30278]: I0318 18:00:49.140177 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 18 18:00:49.140303 master-0 kubenswrapper[30278]: I0318 18:00:49.140336 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 18 18:00:49.140659 master-0 kubenswrapper[30278]: I0318 18:00:49.140481 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 18 18:00:49.140659 master-0 kubenswrapper[30278]: I0318 18:00:49.140537 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 18 18:00:49.140735 master-0 kubenswrapper[30278]: I0318 18:00:49.140676 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 18 18:00:49.147371 master-0 kubenswrapper[30278]: I0318 18:00:49.144724 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 18 18:00:49.147371 master-0 kubenswrapper[30278]: I0318 18:00:49.144958 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 18 18:00:49.178296 master-0 kubenswrapper[30278]: I0318 18:00:49.174555 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 18:00:49.245302 master-0 kubenswrapper[30278]: I0318 18:00:49.244100 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/89b1dfdf-4633-45af-8abd-931a76eca960-config-out\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.245302 master-0 kubenswrapper[30278]: I0318 18:00:49.244191 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.245302 master-0 kubenswrapper[30278]: I0318 18:00:49.244222 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghl7k\" (UniqueName: \"kubernetes.io/projected/89b1dfdf-4633-45af-8abd-931a76eca960-kube-api-access-ghl7k\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.245302 master-0 kubenswrapper[30278]: I0318 18:00:49.244253 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.245302 master-0 kubenswrapper[30278]: I0318 18:00:49.244292 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.245302 master-0 kubenswrapper[30278]: I0318 18:00:49.244330 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-web-config\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.245302 master-0 kubenswrapper[30278]: I0318 18:00:49.244357 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.245302 master-0 kubenswrapper[30278]: I0318 18:00:49.244372 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.245302 master-0 kubenswrapper[30278]: I0318 18:00:49.244391 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.245302 master-0 
kubenswrapper[30278]: I0318 18:00:49.244409 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-config-volume\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.245302 master-0 kubenswrapper[30278]: I0318 18:00:49.244426 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.245302 master-0 kubenswrapper[30278]: I0318 18:00:49.244444 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/89b1dfdf-4633-45af-8abd-931a76eca960-tls-assets\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.347301 master-0 kubenswrapper[30278]: I0318 18:00:49.346320 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.347301 master-0 kubenswrapper[30278]: I0318 18:00:49.346432 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: 
\"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.347571 master-0 kubenswrapper[30278]: I0318 18:00:49.347310 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-web-config\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.347571 master-0 kubenswrapper[30278]: I0318 18:00:49.347462 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.347571 master-0 kubenswrapper[30278]: I0318 18:00:49.347492 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.347571 master-0 kubenswrapper[30278]: I0318 18:00:49.347519 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.347571 master-0 kubenswrapper[30278]: I0318 18:00:49.347572 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-config-volume\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.347722 master-0 kubenswrapper[30278]: I0318 18:00:49.347613 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.347722 master-0 kubenswrapper[30278]: I0318 18:00:49.347637 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/89b1dfdf-4633-45af-8abd-931a76eca960-tls-assets\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.347722 master-0 kubenswrapper[30278]: I0318 18:00:49.347698 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/89b1dfdf-4633-45af-8abd-931a76eca960-config-out\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.347810 master-0 kubenswrapper[30278]: I0318 18:00:49.347786 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.347841 master-0 kubenswrapper[30278]: I0318 18:00:49.347830 30278 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-ghl7k\" (UniqueName: \"kubernetes.io/projected/89b1dfdf-4633-45af-8abd-931a76eca960-kube-api-access-ghl7k\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.352308 master-0 kubenswrapper[30278]: I0318 18:00:49.349011 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.352308 master-0 kubenswrapper[30278]: E0318 18:00:49.349201 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle podName:89b1dfdf-4633-45af-8abd-931a76eca960 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:49.849164131 +0000 UTC m=+19.016348716 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960") : configmap references non-existent config key: ca-bundle.crt Mar 18 18:00:49.352308 master-0 kubenswrapper[30278]: I0318 18:00:49.350146 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.359320 master-0 kubenswrapper[30278]: I0318 18:00:49.353474 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.359320 master-0 kubenswrapper[30278]: I0318 18:00:49.354651 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.359320 master-0 kubenswrapper[30278]: I0318 18:00:49.354783 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-config-volume\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.359320 
master-0 kubenswrapper[30278]: I0318 18:00:49.355191 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.359320 master-0 kubenswrapper[30278]: I0318 18:00:49.355727 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.359497 master-0 kubenswrapper[30278]: I0318 18:00:49.359314 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/89b1dfdf-4633-45af-8abd-931a76eca960-config-out\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.363288 master-0 kubenswrapper[30278]: I0318 18:00:49.360747 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-web-config\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.363288 master-0 kubenswrapper[30278]: I0318 18:00:49.360747 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/89b1dfdf-4633-45af-8abd-931a76eca960-tls-assets\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.379288 master-0 
kubenswrapper[30278]: I0318 18:00:49.376312 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghl7k\" (UniqueName: \"kubernetes.io/projected/89b1dfdf-4633-45af-8abd-931a76eca960-kube-api-access-ghl7k\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.854810 master-0 kubenswrapper[30278]: I0318 18:00:49.854744 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:49.855100 master-0 kubenswrapper[30278]: E0318 18:00:49.855038 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle podName:89b1dfdf-4633-45af-8abd-931a76eca960 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:50.854995873 +0000 UTC m=+20.022180468 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960") : configmap references non-existent config key: ca-bundle.crt Mar 18 18:00:50.018960 master-0 kubenswrapper[30278]: I0318 18:00:50.018685 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"] Mar 18 18:00:50.021028 master-0 kubenswrapper[30278]: I0318 18:00:50.021002 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft" Mar 18 18:00:50.023723 master-0 kubenswrapper[30278]: I0318 18:00:50.023114 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-pwxkh" Mar 18 18:00:50.023723 master-0 kubenswrapper[30278]: I0318 18:00:50.023584 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 18 18:00:50.024918 master-0 kubenswrapper[30278]: I0318 18:00:50.024879 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 18 18:00:50.025461 master-0 kubenswrapper[30278]: I0318 18:00:50.025424 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-2oo4hd4u5lrf1" Mar 18 18:00:50.025846 master-0 kubenswrapper[30278]: I0318 18:00:50.025822 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 18 18:00:50.026095 master-0 kubenswrapper[30278]: I0318 18:00:50.025975 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 18 18:00:50.026145 master-0 kubenswrapper[30278]: I0318 18:00:50.026136 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 18 18:00:50.036816 master-0 kubenswrapper[30278]: I0318 18:00:50.036343 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"] Mar 18 18:00:50.165579 master-0 kubenswrapper[30278]: I0318 18:00:50.165461 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-thanos-querier-tls\") 
pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.165579 master-0 kubenswrapper[30278]: I0318 18:00:50.165509 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.165579 master-0 kubenswrapper[30278]: I0318 18:00:50.165541 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.166459 master-0 kubenswrapper[30278]: I0318 18:00:50.165564 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-grpc-tls\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.166459 master-0 kubenswrapper[30278]: I0318 18:00:50.165669 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.166459 master-0 kubenswrapper[30278]: I0318 18:00:50.165689 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.166459 master-0 kubenswrapper[30278]: I0318 18:00:50.165720 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qltn7\" (UniqueName: \"kubernetes.io/projected/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-kube-api-access-qltn7\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.166459 master-0 kubenswrapper[30278]: I0318 18:00:50.165746 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-metrics-client-ca\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.267394 master-0 kubenswrapper[30278]: I0318 18:00:50.266754 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qltn7\" (UniqueName: \"kubernetes.io/projected/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-kube-api-access-qltn7\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.267394 master-0 kubenswrapper[30278]: I0318 18:00:50.266842 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-metrics-client-ca\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.267394 master-0 kubenswrapper[30278]: I0318 18:00:50.266921 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-thanos-querier-tls\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.267394 master-0 kubenswrapper[30278]: I0318 18:00:50.266950 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.267394 master-0 kubenswrapper[30278]: I0318 18:00:50.267000 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.267394 master-0 kubenswrapper[30278]: I0318 18:00:50.267030 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-grpc-tls\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.267394 master-0 kubenswrapper[30278]: I0318 18:00:50.267089 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.267394 master-0 kubenswrapper[30278]: I0318 18:00:50.267115 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.268285 master-0 kubenswrapper[30278]: I0318 18:00:50.268230 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-metrics-client-ca\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.270948 master-0 kubenswrapper[30278]: I0318 18:00:50.270921 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.271625 master-0 kubenswrapper[30278]: I0318 18:00:50.271491 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.273305 master-0 kubenswrapper[30278]: I0318 18:00:50.273228 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.273373 master-0 kubenswrapper[30278]: I0318 18:00:50.273253 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.276185 master-0 kubenswrapper[30278]: I0318 18:00:50.275759 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-grpc-tls\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.279710 master-0 kubenswrapper[30278]: I0318 18:00:50.279676 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-secret-thanos-querier-tls\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.286901 master-0 kubenswrapper[30278]: I0318 18:00:50.286863 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qltn7\" (UniqueName: \"kubernetes.io/projected/b0f7a4e5-c29e-43aa-8c76-b342e5abcc55-kube-api-access-qltn7\") pod \"thanos-querier-7cb46549d5-gm2ft\" (UID: \"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55\") " pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.358100 master-0 kubenswrapper[30278]: I0318 18:00:50.358028 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:00:50.481181 master-0 kubenswrapper[30278]: I0318 18:00:50.481072 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 18:00:50.876568 master-0 kubenswrapper[30278]: I0318 18:00:50.876486 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 18:00:50.876856 master-0 kubenswrapper[30278]: E0318 18:00:50.876709 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle podName:89b1dfdf-4633-45af-8abd-931a76eca960 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:52.876687799 +0000 UTC m=+22.043872404 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960") : configmap references non-existent config key: ca-bundle.crt
Mar 18 18:00:51.423437 master-0 kubenswrapper[30278]: I0318 18:00:51.423350 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"]
Mar 18 18:00:51.425257 master-0 kubenswrapper[30278]: I0318 18:00:51.424845 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.427441 master-0 kubenswrapper[30278]: I0318 18:00:51.427325 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-ticnjnaemlaa"
Mar 18 18:00:51.427563 master-0 kubenswrapper[30278]: I0318 18:00:51.427473 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 18 18:00:51.428015 master-0 kubenswrapper[30278]: I0318 18:00:51.427979 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 18 18:00:51.428195 master-0 kubenswrapper[30278]: I0318 18:00:51.428169 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-h8kg7"
Mar 18 18:00:51.428363 master-0 kubenswrapper[30278]: I0318 18:00:51.428335 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 18 18:00:51.429337 master-0 kubenswrapper[30278]: I0318 18:00:51.429309 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 18 18:00:51.449063 master-0 kubenswrapper[30278]: I0318 18:00:51.449001 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"]
Mar 18 18:00:51.713998 master-0 kubenswrapper[30278]: I0318 18:00:51.713608 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6f89981d-e643-4015-8af6-5e7582182466-metrics-server-audit-profiles\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.713998 master-0 kubenswrapper[30278]: I0318 18:00:51.713673 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6f89981d-e643-4015-8af6-5e7582182466-secret-metrics-client-certs\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.713998 master-0 kubenswrapper[30278]: I0318 18:00:51.713911 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6f89981d-e643-4015-8af6-5e7582182466-secret-metrics-server-tls\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.714349 master-0 kubenswrapper[30278]: I0318 18:00:51.713984 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 18:00:51.714349 master-0 kubenswrapper[30278]: I0318 18:00:51.714084 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sml8l\" (UniqueName: \"kubernetes.io/projected/6f89981d-e643-4015-8af6-5e7582182466-kube-api-access-sml8l\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.714349 master-0 kubenswrapper[30278]: E0318 18:00:51.714129 30278 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 18:00:51.714349 master-0 kubenswrapper[30278]: E0318 18:00:51.714146 30278 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 18:00:51.714349 master-0 kubenswrapper[30278]: E0318 18:00:51.714183 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access podName:4285e80c-1ff9-42b3-9692-9f2ab6b61916 nodeName:}" failed. No retries permitted until 2026-03-18 18:01:07.714170287 +0000 UTC m=+36.881354882 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access") pod "installer-3-master-0" (UID: "4285e80c-1ff9-42b3-9692-9f2ab6b61916") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 18:00:51.714349 master-0 kubenswrapper[30278]: I0318 18:00:51.714208 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6f89981d-e643-4015-8af6-5e7582182466-audit-log\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.714349 master-0 kubenswrapper[30278]: I0318 18:00:51.714248 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f89981d-e643-4015-8af6-5e7582182466-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.714349 master-0 kubenswrapper[30278]: I0318 18:00:51.714292 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f89981d-e643-4015-8af6-5e7582182466-client-ca-bundle\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.814886 master-0 kubenswrapper[30278]: I0318 18:00:51.814826 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6f89981d-e643-4015-8af6-5e7582182466-audit-log\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.815234 master-0 kubenswrapper[30278]: I0318 18:00:51.814987 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f89981d-e643-4015-8af6-5e7582182466-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.815234 master-0 kubenswrapper[30278]: I0318 18:00:51.815017 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f89981d-e643-4015-8af6-5e7582182466-client-ca-bundle\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.815234 master-0 kubenswrapper[30278]: I0318 18:00:51.815064 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6f89981d-e643-4015-8af6-5e7582182466-metrics-server-audit-profiles\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.815234 master-0 kubenswrapper[30278]: I0318 18:00:51.815093 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6f89981d-e643-4015-8af6-5e7582182466-secret-metrics-client-certs\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.815234 master-0 kubenswrapper[30278]: I0318 18:00:51.815118 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6f89981d-e643-4015-8af6-5e7582182466-secret-metrics-server-tls\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.815234 master-0 kubenswrapper[30278]: I0318 18:00:51.815154 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sml8l\" (UniqueName: \"kubernetes.io/projected/6f89981d-e643-4015-8af6-5e7582182466-kube-api-access-sml8l\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.816013 master-0 kubenswrapper[30278]: I0318 18:00:51.815459 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6f89981d-e643-4015-8af6-5e7582182466-audit-log\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.816985 master-0 kubenswrapper[30278]: I0318 18:00:51.816902 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f89981d-e643-4015-8af6-5e7582182466-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.818186 master-0 kubenswrapper[30278]: I0318 18:00:51.818153 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6f89981d-e643-4015-8af6-5e7582182466-secret-metrics-client-certs\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.821046 master-0 kubenswrapper[30278]: I0318 18:00:51.820373 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f89981d-e643-4015-8af6-5e7582182466-client-ca-bundle\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.822509 master-0 kubenswrapper[30278]: I0318 18:00:51.821995 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6f89981d-e643-4015-8af6-5e7582182466-secret-metrics-server-tls\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.827528 master-0 kubenswrapper[30278]: I0318 18:00:51.824495 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6f89981d-e643-4015-8af6-5e7582182466-metrics-server-audit-profiles\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:51.835862 master-0 kubenswrapper[30278]: I0318 18:00:51.835804 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sml8l\" (UniqueName: \"kubernetes.io/projected/6f89981d-e643-4015-8af6-5e7582182466-kube-api-access-sml8l\") pod \"metrics-server-6b789d4fdf-d4nw8\" (UID: \"6f89981d-e643-4015-8af6-5e7582182466\") " pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:52.054113 master-0 kubenswrapper[30278]: I0318 18:00:52.053701 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:00:52.791942 master-0 kubenswrapper[30278]: W0318 18:00:52.789178 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1674d0a4_8c16_4535_ac1e_e3220ef50e57.slice/crio-129e665374520e86d9a484abb6d802b56529bdea72db51d4d4c7e3ae23ea5c3f WatchSource:0}: Error finding container 129e665374520e86d9a484abb6d802b56529bdea72db51d4d4c7e3ae23ea5c3f: Status 404 returned error can't find the container with id 129e665374520e86d9a484abb6d802b56529bdea72db51d4d4c7e3ae23ea5c3f
Mar 18 18:00:52.944878 master-0 kubenswrapper[30278]: I0318 18:00:52.944743 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 18:00:52.950397 master-0 kubenswrapper[30278]: E0318 18:00:52.949898 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle podName:89b1dfdf-4633-45af-8abd-931a76eca960 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:56.949854613 +0000 UTC m=+26.117039208 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960") : configmap references non-existent config key: ca-bundle.crt
Mar 18 18:00:53.284897 master-0 kubenswrapper[30278]: I0318 18:00:53.284057 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t"]
Mar 18 18:00:53.335251 master-0 kubenswrapper[30278]: I0318 18:00:53.335192 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"]
Mar 18 18:00:53.343938 master-0 kubenswrapper[30278]: W0318 18:00:53.343752 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0f7a4e5_c29e_43aa_8c76_b342e5abcc55.slice/crio-28c60a098dae93202d03f971b2e0a699a89082d1926e930e0f5d28bbf7568a99 WatchSource:0}: Error finding container 28c60a098dae93202d03f971b2e0a699a89082d1926e930e0f5d28bbf7568a99: Status 404 returned error can't find the container with id 28c60a098dae93202d03f971b2e0a699a89082d1926e930e0f5d28bbf7568a99
Mar 18 18:00:53.354735 master-0 kubenswrapper[30278]: I0318 18:00:53.354685 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"]
Mar 18 18:00:53.357063 master-0 kubenswrapper[30278]: I0318 18:00:53.357021 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-72wb5"]
Mar 18 18:00:53.373500 master-0 kubenswrapper[30278]: W0318 18:00:53.373452 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5876677a_9e8a_4625_af71_833b259a1596.slice/crio-6933f435d3e00896cd152628fe51af8ae612f5c95000827220649407f9cae916 WatchSource:0}: Error finding container 6933f435d3e00896cd152628fe51af8ae612f5c95000827220649407f9cae916: Status 404 returned error can't find the container with id 6933f435d3e00896cd152628fe51af8ae612f5c95000827220649407f9cae916
Mar 18 18:00:53.613571 master-0 kubenswrapper[30278]: I0318 18:00:53.613525 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 18 18:00:53.616031 master-0 kubenswrapper[30278]: I0318 18:00:53.616000 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.618256 master-0 kubenswrapper[30278]: I0318 18:00:53.618219 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Mar 18 18:00:53.619210 master-0 kubenswrapper[30278]: I0318 18:00:53.619188 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Mar 18 18:00:53.619455 master-0 kubenswrapper[30278]: I0318 18:00:53.619433 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Mar 18 18:00:53.619667 master-0 kubenswrapper[30278]: I0318 18:00:53.619649 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-pm4sf"
Mar 18 18:00:53.619838 master-0 kubenswrapper[30278]: I0318 18:00:53.619819 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Mar 18 18:00:53.619930 master-0 kubenswrapper[30278]: I0318 18:00:53.619906 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Mar 18 18:00:53.620015 master-0 kubenswrapper[30278]: I0318 18:00:53.619996 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Mar 18 18:00:53.620110 master-0 kubenswrapper[30278]: I0318 18:00:53.620089 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-66rqjfmn9qiqc"
Mar 18 18:00:53.620191 master-0 kubenswrapper[30278]: I0318 18:00:53.620172 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Mar 18 18:00:53.628542 master-0 kubenswrapper[30278]: I0318 18:00:53.628483 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Mar 18 18:00:53.628993 master-0 kubenswrapper[30278]: I0318 18:00:53.628723 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Mar 18 18:00:53.632322 master-0 kubenswrapper[30278]: I0318 18:00:53.629061 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Mar 18 18:00:53.640454 master-0 kubenswrapper[30278]: I0318 18:00:53.640114 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Mar 18 18:00:53.654379 master-0 kubenswrapper[30278]: I0318 18:00:53.654341 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-web-config\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.654502 master-0 kubenswrapper[30278]: I0318 18:00:53.654408 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.654502 master-0 kubenswrapper[30278]: I0318 18:00:53.654462 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.654502 master-0 kubenswrapper[30278]: I0318 18:00:53.654484 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.654599 master-0 kubenswrapper[30278]: I0318 18:00:53.654515 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.654599 master-0 kubenswrapper[30278]: I0318 18:00:53.654538 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.654599 master-0 kubenswrapper[30278]: I0318 18:00:53.654557 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.654599 master-0 kubenswrapper[30278]: I0318 18:00:53.654579 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.654599 master-0 kubenswrapper[30278]: I0318 18:00:53.654597 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.654743 master-0 kubenswrapper[30278]: I0318 18:00:53.654616 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.654743 master-0 kubenswrapper[30278]: I0318 18:00:53.654642 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.654743 master-0 kubenswrapper[30278]: I0318 18:00:53.654658 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.654743 master-0 kubenswrapper[30278]: I0318 18:00:53.654680 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-config\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.654743 master-0 kubenswrapper[30278]: I0318 18:00:53.654710 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg62n\" (UniqueName: \"kubernetes.io/projected/5c6aeb7b-9c05-470e-b31f-f4154aadf170-kube-api-access-wg62n\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.654743 master-0 kubenswrapper[30278]: I0318 18:00:53.654743 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c6aeb7b-9c05-470e-b31f-f4154aadf170-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:00:53.654977 master-0 kubenswrapper[30278]: I0318 18:00:53.654764 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\")
" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.654977 master-0 kubenswrapper[30278]: I0318 18:00:53.654785 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.654977 master-0 kubenswrapper[30278]: I0318 18:00:53.654804 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c6aeb7b-9c05-470e-b31f-f4154aadf170-config-out\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.666297 master-0 kubenswrapper[30278]: I0318 18:00:53.665268 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 18:00:53.731755 master-0 kubenswrapper[30278]: I0318 18:00:53.731695 30278 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 18:00:53.731969 master-0 kubenswrapper[30278]: I0318 18:00:53.731930 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor" containerID="cri-o://25f0059cb7f28e57d54587af9a075f46b53e453c6a901d45bc7aae8b1f8557d8" gracePeriod=5 Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756192 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-thanos-sidecar-tls\") pod 
\"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756260 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756292 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756317 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756335 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756352 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756369 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756384 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756400 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756420 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756434 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756453 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-config\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756478 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg62n\" (UniqueName: \"kubernetes.io/projected/5c6aeb7b-9c05-470e-b31f-f4154aadf170-kube-api-access-wg62n\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756504 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c6aeb7b-9c05-470e-b31f-f4154aadf170-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756521 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756536 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756553 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c6aeb7b-9c05-470e-b31f-f4154aadf170-config-out\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: I0318 18:00:53.756574 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-web-config\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.757295 master-0 kubenswrapper[30278]: E0318 18:00:53.757216 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle podName:5c6aeb7b-9c05-470e-b31f-f4154aadf170 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:54.25719447 +0000 UTC m=+23.424379065 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170") : configmap references non-existent config key: ca-bundle.crt Mar 18 18:00:53.760253 master-0 kubenswrapper[30278]: I0318 18:00:53.760044 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-web-config\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.760253 master-0 kubenswrapper[30278]: I0318 18:00:53.760176 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.761848 master-0 kubenswrapper[30278]: I0318 18:00:53.761446 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.770647 master-0 kubenswrapper[30278]: I0318 18:00:53.763003 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.770647 master-0 
kubenswrapper[30278]: I0318 18:00:53.763064 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.770647 master-0 kubenswrapper[30278]: I0318 18:00:53.763255 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c6aeb7b-9c05-470e-b31f-f4154aadf170-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.770647 master-0 kubenswrapper[30278]: I0318 18:00:53.763415 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.770647 master-0 kubenswrapper[30278]: I0318 18:00:53.763946 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.770647 master-0 kubenswrapper[30278]: I0318 18:00:53.764105 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 
18:00:53.770647 master-0 kubenswrapper[30278]: I0318 18:00:53.764536 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c6aeb7b-9c05-470e-b31f-f4154aadf170-config-out\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.770647 master-0 kubenswrapper[30278]: I0318 18:00:53.764983 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-config\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.770647 master-0 kubenswrapper[30278]: I0318 18:00:53.766252 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.772848 master-0 kubenswrapper[30278]: I0318 18:00:53.770980 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.772848 master-0 kubenswrapper[30278]: I0318 18:00:53.772606 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.773370 master-0 kubenswrapper[30278]: I0318 
18:00:53.773334 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.773682 master-0 kubenswrapper[30278]: I0318 18:00:53.773661 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.777787 master-0 kubenswrapper[30278]: I0318 18:00:53.777751 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg62n\" (UniqueName: \"kubernetes.io/projected/5c6aeb7b-9c05-470e-b31f-f4154aadf170-kube-api-access-wg62n\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:53.798365 master-0 kubenswrapper[30278]: I0318 18:00:53.798259 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft" event={"ID":"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55","Type":"ContainerStarted","Data":"28c60a098dae93202d03f971b2e0a699a89082d1926e930e0f5d28bbf7568a99"} Mar 18 18:00:53.799739 master-0 kubenswrapper[30278]: I0318 18:00:53.799694 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8" event={"ID":"6f89981d-e643-4015-8af6-5e7582182466","Type":"ContainerStarted","Data":"3eac99d9439632617528bbe5b7144d3425a33977854dd665ca6f926bd9a32ebb"} Mar 18 18:00:53.801705 master-0 kubenswrapper[30278]: I0318 18:00:53.801643 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t" event={"ID":"2ee860d7-4262-43d7-aeb2-b77040a69133","Type":"ContainerStarted","Data":"f9a67081112f96ede907ae4b773d2f5b726675756e7d0dd301699a6340fca6b9"} Mar 18 18:00:53.801705 master-0 kubenswrapper[30278]: I0318 18:00:53.801667 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t" event={"ID":"2ee860d7-4262-43d7-aeb2-b77040a69133","Type":"ContainerStarted","Data":"33efff990df5007dce68695192ba4422ded6653d3a0d9838e50a069b804b4b9f"} Mar 18 18:00:53.801705 master-0 kubenswrapper[30278]: I0318 18:00:53.801677 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t" event={"ID":"2ee860d7-4262-43d7-aeb2-b77040a69133","Type":"ContainerStarted","Data":"361acc8b3cf91ece67dd859d45d4124a56d42517f09ba83cc14659cdf364dab8"} Mar 18 18:00:53.805467 master-0 kubenswrapper[30278]: I0318 18:00:53.805126 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-djgn7" event={"ID":"04cef0bd-f365-4bf6-864a-1895995015d6","Type":"ContainerStarted","Data":"f161faff06e68b035d191677a93c2082bcfff856c18c5e937521293ab0589f02"} Mar 18 18:00:53.811382 master-0 kubenswrapper[30278]: I0318 18:00:53.811331 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-6x52p" event={"ID":"2d21e77e-8b61-4f03-8f17-941b7a1d8b1d","Type":"ContainerStarted","Data":"702640d31056cebfd9743d53fa7bf0e61115bde7921594c9cbee8bb941f1d1b0"} Mar 18 18:00:53.816129 master-0 kubenswrapper[30278]: I0318 18:00:53.816093 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5" event={"ID":"5876677a-9e8a-4625-af71-833b259a1596","Type":"ContainerStarted","Data":"6933f435d3e00896cd152628fe51af8ae612f5c95000827220649407f9cae916"} Mar 18 
18:00:53.817735 master-0 kubenswrapper[30278]: I0318 18:00:53.817416 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v28rj" event={"ID":"1674d0a4-8c16-4535-ac1e-e3220ef50e57","Type":"ContainerStarted","Data":"129e665374520e86d9a484abb6d802b56529bdea72db51d4d4c7e3ae23ea5c3f"} Mar 18 18:00:54.268131 master-0 kubenswrapper[30278]: I0318 18:00:54.268007 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:54.268338 master-0 kubenswrapper[30278]: E0318 18:00:54.268208 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle podName:5c6aeb7b-9c05-470e-b31f-f4154aadf170 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:55.268191051 +0000 UTC m=+24.435375646 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170") : configmap references non-existent config key: ca-bundle.crt Mar 18 18:00:55.283730 master-0 kubenswrapper[30278]: I0318 18:00:55.283671 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:55.284220 master-0 kubenswrapper[30278]: E0318 18:00:55.283943 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle podName:5c6aeb7b-9c05-470e-b31f-f4154aadf170 nodeName:}" failed. No retries permitted until 2026-03-18 18:00:57.283915614 +0000 UTC m=+26.451100209 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170") : configmap references non-existent config key: ca-bundle.crt Mar 18 18:00:57.013303 master-0 kubenswrapper[30278]: I0318 18:00:57.013248 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:00:57.013702 master-0 kubenswrapper[30278]: E0318 18:00:57.013568 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle podName:89b1dfdf-4633-45af-8abd-931a76eca960 nodeName:}" failed. No retries permitted until 2026-03-18 18:01:05.013527164 +0000 UTC m=+34.180711759 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960") : configmap references non-existent config key: ca-bundle.crt Mar 18 18:00:57.319259 master-0 kubenswrapper[30278]: I0318 18:00:57.318739 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:00:57.319259 master-0 kubenswrapper[30278]: E0318 18:00:57.319050 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle podName:5c6aeb7b-9c05-470e-b31f-f4154aadf170 nodeName:}" failed. No retries permitted until 2026-03-18 18:01:01.31903174 +0000 UTC m=+30.486216335 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170") : configmap references non-existent config key: ca-bundle.crt Mar 18 18:00:57.847987 master-0 kubenswrapper[30278]: I0318 18:00:57.847924 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8" event={"ID":"6f89981d-e643-4015-8af6-5e7582182466","Type":"ContainerStarted","Data":"4d79cf6114a76ecf0e17186a312dea4a3b6355a66c096c71bc629480d770eb06"} Mar 18 18:00:57.851209 master-0 kubenswrapper[30278]: I0318 18:00:57.851156 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t" event={"ID":"2ee860d7-4262-43d7-aeb2-b77040a69133","Type":"ContainerStarted","Data":"03e1dc730c5689b90835b53d032579e7daae6082e4d6122717e610cb79ba8bde"} Mar 18 18:00:57.853861 master-0 kubenswrapper[30278]: I0318 18:00:57.853801 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5" event={"ID":"5876677a-9e8a-4625-af71-833b259a1596","Type":"ContainerStarted","Data":"ab50aa8748b7e9b90b3f7e4e8b9afc580b1e40b3fee9e864d3920a94a68c0af2"} Mar 18 18:00:57.853861 master-0 kubenswrapper[30278]: I0318 18:00:57.853861 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5" event={"ID":"5876677a-9e8a-4625-af71-833b259a1596","Type":"ContainerStarted","Data":"36b3e57ba7fb59a4480ba71893022d713b2ce39c1247dd350870cfa47df62079"} Mar 18 18:00:57.853999 master-0 kubenswrapper[30278]: I0318 18:00:57.853879 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5" 
event={"ID":"5876677a-9e8a-4625-af71-833b259a1596","Type":"ContainerStarted","Data":"df25fa35e3a359be06730e285b2e50a4174b984344e8e63b50bcc680e63ed19b"}
Mar 18 18:00:57.855234 master-0 kubenswrapper[30278]: I0318 18:00:57.855179 30278 generic.go:334] "Generic (PLEG): container finished" podID="1674d0a4-8c16-4535-ac1e-e3220ef50e57" containerID="e533e58932beafad5c915af1db100d42287ef893419695bd685f90f123980a83" exitCode=0
Mar 18 18:00:57.855468 master-0 kubenswrapper[30278]: I0318 18:00:57.855432 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v28rj" event={"ID":"1674d0a4-8c16-4535-ac1e-e3220ef50e57","Type":"ContainerDied","Data":"e533e58932beafad5c915af1db100d42287ef893419695bd685f90f123980a83"}
Mar 18 18:00:57.857358 master-0 kubenswrapper[30278]: I0318 18:00:57.857316 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft" event={"ID":"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55","Type":"ContainerStarted","Data":"3bebb5665f5cce54ed72e70c30b5977bc6f53cdf24965eff664389408ab3990f"}
Mar 18 18:00:57.857443 master-0 kubenswrapper[30278]: I0318 18:00:57.857362 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft" event={"ID":"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55","Type":"ContainerStarted","Data":"dd9ce64646a6aa589a94c213f34f36101158d9ad229647c9271e3ae9e348ab68"}
Mar 18 18:00:57.857443 master-0 kubenswrapper[30278]: I0318 18:00:57.857374 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft" event={"ID":"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55","Type":"ContainerStarted","Data":"2a0c70fb5f9b1ca8f86170940adf2865f9f88a3a4e0cea5b4736421e98ac17ec"}
Mar 18 18:00:57.881871 master-0 kubenswrapper[30278]: I0318 18:00:57.881787 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8" podStartSLOduration=3.277720511 podStartE2EDuration="6.881765613s" podCreationTimestamp="2026-03-18 18:00:51 +0000 UTC" firstStartedPulling="2026-03-18 18:00:53.358516142 +0000 UTC m=+22.525700737" lastFinishedPulling="2026-03-18 18:00:56.962561244 +0000 UTC m=+26.129745839" observedRunningTime="2026-03-18 18:00:57.880195171 +0000 UTC m=+27.047379816" watchObservedRunningTime="2026-03-18 18:00:57.881765613 +0000 UTC m=+27.048950208"
Mar 18 18:00:57.975541 master-0 kubenswrapper[30278]: I0318 18:00:57.975466 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-7bbc969446-72wb5" podStartSLOduration=9.393064559 podStartE2EDuration="12.97544613s" podCreationTimestamp="2026-03-18 18:00:45 +0000 UTC" firstStartedPulling="2026-03-18 18:00:53.377762267 +0000 UTC m=+22.544946862" lastFinishedPulling="2026-03-18 18:00:56.960143838 +0000 UTC m=+26.127328433" observedRunningTime="2026-03-18 18:00:57.95125893 +0000 UTC m=+27.118443535" watchObservedRunningTime="2026-03-18 18:00:57.97544613 +0000 UTC m=+27.142630735"
Mar 18 18:00:57.979527 master-0 kubenswrapper[30278]: I0318 18:00:57.979438 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-smd8t" podStartSLOduration=9.614952602 podStartE2EDuration="12.979410438s" podCreationTimestamp="2026-03-18 18:00:45 +0000 UTC" firstStartedPulling="2026-03-18 18:00:53.594930312 +0000 UTC m=+22.762114907" lastFinishedPulling="2026-03-18 18:00:56.959388148 +0000 UTC m=+26.126572743" observedRunningTime="2026-03-18 18:00:57.976235291 +0000 UTC m=+27.143419906" watchObservedRunningTime="2026-03-18 18:00:57.979410438 +0000 UTC m=+27.146595073"
Mar 18 18:00:58.868722 master-0 kubenswrapper[30278]: I0318 18:00:58.868605 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v28rj" event={"ID":"1674d0a4-8c16-4535-ac1e-e3220ef50e57","Type":"ContainerStarted","Data":"f4b52c54794d9e1b566691110c40d06d8494c5da03243cac8527005fb9524344"}
Mar 18 18:00:58.868722 master-0 kubenswrapper[30278]: I0318 18:00:58.868698 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-v28rj" event={"ID":"1674d0a4-8c16-4535-ac1e-e3220ef50e57","Type":"ContainerStarted","Data":"9c293795a89aa4ce01144962c1517800874a928ec737d832cdef513fcc624600"}
Mar 18 18:00:58.877433 master-0 kubenswrapper[30278]: I0318 18:00:58.877262 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log"
Mar 18 18:00:58.877433 master-0 kubenswrapper[30278]: I0318 18:00:58.877397 30278 generic.go:334] "Generic (PLEG): container finished" podID="8e7a82869988463543d3d8dd1f0b5fe3" containerID="25f0059cb7f28e57d54587af9a075f46b53e453c6a901d45bc7aae8b1f8557d8" exitCode=137
Mar 18 18:00:58.902536 master-0 kubenswrapper[30278]: I0318 18:00:58.902080 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-v28rj" podStartSLOduration=9.735583183 podStartE2EDuration="13.9020509s" podCreationTimestamp="2026-03-18 18:00:45 +0000 UTC" firstStartedPulling="2026-03-18 18:00:52.792944551 +0000 UTC m=+21.960129156" lastFinishedPulling="2026-03-18 18:00:56.959412248 +0000 UTC m=+26.126596873" observedRunningTime="2026-03-18 18:00:58.898091443 +0000 UTC m=+28.065276038" watchObservedRunningTime="2026-03-18 18:00:58.9020509 +0000 UTC m=+28.069235505"
Mar 18 18:00:59.261806 master-0 kubenswrapper[30278]: I0318 18:00:59.261746 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:59.262211 master-0 kubenswrapper[30278]: I0318 18:00:59.261989 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 18:00:59.286684 master-0 kubenswrapper[30278]: I0318 18:00:59.286634 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5l4qp"
Mar 18 18:00:59.475372 master-0 kubenswrapper[30278]: I0318 18:00:59.475136 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log"
Mar 18 18:00:59.475372 master-0 kubenswrapper[30278]: I0318 18:00:59.475250 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:00:59.663394 master-0 kubenswrapper[30278]: I0318 18:00:59.663359 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") "
Mar 18 18:00:59.663636 master-0 kubenswrapper[30278]: I0318 18:00:59.663623 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") "
Mar 18 18:00:59.663742 master-0 kubenswrapper[30278]: I0318 18:00:59.663730 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") "
Mar 18 18:00:59.663826 master-0 kubenswrapper[30278]: I0318 18:00:59.663607 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:00:59.663889 master-0 kubenswrapper[30278]: I0318 18:00:59.663658 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock" (OuterVolumeSpecName: "var-lock") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:00:59.663889 master-0 kubenswrapper[30278]: I0318 18:00:59.663803 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log" (OuterVolumeSpecName: "var-log") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:00:59.663947 master-0 kubenswrapper[30278]: I0318 18:00:59.663918 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests" (OuterVolumeSpecName: "manifests") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:00:59.664006 master-0 kubenswrapper[30278]: I0318 18:00:59.663993 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") "
Mar 18 18:00:59.664088 master-0 kubenswrapper[30278]: I0318 18:00:59.664076 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") "
Mar 18 18:00:59.664365 master-0 kubenswrapper[30278]: I0318 18:00:59.664351 30278 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 18:00:59.665146 master-0 kubenswrapper[30278]: I0318 18:00:59.665133 30278 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 18:00:59.665225 master-0 kubenswrapper[30278]: I0318 18:00:59.665215 30278 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") on node \"master-0\" DevicePath \"\""
Mar 18 18:00:59.665388 master-0 kubenswrapper[30278]: I0318 18:00:59.665378 30278 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") on node \"master-0\" DevicePath \"\""
Mar 18 18:00:59.669227 master-0 kubenswrapper[30278]: I0318 18:00:59.669062 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:00:59.767174 master-0 kubenswrapper[30278]: I0318 18:00:59.767083 30278 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 18:00:59.889362 master-0 kubenswrapper[30278]: I0318 18:00:59.889319 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log"
Mar 18 18:00:59.889841 master-0 kubenswrapper[30278]: I0318 18:00:59.889477 30278 scope.go:117] "RemoveContainer" containerID="25f0059cb7f28e57d54587af9a075f46b53e453c6a901d45bc7aae8b1f8557d8"
Mar 18 18:00:59.889841 master-0 kubenswrapper[30278]: I0318 18:00:59.889489 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:00:59.895564 master-0 kubenswrapper[30278]: I0318 18:00:59.895520 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft" event={"ID":"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55","Type":"ContainerStarted","Data":"4739185db49f4e951510f8cd68519490a4c5133d5b23476921da7c7dfad6c6a1"}
Mar 18 18:00:59.895650 master-0 kubenswrapper[30278]: I0318 18:00:59.895579 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft" event={"ID":"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55","Type":"ContainerStarted","Data":"5d9e1aa75081531313366a36baef1223dad04318d9f3c76228d6884453750aaf"}
Mar 18 18:00:59.954406 master-0 kubenswrapper[30278]: I0318 18:00:59.954338 30278 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="6040f971-c2af-4c8a-87dd-360e0ec47faf"
Mar 18 18:01:00.909929 master-0 kubenswrapper[30278]: I0318 18:01:00.909862 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft" event={"ID":"b0f7a4e5-c29e-43aa-8c76-b342e5abcc55","Type":"ContainerStarted","Data":"88c200e55e1cf6bd6a6ff018bb14bfd4049e9e9ea3a58c2136bf18c37b74edab"}
Mar 18 18:01:00.956171 master-0 kubenswrapper[30278]: I0318 18:01:00.954187 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft" podStartSLOduration=5.844818174 podStartE2EDuration="11.95415819s" podCreationTimestamp="2026-03-18 18:00:49 +0000 UTC" firstStartedPulling="2026-03-18 18:00:53.349497026 +0000 UTC m=+22.516681621" lastFinishedPulling="2026-03-18 18:00:59.458837042 +0000 UTC m=+28.626021637" observedRunningTime="2026-03-18 18:01:00.949731619 +0000 UTC m=+30.116916214" watchObservedRunningTime="2026-03-18 18:01:00.95415819 +0000 UTC m=+30.121342785"
Mar 18 18:01:01.064625 master-0 kubenswrapper[30278]: I0318 18:01:01.064551 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e7a82869988463543d3d8dd1f0b5fe3" path="/var/lib/kubelet/pods/8e7a82869988463543d3d8dd1f0b5fe3/volumes"
Mar 18 18:01:01.064979 master-0 kubenswrapper[30278]: I0318 18:01:01.064864 30278 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Mar 18 18:01:01.086802 master-0 kubenswrapper[30278]: I0318 18:01:01.084842 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 18:01:01.086802 master-0 kubenswrapper[30278]: I0318 18:01:01.084909 30278 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="6040f971-c2af-4c8a-87dd-360e0ec47faf"
Mar 18 18:01:01.086802 master-0 kubenswrapper[30278]: I0318 18:01:01.086394 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 18:01:01.086802 master-0 kubenswrapper[30278]: I0318 18:01:01.086418 30278 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="6040f971-c2af-4c8a-87dd-360e0ec47faf"
Mar 18 18:01:01.396703 master-0 kubenswrapper[30278]: I0318 18:01:01.396611 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:01:01.396997 master-0 kubenswrapper[30278]: E0318 18:01:01.396835 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle podName:5c6aeb7b-9c05-470e-b31f-f4154aadf170 nodeName:}" failed. No retries permitted until 2026-03-18 18:01:09.396805117 +0000 UTC m=+38.563989722 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170") : configmap references non-existent config key: ca-bundle.crt
Mar 18 18:01:01.919732 master-0 kubenswrapper[30278]: I0318 18:01:01.919603 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:01:02.938725 master-0 kubenswrapper[30278]: I0318 18:01:02.938382 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-7cb46549d5-gm2ft"
Mar 18 18:01:05.054234 master-0 kubenswrapper[30278]: I0318 18:01:05.054162 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 18:01:05.054783 master-0 kubenswrapper[30278]: E0318 18:01:05.054423 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle podName:89b1dfdf-4633-45af-8abd-931a76eca960 nodeName:}" failed. No retries permitted until 2026-03-18 18:01:21.054390509 +0000 UTC m=+50.221575114 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960") : configmap references non-existent config key: ca-bundle.crt
Mar 18 18:01:07.801205 master-0 kubenswrapper[30278]: I0318 18:01:07.801118 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 18:01:07.802098 master-0 kubenswrapper[30278]: E0318 18:01:07.802057 30278 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 18:01:07.802218 master-0 kubenswrapper[30278]: E0318 18:01:07.802202 30278 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 18:01:07.802420 master-0 kubenswrapper[30278]: E0318 18:01:07.802401 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access podName:4285e80c-1ff9-42b3-9692-9f2ab6b61916 nodeName:}" failed. No retries permitted until 2026-03-18 18:01:39.802375545 +0000 UTC m=+68.969560150 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access") pod "installer-3-master-0" (UID: "4285e80c-1ff9-42b3-9692-9f2ab6b61916") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 18 18:01:09.428244 master-0 kubenswrapper[30278]: I0318 18:01:09.428136 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:01:09.429121 master-0 kubenswrapper[30278]: E0318 18:01:09.428449 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle podName:5c6aeb7b-9c05-470e-b31f-f4154aadf170 nodeName:}" failed. No retries permitted until 2026-03-18 18:01:25.428400609 +0000 UTC m=+54.595585234 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170") : configmap references non-existent config key: ca-bundle.crt
Mar 18 18:01:12.054556 master-0 kubenswrapper[30278]: I0318 18:01:12.054448 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:01:12.054556 master-0 kubenswrapper[30278]: I0318 18:01:12.054536 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:01:17.447043 master-0 kubenswrapper[30278]: I0318 18:01:17.446782 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-559754bf9d-sp5dr"]
Mar 18 18:01:17.448013 master-0 kubenswrapper[30278]: E0318 18:01:17.447997 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor"
Mar 18 18:01:17.448092 master-0 kubenswrapper[30278]: I0318 18:01:17.448082 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor"
Mar 18 18:01:17.448333 master-0 kubenswrapper[30278]: I0318 18:01:17.448321 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor"
Mar 18 18:01:17.448869 master-0 kubenswrapper[30278]: I0318 18:01:17.448848 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.453132 master-0 kubenswrapper[30278]: I0318 18:01:17.452737 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 18 18:01:17.453407 master-0 kubenswrapper[30278]: I0318 18:01:17.453375 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 18 18:01:17.453480 master-0 kubenswrapper[30278]: I0318 18:01:17.453408 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-wftwz"
Mar 18 18:01:17.453986 master-0 kubenswrapper[30278]: I0318 18:01:17.453852 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 18 18:01:17.454063 master-0 kubenswrapper[30278]: I0318 18:01:17.454029 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 18 18:01:17.454108 master-0 kubenswrapper[30278]: I0318 18:01:17.454069 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 18 18:01:17.454543 master-0 kubenswrapper[30278]: I0318 18:01:17.454306 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 18 18:01:17.454543 master-0 kubenswrapper[30278]: I0318 18:01:17.454478 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 18 18:01:17.455702 master-0 kubenswrapper[30278]: I0318 18:01:17.454672 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 18 18:01:17.455702 master-0 kubenswrapper[30278]: I0318 18:01:17.454793 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 18 18:01:17.455702 master-0 kubenswrapper[30278]: I0318 18:01:17.454912 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 18 18:01:17.462005 master-0 kubenswrapper[30278]: I0318 18:01:17.461954 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 18 18:01:17.472980 master-0 kubenswrapper[30278]: I0318 18:01:17.472790 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 18 18:01:17.478600 master-0 kubenswrapper[30278]: I0318 18:01:17.478534 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.478790 master-0 kubenswrapper[30278]: I0318 18:01:17.478755 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-error\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.478861 master-0 kubenswrapper[30278]: I0318 18:01:17.478827 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-audit-policies\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.478957 master-0 kubenswrapper[30278]: I0318 18:01:17.478927 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.479043 master-0 kubenswrapper[30278]: I0318 18:01:17.479012 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.479130 master-0 kubenswrapper[30278]: I0318 18:01:17.479077 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-service-ca\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.479231 master-0 kubenswrapper[30278]: I0318 18:01:17.479193 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-login\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.479323 master-0 kubenswrapper[30278]: I0318 18:01:17.479266 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-router-certs\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.479387 master-0 kubenswrapper[30278]: I0318 18:01:17.479352 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b36712fe-25e1-4259-aca9-33801be51a8c-audit-dir\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.479465 master-0 kubenswrapper[30278]: I0318 18:01:17.479424 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.479550 master-0 kubenswrapper[30278]: I0318 18:01:17.479510 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-session\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.479627 master-0 kubenswrapper[30278]: I0318 18:01:17.479586 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm28b\" (UniqueName: \"kubernetes.io/projected/b36712fe-25e1-4259-aca9-33801be51a8c-kube-api-access-pm28b\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.479705 master-0 kubenswrapper[30278]: I0318 18:01:17.479671 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.491128 master-0 kubenswrapper[30278]: I0318 18:01:17.488232 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-559754bf9d-sp5dr"]
Mar 18 18:01:17.515765 master-0 kubenswrapper[30278]: I0318 18:01:17.512826 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 18 18:01:17.582033 master-0 kubenswrapper[30278]: I0318 18:01:17.581948 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.582344 master-0 kubenswrapper[30278]: I0318 18:01:17.582049 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.582344 master-0 kubenswrapper[30278]: I0318 18:01:17.582082 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-service-ca\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.582344 master-0 kubenswrapper[30278]: I0318 18:01:17.582127 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-login\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.582344 master-0 kubenswrapper[30278]: I0318 18:01:17.582157 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-router-certs\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.582344 master-0 kubenswrapper[30278]: I0318 18:01:17.582186 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b36712fe-25e1-4259-aca9-33801be51a8c-audit-dir\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.582344 master-0 kubenswrapper[30278]: I0318 18:01:17.582215 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.582344 master-0 kubenswrapper[30278]: I0318 18:01:17.582244 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-session\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.582344 master-0 kubenswrapper[30278]: I0318 18:01:17.582286 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm28b\" (UniqueName: \"kubernetes.io/projected/b36712fe-25e1-4259-aca9-33801be51a8c-kube-api-access-pm28b\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.582344 master-0 kubenswrapper[30278]: I0318 18:01:17.582315 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.582740 master-0 kubenswrapper[30278]: I0318 18:01:17.582362 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.582740 master-0 kubenswrapper[30278]: I0318 18:01:17.582438 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-error\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.582740 master-0 kubenswrapper[30278]: I0318 18:01:17.582469 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-audit-policies\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.583477 master-0 kubenswrapper[30278]: I0318 18:01:17.583435 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-audit-policies\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:17.583594 master-0 kubenswrapper[30278]: E0318 18:01:17.583563 30278 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 18 18:01:17.583648 master-0 kubenswrapper[30278]: E0318 18:01:17.583638 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig podName:b36712fe-25e1-4259-aca9-33801be51a8c nodeName:}" failed. No retries permitted until 2026-03-18 18:01:18.083617514 +0000 UTC m=+47.250802129 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig") pod "oauth-openshift-559754bf9d-sp5dr" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c") : configmap "v4-0-config-system-cliconfig" not found
Mar 18 18:01:17.584495 master-0 kubenswrapper[30278]: E0318 18:01:17.584441 30278 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found
Mar 18 18:01:17.584572 master-0 kubenswrapper[30278]: E0318 18:01:17.584547 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-session podName:b36712fe-25e1-4259-aca9-33801be51a8c nodeName:}" failed. No retries permitted until 2026-03-18 18:01:18.084520368 +0000 UTC m=+47.251704973 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-session") pod "oauth-openshift-559754bf9d-sp5dr" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c") : secret "v4-0-config-system-session" not found Mar 18 18:01:17.584627 master-0 kubenswrapper[30278]: I0318 18:01:17.584544 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b36712fe-25e1-4259-aca9-33801be51a8c-audit-dir\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:17.585843 master-0 kubenswrapper[30278]: I0318 18:01:17.585806 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-service-ca\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:17.587943 master-0 kubenswrapper[30278]: I0318 18:01:17.587893 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:17.588033 master-0 kubenswrapper[30278]: I0318 18:01:17.587983 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-login\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: 
\"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:17.589286 master-0 kubenswrapper[30278]: I0318 18:01:17.589197 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:17.589598 master-0 kubenswrapper[30278]: I0318 18:01:17.589558 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-router-certs\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:17.593257 master-0 kubenswrapper[30278]: I0318 18:01:17.590398 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-error\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:17.593257 master-0 kubenswrapper[30278]: I0318 18:01:17.590886 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:17.593257 master-0 kubenswrapper[30278]: I0318 18:01:17.591524 
30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:17.611442 master-0 kubenswrapper[30278]: I0318 18:01:17.611398 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm28b\" (UniqueName: \"kubernetes.io/projected/b36712fe-25e1-4259-aca9-33801be51a8c-kube-api-access-pm28b\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:18.090537 master-0 kubenswrapper[30278]: I0318 18:01:18.090476 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-session\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:18.090848 master-0 kubenswrapper[30278]: I0318 18:01:18.090822 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:18.091068 master-0 kubenswrapper[30278]: E0318 18:01:18.091004 30278 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 18 18:01:18.091187 master-0 
kubenswrapper[30278]: E0318 18:01:18.091151 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig podName:b36712fe-25e1-4259-aca9-33801be51a8c nodeName:}" failed. No retries permitted until 2026-03-18 18:01:19.09111573 +0000 UTC m=+48.258300365 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig") pod "oauth-openshift-559754bf9d-sp5dr" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c") : configmap "v4-0-config-system-cliconfig" not found Mar 18 18:01:18.096151 master-0 kubenswrapper[30278]: I0318 18:01:18.096100 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-session\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:19.107187 master-0 kubenswrapper[30278]: I0318 18:01:19.107086 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:19.107951 master-0 kubenswrapper[30278]: E0318 18:01:19.107263 30278 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 18 18:01:19.107951 master-0 kubenswrapper[30278]: E0318 18:01:19.107385 30278 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig podName:b36712fe-25e1-4259-aca9-33801be51a8c nodeName:}" failed. No retries permitted until 2026-03-18 18:01:21.107367248 +0000 UTC m=+50.274551843 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig") pod "oauth-openshift-559754bf9d-sp5dr" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c") : configmap "v4-0-config-system-cliconfig" not found Mar 18 18:01:21.141678 master-0 kubenswrapper[30278]: I0318 18:01:21.141560 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:21.142620 master-0 kubenswrapper[30278]: I0318 18:01:21.141750 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:01:21.142620 master-0 kubenswrapper[30278]: E0318 18:01:21.141930 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle podName:89b1dfdf-4633-45af-8abd-931a76eca960 nodeName:}" failed. No retries permitted until 2026-03-18 18:01:53.141904473 +0000 UTC m=+82.309089068 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960") : configmap references non-existent config key: ca-bundle.crt Mar 18 18:01:21.142794 master-0 kubenswrapper[30278]: E0318 18:01:21.142650 30278 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 18 18:01:21.142794 master-0 kubenswrapper[30278]: E0318 18:01:21.142787 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig podName:b36712fe-25e1-4259-aca9-33801be51a8c nodeName:}" failed. No retries permitted until 2026-03-18 18:01:25.142751846 +0000 UTC m=+54.309936491 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig") pod "oauth-openshift-559754bf9d-sp5dr" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c") : configmap "v4-0-config-system-cliconfig" not found Mar 18 18:01:23.639913 master-0 kubenswrapper[30278]: I0318 18:01:23.639816 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-559754bf9d-sp5dr"] Mar 18 18:01:23.641100 master-0 kubenswrapper[30278]: E0318 18:01:23.640531 30278 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[v4-0-config-system-cliconfig], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" podUID="b36712fe-25e1-4259-aca9-33801be51a8c" Mar 18 18:01:24.095633 master-0 kubenswrapper[30278]: I0318 18:01:24.095581 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:24.104358 master-0 kubenswrapper[30278]: I0318 18:01:24.104302 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr" Mar 18 18:01:24.214408 master-0 kubenswrapper[30278]: I0318 18:01:24.214355 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-serving-cert\") pod \"b36712fe-25e1-4259-aca9-33801be51a8c\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " Mar 18 18:01:24.214665 master-0 kubenswrapper[30278]: I0318 18:01:24.214443 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-audit-policies\") pod \"b36712fe-25e1-4259-aca9-33801be51a8c\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " Mar 18 18:01:24.214665 master-0 kubenswrapper[30278]: I0318 18:01:24.214467 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-login\") pod \"b36712fe-25e1-4259-aca9-33801be51a8c\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " Mar 18 18:01:24.214665 master-0 kubenswrapper[30278]: I0318 18:01:24.214515 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm28b\" (UniqueName: \"kubernetes.io/projected/b36712fe-25e1-4259-aca9-33801be51a8c-kube-api-access-pm28b\") pod \"b36712fe-25e1-4259-aca9-33801be51a8c\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " Mar 18 18:01:24.214665 master-0 kubenswrapper[30278]: I0318 18:01:24.214557 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-error\") pod \"b36712fe-25e1-4259-aca9-33801be51a8c\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " Mar 18 18:01:24.214665 master-0 kubenswrapper[30278]: I0318 18:01:24.214597 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-ocp-branding-template\") pod \"b36712fe-25e1-4259-aca9-33801be51a8c\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " Mar 18 18:01:24.214665 master-0 kubenswrapper[30278]: I0318 18:01:24.214645 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-router-certs\") pod \"b36712fe-25e1-4259-aca9-33801be51a8c\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " Mar 18 18:01:24.214665 master-0 kubenswrapper[30278]: I0318 18:01:24.214668 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b36712fe-25e1-4259-aca9-33801be51a8c-audit-dir\") pod \"b36712fe-25e1-4259-aca9-33801be51a8c\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " Mar 18 18:01:24.214966 master-0 kubenswrapper[30278]: I0318 18:01:24.214688 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-session\") pod \"b36712fe-25e1-4259-aca9-33801be51a8c\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " Mar 18 18:01:24.214966 master-0 kubenswrapper[30278]: I0318 18:01:24.214709 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-service-ca\") pod \"b36712fe-25e1-4259-aca9-33801be51a8c\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " Mar 18 18:01:24.214966 master-0 kubenswrapper[30278]: I0318 18:01:24.214737 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-provider-selection\") pod \"b36712fe-25e1-4259-aca9-33801be51a8c\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " Mar 18 18:01:24.214966 master-0 kubenswrapper[30278]: I0318 18:01:24.214784 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-trusted-ca-bundle\") pod \"b36712fe-25e1-4259-aca9-33801be51a8c\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " Mar 18 18:01:24.214966 master-0 kubenswrapper[30278]: I0318 18:01:24.214892 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b36712fe-25e1-4259-aca9-33801be51a8c" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:01:24.215188 master-0 kubenswrapper[30278]: I0318 18:01:24.215136 30278 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 18 18:01:24.215188 master-0 kubenswrapper[30278]: I0318 18:01:24.215173 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b36712fe-25e1-4259-aca9-33801be51a8c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b36712fe-25e1-4259-aca9-33801be51a8c" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:01:24.215299 master-0 kubenswrapper[30278]: I0318 18:01:24.215242 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b36712fe-25e1-4259-aca9-33801be51a8c" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:01:24.215901 master-0 kubenswrapper[30278]: I0318 18:01:24.215813 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b36712fe-25e1-4259-aca9-33801be51a8c" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:01:24.217747 master-0 kubenswrapper[30278]: I0318 18:01:24.217688 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b36712fe-25e1-4259-aca9-33801be51a8c" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:01:24.221476 master-0 kubenswrapper[30278]: I0318 18:01:24.221425 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b36712fe-25e1-4259-aca9-33801be51a8c" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:01:24.221476 master-0 kubenswrapper[30278]: I0318 18:01:24.221458 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36712fe-25e1-4259-aca9-33801be51a8c-kube-api-access-pm28b" (OuterVolumeSpecName: "kube-api-access-pm28b") pod "b36712fe-25e1-4259-aca9-33801be51a8c" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c"). InnerVolumeSpecName "kube-api-access-pm28b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:01:24.221590 master-0 kubenswrapper[30278]: I0318 18:01:24.221484 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b36712fe-25e1-4259-aca9-33801be51a8c" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:01:24.221590 master-0 kubenswrapper[30278]: I0318 18:01:24.221503 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b36712fe-25e1-4259-aca9-33801be51a8c" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:01:24.221590 master-0 kubenswrapper[30278]: I0318 18:01:24.221543 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b36712fe-25e1-4259-aca9-33801be51a8c" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:01:24.221762 master-0 kubenswrapper[30278]: I0318 18:01:24.221582 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b36712fe-25e1-4259-aca9-33801be51a8c" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:01:24.221762 master-0 kubenswrapper[30278]: I0318 18:01:24.221643 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b36712fe-25e1-4259-aca9-33801be51a8c" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:01:24.317082 master-0 kubenswrapper[30278]: I0318 18:01:24.317007 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 18 18:01:24.317082 master-0 kubenswrapper[30278]: I0318 18:01:24.317050 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm28b\" (UniqueName: \"kubernetes.io/projected/b36712fe-25e1-4259-aca9-33801be51a8c-kube-api-access-pm28b\") on node \"master-0\" DevicePath \"\"" Mar 18 18:01:24.317082 master-0 kubenswrapper[30278]: I0318 18:01:24.317063 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 18 18:01:24.317082 master-0 kubenswrapper[30278]: I0318 18:01:24.317073 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 18 18:01:24.317082 master-0 kubenswrapper[30278]: I0318 18:01:24.317085 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:01:24.317082 master-0 kubenswrapper[30278]: I0318 18:01:24.317094 30278 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b36712fe-25e1-4259-aca9-33801be51a8c-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:01:24.317082 master-0 kubenswrapper[30278]: I0318 18:01:24.317106 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 18 18:01:24.317082 master-0 kubenswrapper[30278]: I0318 18:01:24.317116 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 18:01:24.318380 master-0 kubenswrapper[30278]: I0318 18:01:24.317126 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 18 18:01:24.318380 master-0 kubenswrapper[30278]: I0318 18:01:24.317136 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:01:24.318380 master-0 kubenswrapper[30278]: I0318 18:01:24.317145 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 18:01:25.106672 master-0 kubenswrapper[30278]: I0318 18:01:25.106605 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:25.173580 master-0 kubenswrapper[30278]: I0318 18:01:25.173486 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-559754bf9d-sp5dr"]
Mar 18 18:01:25.193640 master-0 kubenswrapper[30278]: I0318 18:01:25.191261 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"]
Mar 18 18:01:25.206256 master-0 kubenswrapper[30278]: I0318 18:01:25.205823 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.206256 master-0 kubenswrapper[30278]: I0318 18:01:25.206071 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-559754bf9d-sp5dr"]
Mar 18 18:01:25.212422 master-0 kubenswrapper[30278]: I0318 18:01:25.212343 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"]
Mar 18 18:01:25.221045 master-0 kubenswrapper[30278]: I0318 18:01:25.220654 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 18 18:01:25.221305 master-0 kubenswrapper[30278]: I0318 18:01:25.221079 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 18 18:01:25.221305 master-0 kubenswrapper[30278]: I0318 18:01:25.221141 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 18 18:01:25.221305 master-0 kubenswrapper[30278]: I0318 18:01:25.221159 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 18 18:01:25.221305 master-0 kubenswrapper[30278]: I0318 18:01:25.221073 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 18 18:01:25.221621 master-0 kubenswrapper[30278]: I0318 18:01:25.221172 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-wftwz"
Mar 18 18:01:25.221621 master-0 kubenswrapper[30278]: I0318 18:01:25.221419 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 18 18:01:25.221621 master-0 kubenswrapper[30278]: I0318 18:01:25.221458 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 18 18:01:25.223721 master-0 kubenswrapper[30278]: I0318 18:01:25.223680 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 18 18:01:25.224068 master-0 kubenswrapper[30278]: I0318 18:01:25.223751 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 18 18:01:25.224068 master-0 kubenswrapper[30278]: I0318 18:01:25.223899 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 18 18:01:25.224380 master-0 kubenswrapper[30278]: I0318 18:01:25.224082 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 18 18:01:25.230048 master-0 kubenswrapper[30278]: I0318 18:01:25.229991 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:25.231062 master-0 kubenswrapper[30278]: I0318 18:01:25.231021 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-559754bf9d-sp5dr\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") " pod="openshift-authentication/oauth-openshift-559754bf9d-sp5dr"
Mar 18 18:01:25.234494 master-0 kubenswrapper[30278]: I0318 18:01:25.234455 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 18 18:01:25.242607 master-0 kubenswrapper[30278]: I0318 18:01:25.242563 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 18 18:01:25.331477 master-0 kubenswrapper[30278]: I0318 18:01:25.331423 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig\") pod \"b36712fe-25e1-4259-aca9-33801be51a8c\" (UID: \"b36712fe-25e1-4259-aca9-33801be51a8c\") "
Mar 18 18:01:25.331951 master-0 kubenswrapper[30278]: I0318 18:01:25.331910 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b36712fe-25e1-4259-aca9-33801be51a8c" (UID: "b36712fe-25e1-4259-aca9-33801be51a8c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:01:25.332162 master-0 kubenswrapper[30278]: I0318 18:01:25.332126 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.332398 master-0 kubenswrapper[30278]: I0318 18:01:25.332365 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.332586 master-0 kubenswrapper[30278]: I0318 18:01:25.332558 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-router-certs\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.332806 master-0 kubenswrapper[30278]: I0318 18:01:25.332778 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.333049 master-0 kubenswrapper[30278]: I0318 18:01:25.333002 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd4cp\" (UniqueName: \"kubernetes.io/projected/196136a4-31b2-484c-957a-49a994d9ca0d-kube-api-access-dd4cp\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.333164 master-0 kubenswrapper[30278]: I0318 18:01:25.333147 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-login\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.333238 master-0 kubenswrapper[30278]: I0318 18:01:25.333188 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.333434 master-0 kubenswrapper[30278]: I0318 18:01:25.333266 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/196136a4-31b2-484c-957a-49a994d9ca0d-audit-dir\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.333535 master-0 kubenswrapper[30278]: I0318 18:01:25.333446 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-audit-policies\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.333535 master-0 kubenswrapper[30278]: I0318 18:01:25.333468 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-error\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.333673 master-0 kubenswrapper[30278]: I0318 18:01:25.333543 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-service-ca\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.333673 master-0 kubenswrapper[30278]: I0318 18:01:25.333613 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-session\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.333806 master-0 kubenswrapper[30278]: I0318 18:01:25.333674 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.333806 master-0 kubenswrapper[30278]: I0318 18:01:25.333758 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b36712fe-25e1-4259-aca9-33801be51a8c-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\""
Mar 18 18:01:25.437183 master-0 kubenswrapper[30278]: I0318 18:01:25.436158 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-router-certs\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.437183 master-0 kubenswrapper[30278]: I0318 18:01:25.436398 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.437183 master-0 kubenswrapper[30278]: I0318 18:01:25.436459 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd4cp\" (UniqueName: \"kubernetes.io/projected/196136a4-31b2-484c-957a-49a994d9ca0d-kube-api-access-dd4cp\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.437183 master-0 kubenswrapper[30278]: I0318 18:01:25.436690 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-login\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.437183 master-0 kubenswrapper[30278]: I0318 18:01:25.436758 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.437183 master-0 kubenswrapper[30278]: I0318 18:01:25.436839 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 18:01:25.437183 master-0 kubenswrapper[30278]: I0318 18:01:25.436918 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/196136a4-31b2-484c-957a-49a994d9ca0d-audit-dir\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.437183 master-0 kubenswrapper[30278]: I0318 18:01:25.436972 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-audit-policies\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.437183 master-0 kubenswrapper[30278]: I0318 18:01:25.437031 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-error\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.437183 master-0 kubenswrapper[30278]: I0318 18:01:25.437148 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-service-ca\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.438369 master-0 kubenswrapper[30278]: I0318 18:01:25.437243 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-session\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.438369 master-0 kubenswrapper[30278]: I0318 18:01:25.437383 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.438369 master-0 kubenswrapper[30278]: I0318 18:01:25.437470 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.438369 master-0 kubenswrapper[30278]: I0318 18:01:25.437520 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.438854 master-0 kubenswrapper[30278]: I0318 18:01:25.438792 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.439359 master-0 kubenswrapper[30278]: I0318 18:01:25.439263 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-audit-policies\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.439514 master-0 kubenswrapper[30278]: E0318 18:01:25.439481 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle podName:5c6aeb7b-9c05-470e-b31f-f4154aadf170 nodeName:}" failed. No retries permitted until 2026-03-18 18:01:57.439451251 +0000 UTC m=+86.606635876 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170") : configmap references non-existent config key: ca-bundle.crt
Mar 18 18:01:25.440040 master-0 kubenswrapper[30278]: I0318 18:01:25.439990 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/196136a4-31b2-484c-957a-49a994d9ca0d-audit-dir\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.440550 master-0 kubenswrapper[30278]: I0318 18:01:25.440468 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-router-certs\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.440994 master-0 kubenswrapper[30278]: I0318 18:01:25.440937 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.441920 master-0 kubenswrapper[30278]: I0318 18:01:25.441887 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-service-ca\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.445649 master-0 kubenswrapper[30278]: I0318 18:01:25.445013 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.446026 master-0 kubenswrapper[30278]: I0318 18:01:25.445973 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.449598 master-0 kubenswrapper[30278]: I0318 18:01:25.449515 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-error\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.449781 master-0 kubenswrapper[30278]: I0318 18:01:25.449726 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-login\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.451518 master-0 kubenswrapper[30278]: I0318 18:01:25.451438 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-session\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.451806 master-0 kubenswrapper[30278]: I0318 18:01:25.451643 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.474329 master-0 kubenswrapper[30278]: I0318 18:01:25.472889 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd4cp\" (UniqueName: \"kubernetes.io/projected/196136a4-31b2-484c-957a-49a994d9ca0d-kube-api-access-dd4cp\") pod \"oauth-openshift-596ffdf9db-g7vtf\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:25.532936 master-0 kubenswrapper[30278]: I0318 18:01:25.532870 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:26.162828 master-0 kubenswrapper[30278]: I0318 18:01:26.162750 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"]
Mar 18 18:01:26.175472 master-0 kubenswrapper[30278]: W0318 18:01:26.175416 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196136a4_31b2_484c_957a_49a994d9ca0d.slice/crio-32ab7d9ca4f7cadb5d46c27011006bbc2873365f5105837143d3e4db539365f6 WatchSource:0}: Error finding container 32ab7d9ca4f7cadb5d46c27011006bbc2873365f5105837143d3e4db539365f6: Status 404 returned error can't find the container with id 32ab7d9ca4f7cadb5d46c27011006bbc2873365f5105837143d3e4db539365f6
Mar 18 18:01:27.069697 master-0 kubenswrapper[30278]: I0318 18:01:27.069639 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b36712fe-25e1-4259-aca9-33801be51a8c" path="/var/lib/kubelet/pods/b36712fe-25e1-4259-aca9-33801be51a8c/volumes"
Mar 18 18:01:27.126613 master-0 kubenswrapper[30278]: I0318 18:01:27.126523 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf" event={"ID":"196136a4-31b2-484c-957a-49a994d9ca0d","Type":"ContainerStarted","Data":"32ab7d9ca4f7cadb5d46c27011006bbc2873365f5105837143d3e4db539365f6"}
Mar 18 18:01:28.668511 master-0 kubenswrapper[30278]: I0318 18:01:28.668421 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-jbs9f"]
Mar 18 18:01:28.671530 master-0 kubenswrapper[30278]: I0318 18:01:28.669950 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jbs9f"
Mar 18 18:01:28.678182 master-0 kubenswrapper[30278]: I0318 18:01:28.678132 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2dddk"
Mar 18 18:01:28.679052 master-0 kubenswrapper[30278]: I0318 18:01:28.678669 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 18 18:01:28.679182 master-0 kubenswrapper[30278]: I0318 18:01:28.679160 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 18 18:01:28.679477 master-0 kubenswrapper[30278]: I0318 18:01:28.679435 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 18 18:01:28.684788 master-0 kubenswrapper[30278]: I0318 18:01:28.684075 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jbs9f"]
Mar 18 18:01:28.798327 master-0 kubenswrapper[30278]: I0318 18:01:28.797143 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a322ca7f-9095-4b43-96ff-ac8a637fae27-cert\") pod \"ingress-canary-jbs9f\" (UID: \"a322ca7f-9095-4b43-96ff-ac8a637fae27\") " pod="openshift-ingress-canary/ingress-canary-jbs9f"
Mar 18 18:01:28.798327 master-0 kubenswrapper[30278]: I0318 18:01:28.797217 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvs5q\" (UniqueName: \"kubernetes.io/projected/a322ca7f-9095-4b43-96ff-ac8a637fae27-kube-api-access-zvs5q\") pod \"ingress-canary-jbs9f\" (UID: \"a322ca7f-9095-4b43-96ff-ac8a637fae27\") " pod="openshift-ingress-canary/ingress-canary-jbs9f"
Mar 18 18:01:28.899147 master-0 kubenswrapper[30278]: I0318 18:01:28.899060 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a322ca7f-9095-4b43-96ff-ac8a637fae27-cert\") pod \"ingress-canary-jbs9f\" (UID: \"a322ca7f-9095-4b43-96ff-ac8a637fae27\") " pod="openshift-ingress-canary/ingress-canary-jbs9f"
Mar 18 18:01:28.899147 master-0 kubenswrapper[30278]: I0318 18:01:28.899161 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvs5q\" (UniqueName: \"kubernetes.io/projected/a322ca7f-9095-4b43-96ff-ac8a637fae27-kube-api-access-zvs5q\") pod \"ingress-canary-jbs9f\" (UID: \"a322ca7f-9095-4b43-96ff-ac8a637fae27\") " pod="openshift-ingress-canary/ingress-canary-jbs9f"
Mar 18 18:01:28.903088 master-0 kubenswrapper[30278]: I0318 18:01:28.903033 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a322ca7f-9095-4b43-96ff-ac8a637fae27-cert\") pod \"ingress-canary-jbs9f\" (UID: \"a322ca7f-9095-4b43-96ff-ac8a637fae27\") " pod="openshift-ingress-canary/ingress-canary-jbs9f"
Mar 18 18:01:29.150419 master-0 kubenswrapper[30278]: I0318 18:01:29.150318 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf" event={"ID":"196136a4-31b2-484c-957a-49a994d9ca0d","Type":"ContainerStarted","Data":"3d6e38f36dfd1767f4deed3e83519fca3fd8d600dbd5d8d9748aded9b70f103c"}
Mar 18 18:01:29.150824 master-0 kubenswrapper[30278]: I0318 18:01:29.150787 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:29.693420 master-0 kubenswrapper[30278]: I0318 18:01:29.693319 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvs5q\" (UniqueName: \"kubernetes.io/projected/a322ca7f-9095-4b43-96ff-ac8a637fae27-kube-api-access-zvs5q\") pod \"ingress-canary-jbs9f\" (UID: \"a322ca7f-9095-4b43-96ff-ac8a637fae27\") " pod="openshift-ingress-canary/ingress-canary-jbs9f"
Mar 18 18:01:29.909106 master-0 kubenswrapper[30278]: I0318 18:01:29.909031 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-jbs9f"
Mar 18 18:01:29.959443 master-0 kubenswrapper[30278]: I0318 18:01:29.958899 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:01:30.011912 master-0 kubenswrapper[30278]: I0318 18:01:30.011251 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf" podStartSLOduration=4.652089644 podStartE2EDuration="7.011230846s" podCreationTimestamp="2026-03-18 18:01:23 +0000 UTC" firstStartedPulling="2026-03-18 18:01:26.178660993 +0000 UTC m=+55.345845588" lastFinishedPulling="2026-03-18 18:01:28.537802175 +0000 UTC m=+57.704986790" observedRunningTime="2026-03-18 18:01:29.670220751 +0000 UTC m=+58.837405346" watchObservedRunningTime="2026-03-18 18:01:30.011230846 +0000 UTC m=+59.178415441"
Mar 18 18:01:30.441179 master-0 kubenswrapper[30278]: I0318 18:01:30.441122 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-jbs9f"]
Mar 18 18:01:30.445979 master-0 kubenswrapper[30278]: W0318 18:01:30.445940 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda322ca7f_9095_4b43_96ff_ac8a637fae27.slice/crio-6fedee08cc53be6e5fabd867e268a0d75ab43ad7c76253d91f1e0d5aab03f379 WatchSource:0}: Error finding container 6fedee08cc53be6e5fabd867e268a0d75ab43ad7c76253d91f1e0d5aab03f379: Status 404 returned error can't find the container with id 6fedee08cc53be6e5fabd867e268a0d75ab43ad7c76253d91f1e0d5aab03f379
Mar 18 18:01:31.040619 master-0 kubenswrapper[30278]: I0318 18:01:31.040368 30278 scope.go:117] "RemoveContainer" containerID="9229a0847dcc4bfd99187b8d4d1c4189d57cc38cb01e1689224e1d421ed9426b"
Mar 18 18:01:31.082651 master-0 kubenswrapper[30278]: I0318 18:01:31.082447 30278 scope.go:117] "RemoveContainer" containerID="c0003daaaf5a355b3cb392bb03905611a5e11defed3a5bf40942d6e99ba55bcb"
Mar 18 18:01:31.104880 master-0 kubenswrapper[30278]: I0318 18:01:31.104832 30278 scope.go:117] "RemoveContainer" containerID="3c6f642b736991fd20242697f9273f8f6a126bc6027f7c5ddd27e70569fd9054"
Mar 18 18:01:31.125714 master-0 kubenswrapper[30278]: I0318 18:01:31.125662 30278 scope.go:117] "RemoveContainer" containerID="b07f4eb106a117d2a3aedb26bb538e640c6545e341eb4a44bae581e10c947c17"
Mar 18 18:01:31.174077 master-0 kubenswrapper[30278]: I0318 18:01:31.174032 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jbs9f" event={"ID":"a322ca7f-9095-4b43-96ff-ac8a637fae27","Type":"ContainerStarted","Data":"aa509d7627d41fb6c35035b8e1726e5a8bb209647deb5d6a2c3eaf2e2568f08e"}
Mar 18 18:01:31.174203 master-0 kubenswrapper[30278]: I0318 18:01:31.174085 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-jbs9f" event={"ID":"a322ca7f-9095-4b43-96ff-ac8a637fae27","Type":"ContainerStarted","Data":"6fedee08cc53be6e5fabd867e268a0d75ab43ad7c76253d91f1e0d5aab03f379"}
Mar 18 18:01:31.197467 master-0 kubenswrapper[30278]: I0318 18:01:31.197309 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-jbs9f" podStartSLOduration=3.197288611 podStartE2EDuration="3.197288611s" podCreationTimestamp="2026-03-18 18:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:01:31.193683224 +0000 UTC m=+60.360867829" watchObservedRunningTime="2026-03-18 18:01:31.197288611 +0000 UTC m=+60.364473216"
Mar 18 18:01:32.034518 master-0 kubenswrapper[30278]: I0318 18:01:32.034450 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-d4c2p"]
Mar 18 18:01:32.035637 master-0 kubenswrapper[30278]: I0318 18:01:32.035600 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-d4c2p"
Mar 18 18:01:32.042616 master-0 kubenswrapper[30278]: I0318 18:01:32.042542 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-kzdnw"
Mar 18 18:01:32.043266 master-0 kubenswrapper[30278]: I0318 18:01:32.042707 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 18 18:01:32.067388 master-0 kubenswrapper[30278]: I0318 18:01:32.067329 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:01:32.081951 master-0 kubenswrapper[30278]: I0318 18:01:32.081874 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6b789d4fdf-d4nw8"
Mar 18 18:01:32.099253 master-0 kubenswrapper[30278]: I0318 18:01:32.099179 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssb4j\" (UniqueName: \"kubernetes.io/projected/c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8-kube-api-access-ssb4j\") pod \"node-ca-d4c2p\" (UID: \"c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8\") " pod="openshift-image-registry/node-ca-d4c2p"
Mar 18 18:01:32.099553 master-0 kubenswrapper[30278]: I0318 18:01:32.099477 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8-host\") pod \"node-ca-d4c2p\" (UID: \"c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8\") " pod="openshift-image-registry/node-ca-d4c2p"
Mar 18 18:01:32.099728 master-0 kubenswrapper[30278]: I0318 18:01:32.099674 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8-serviceca\") pod \"node-ca-d4c2p\" (UID: \"c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8\") " pod="openshift-image-registry/node-ca-d4c2p"
Mar 18 18:01:32.201057 master-0 kubenswrapper[30278]: I0318 18:01:32.200984 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8-host\") pod \"node-ca-d4c2p\" (UID: \"c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8\") " pod="openshift-image-registry/node-ca-d4c2p"
Mar 18 18:01:32.201057 master-0 kubenswrapper[30278]: I0318 18:01:32.201068 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8-serviceca\") pod \"node-ca-d4c2p\" (UID: \"c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8\") " pod="openshift-image-registry/node-ca-d4c2p"
Mar 18 18:01:32.201433 master-0 kubenswrapper[30278]: I0318 18:01:32.201243 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssb4j\" (UniqueName: \"kubernetes.io/projected/c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8-kube-api-access-ssb4j\") pod \"node-ca-d4c2p\" (UID: \"c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8\") " pod="openshift-image-registry/node-ca-d4c2p"
Mar 18 18:01:32.202019 master-0 kubenswrapper[30278]: I0318 18:01:32.201970 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8-host\") pod \"node-ca-d4c2p\" (UID: \"c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8\") " pod="openshift-image-registry/node-ca-d4c2p"
Mar 18 18:01:32.205302 master-0 kubenswrapper[30278]: I0318 18:01:32.203305 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8-serviceca\") pod \"node-ca-d4c2p\" (UID: \"c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8\") " pod="openshift-image-registry/node-ca-d4c2p"
Mar 18 18:01:32.224248 master-0 kubenswrapper[30278]: I0318 18:01:32.224198 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssb4j\" (UniqueName: \"kubernetes.io/projected/c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8-kube-api-access-ssb4j\") pod \"node-ca-d4c2p\" (UID: \"c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8\") " pod="openshift-image-registry/node-ca-d4c2p"
Mar 18 18:01:32.367005 master-0 kubenswrapper[30278]: I0318 18:01:32.366919 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-d4c2p"
Mar 18 18:01:33.193125 master-0 kubenswrapper[30278]: I0318 18:01:33.193056 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-d4c2p" event={"ID":"c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8","Type":"ContainerStarted","Data":"d6421009a738f5e37c74d1c2231236b74801721dd4dc635e0225d79fbd347a99"}
Mar 18 18:01:35.215968 master-0 kubenswrapper[30278]: I0318 18:01:35.215842 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-d4c2p" event={"ID":"c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8","Type":"ContainerStarted","Data":"d33c057743632580378c5769c5797fdbfe250c1bdb4d390b1e6fe8f817a85af2"}
Mar 18 18:01:35.246922 master-0 kubenswrapper[30278]: I0318 18:01:35.246770 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-d4c2p" podStartSLOduration=1.181089472 podStartE2EDuration="3.246737284s" podCreationTimestamp="2026-03-18 18:01:32 +0000 UTC" firstStartedPulling="2026-03-18 18:01:32.418391016 +0000 UTC m=+61.585575651" lastFinishedPulling="2026-03-18 18:01:34.484038858 +0000 UTC m=+63.651223463" observedRunningTime="2026-03-18 18:01:35.244197077 +0000 UTC
m=+64.411381712" watchObservedRunningTime="2026-03-18 18:01:35.246737284 +0000 UTC m=+64.413921909" Mar 18 18:01:36.516603 master-0 kubenswrapper[30278]: I0318 18:01:36.516535 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"] Mar 18 18:01:39.845665 master-0 kubenswrapper[30278]: I0318 18:01:39.845536 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 18:01:39.846906 master-0 kubenswrapper[30278]: E0318 18:01:39.845882 30278 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 18:01:39.846906 master-0 kubenswrapper[30278]: E0318 18:01:39.845944 30278 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 18:01:39.846906 master-0 kubenswrapper[30278]: E0318 18:01:39.846068 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access podName:4285e80c-1ff9-42b3-9692-9f2ab6b61916 nodeName:}" failed. No retries permitted until 2026-03-18 18:02:43.846026252 +0000 UTC m=+133.013210917 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access") pod "installer-3-master-0" (UID: "4285e80c-1ff9-42b3-9692-9f2ab6b61916") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 18:01:47.450841 master-0 kubenswrapper[30278]: I0318 18:01:47.450737 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 18:01:47.452464 master-0 kubenswrapper[30278]: I0318 18:01:47.452422 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 18:01:47.458236 master-0 kubenswrapper[30278]: I0318 18:01:47.458169 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 18:01:47.459074 master-0 kubenswrapper[30278]: I0318 18:01:47.458979 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-kzvvj" Mar 18 18:01:47.473786 master-0 kubenswrapper[30278]: I0318 18:01:47.473694 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 18:01:47.488846 master-0 kubenswrapper[30278]: I0318 18:01:47.488642 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20865801-ac9a-4c2d-821e-126a9b463232-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"20865801-ac9a-4c2d-821e-126a9b463232\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 18:01:47.488846 master-0 kubenswrapper[30278]: I0318 18:01:47.488712 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20865801-ac9a-4c2d-821e-126a9b463232-kube-api-access\") pod \"installer-4-master-0\" 
(UID: \"20865801-ac9a-4c2d-821e-126a9b463232\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 18:01:47.489294 master-0 kubenswrapper[30278]: I0318 18:01:47.489035 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20865801-ac9a-4c2d-821e-126a9b463232-var-lock\") pod \"installer-4-master-0\" (UID: \"20865801-ac9a-4c2d-821e-126a9b463232\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 18:01:47.591155 master-0 kubenswrapper[30278]: I0318 18:01:47.591048 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20865801-ac9a-4c2d-821e-126a9b463232-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"20865801-ac9a-4c2d-821e-126a9b463232\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 18:01:47.591155 master-0 kubenswrapper[30278]: I0318 18:01:47.591130 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20865801-ac9a-4c2d-821e-126a9b463232-kube-api-access\") pod \"installer-4-master-0\" (UID: \"20865801-ac9a-4c2d-821e-126a9b463232\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 18:01:47.591896 master-0 kubenswrapper[30278]: I0318 18:01:47.591491 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20865801-ac9a-4c2d-821e-126a9b463232-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"20865801-ac9a-4c2d-821e-126a9b463232\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 18:01:47.591896 master-0 kubenswrapper[30278]: I0318 18:01:47.591546 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20865801-ac9a-4c2d-821e-126a9b463232-var-lock\") pod \"installer-4-master-0\" (UID: 
\"20865801-ac9a-4c2d-821e-126a9b463232\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 18:01:47.591896 master-0 kubenswrapper[30278]: I0318 18:01:47.591511 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20865801-ac9a-4c2d-821e-126a9b463232-var-lock\") pod \"installer-4-master-0\" (UID: \"20865801-ac9a-4c2d-821e-126a9b463232\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 18:01:47.618835 master-0 kubenswrapper[30278]: I0318 18:01:47.618722 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20865801-ac9a-4c2d-821e-126a9b463232-kube-api-access\") pod \"installer-4-master-0\" (UID: \"20865801-ac9a-4c2d-821e-126a9b463232\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 18:01:47.792225 master-0 kubenswrapper[30278]: I0318 18:01:47.791982 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 18:01:48.305657 master-0 kubenswrapper[30278]: I0318 18:01:48.305566 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 18:01:48.326445 master-0 kubenswrapper[30278]: W0318 18:01:48.326351 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod20865801_ac9a_4c2d_821e_126a9b463232.slice/crio-6f6e94445de6294550ec95dd8a8572234554f574d1cc00b5d3ac50fd3a83c4ce WatchSource:0}: Error finding container 6f6e94445de6294550ec95dd8a8572234554f574d1cc00b5d3ac50fd3a83c4ce: Status 404 returned error can't find the container with id 6f6e94445de6294550ec95dd8a8572234554f574d1cc00b5d3ac50fd3a83c4ce Mar 18 18:01:48.346742 master-0 kubenswrapper[30278]: I0318 18:01:48.346668 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"20865801-ac9a-4c2d-821e-126a9b463232","Type":"ContainerStarted","Data":"6f6e94445de6294550ec95dd8a8572234554f574d1cc00b5d3ac50fd3a83c4ce"} Mar 18 18:01:49.357591 master-0 kubenswrapper[30278]: I0318 18:01:49.357544 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"20865801-ac9a-4c2d-821e-126a9b463232","Type":"ContainerStarted","Data":"e114bb0d6fc2cd2ca24760eb893d95f17619be2af3bc2c5eb2b03c45d6a37626"} Mar 18 18:01:49.381019 master-0 kubenswrapper[30278]: I0318 18:01:49.380912 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.380884894 podStartE2EDuration="2.380884894s" podCreationTimestamp="2026-03-18 18:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:01:49.377986626 +0000 UTC m=+78.545171221" watchObservedRunningTime="2026-03-18 
18:01:49.380884894 +0000 UTC m=+78.548069509" Mar 18 18:01:53.200060 master-0 kubenswrapper[30278]: I0318 18:01:53.199994 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:01:53.200767 master-0 kubenswrapper[30278]: E0318 18:01:53.200517 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle podName:89b1dfdf-4633-45af-8abd-931a76eca960 nodeName:}" failed. No retries permitted until 2026-03-18 18:02:57.200429352 +0000 UTC m=+146.367613957 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960") : configmap references non-existent config key: ca-bundle.crt Mar 18 18:01:57.476236 master-0 kubenswrapper[30278]: I0318 18:01:57.476124 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:01:57.477165 master-0 kubenswrapper[30278]: E0318 18:01:57.476354 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle podName:5c6aeb7b-9c05-470e-b31f-f4154aadf170 nodeName:}" failed. 
No retries permitted until 2026-03-18 18:03:01.476331854 +0000 UTC m=+150.643516449 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170") : configmap references non-existent config key: ca-bundle.crt Mar 18 18:02:01.559196 master-0 kubenswrapper[30278]: I0318 18:02:01.559103 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf" podUID="196136a4-31b2-484c-957a-49a994d9ca0d" containerName="oauth-openshift" containerID="cri-o://3d6e38f36dfd1767f4deed3e83519fca3fd8d600dbd5d8d9748aded9b70f103c" gracePeriod=15 Mar 18 18:02:01.990796 master-0 kubenswrapper[30278]: I0318 18:02:01.990732 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf" Mar 18 18:02:02.032064 master-0 kubenswrapper[30278]: I0318 18:02:02.030193 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"] Mar 18 18:02:02.032064 master-0 kubenswrapper[30278]: E0318 18:02:02.030593 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="196136a4-31b2-484c-957a-49a994d9ca0d" containerName="oauth-openshift" Mar 18 18:02:02.032064 master-0 kubenswrapper[30278]: I0318 18:02:02.030608 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="196136a4-31b2-484c-957a-49a994d9ca0d" containerName="oauth-openshift" Mar 18 18:02:02.032064 master-0 kubenswrapper[30278]: I0318 18:02:02.030772 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="196136a4-31b2-484c-957a-49a994d9ca0d" containerName="oauth-openshift" Mar 18 18:02:02.032064 master-0 kubenswrapper[30278]: I0318 18:02:02.031420 30278 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t" Mar 18 18:02:02.058118 master-0 kubenswrapper[30278]: I0318 18:02:02.058049 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"] Mar 18 18:02:02.062897 master-0 kubenswrapper[30278]: I0318 18:02:02.062843 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-router-certs\") pod \"196136a4-31b2-484c-957a-49a994d9ca0d\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " Mar 18 18:02:02.063030 master-0 kubenswrapper[30278]: I0318 18:02:02.062907 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-trusted-ca-bundle\") pod \"196136a4-31b2-484c-957a-49a994d9ca0d\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " Mar 18 18:02:02.063184 master-0 kubenswrapper[30278]: I0318 18:02:02.063141 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-serving-cert\") pod \"196136a4-31b2-484c-957a-49a994d9ca0d\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " Mar 18 18:02:02.063247 master-0 kubenswrapper[30278]: I0318 18:02:02.063197 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-cliconfig\") pod \"196136a4-31b2-484c-957a-49a994d9ca0d\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " Mar 18 18:02:02.063247 master-0 kubenswrapper[30278]: I0318 18:02:02.063242 30278 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-service-ca\") pod \"196136a4-31b2-484c-957a-49a994d9ca0d\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " Mar 18 18:02:02.063357 master-0 kubenswrapper[30278]: I0318 18:02:02.063313 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-ocp-branding-template\") pod \"196136a4-31b2-484c-957a-49a994d9ca0d\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " Mar 18 18:02:02.063357 master-0 kubenswrapper[30278]: I0318 18:02:02.063341 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-session\") pod \"196136a4-31b2-484c-957a-49a994d9ca0d\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " Mar 18 18:02:02.063504 master-0 kubenswrapper[30278]: I0318 18:02:02.063461 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "196136a4-31b2-484c-957a-49a994d9ca0d" (UID: "196136a4-31b2-484c-957a-49a994d9ca0d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:02:02.064010 master-0 kubenswrapper[30278]: I0318 18:02:02.063971 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "196136a4-31b2-484c-957a-49a994d9ca0d" (UID: "196136a4-31b2-484c-957a-49a994d9ca0d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:02:02.064400 master-0 kubenswrapper[30278]: I0318 18:02:02.064372 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "196136a4-31b2-484c-957a-49a994d9ca0d" (UID: "196136a4-31b2-484c-957a-49a994d9ca0d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:02:02.065051 master-0 kubenswrapper[30278]: I0318 18:02:02.064894 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-login\") pod \"196136a4-31b2-484c-957a-49a994d9ca0d\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " Mar 18 18:02:02.065323 master-0 kubenswrapper[30278]: I0318 18:02:02.065302 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-provider-selection\") pod \"196136a4-31b2-484c-957a-49a994d9ca0d\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " Mar 18 18:02:02.068852 master-0 kubenswrapper[30278]: I0318 18:02:02.066449 30278 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "196136a4-31b2-484c-957a-49a994d9ca0d" (UID: "196136a4-31b2-484c-957a-49a994d9ca0d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:02:02.068852 master-0 kubenswrapper[30278]: I0318 18:02:02.067560 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/196136a4-31b2-484c-957a-49a994d9ca0d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "196136a4-31b2-484c-957a-49a994d9ca0d" (UID: "196136a4-31b2-484c-957a-49a994d9ca0d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:02:02.068852 master-0 kubenswrapper[30278]: I0318 18:02:02.067797 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "196136a4-31b2-484c-957a-49a994d9ca0d" (UID: "196136a4-31b2-484c-957a-49a994d9ca0d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:02:02.068852 master-0 kubenswrapper[30278]: I0318 18:02:02.067881 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "196136a4-31b2-484c-957a-49a994d9ca0d" (UID: "196136a4-31b2-484c-957a-49a994d9ca0d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:02:02.069614 master-0 kubenswrapper[30278]: I0318 18:02:02.069415 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "196136a4-31b2-484c-957a-49a994d9ca0d" (UID: "196136a4-31b2-484c-957a-49a994d9ca0d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:02:02.069614 master-0 kubenswrapper[30278]: I0318 18:02:02.069437 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "196136a4-31b2-484c-957a-49a994d9ca0d" (UID: "196136a4-31b2-484c-957a-49a994d9ca0d"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:02:02.070182 master-0 kubenswrapper[30278]: I0318 18:02:02.070121 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/196136a4-31b2-484c-957a-49a994d9ca0d-audit-dir\") pod \"196136a4-31b2-484c-957a-49a994d9ca0d\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " Mar 18 18:02:02.070374 master-0 kubenswrapper[30278]: I0318 18:02:02.070355 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-audit-policies\") pod \"196136a4-31b2-484c-957a-49a994d9ca0d\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " Mar 18 18:02:02.070799 master-0 kubenswrapper[30278]: I0318 18:02:02.070764 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd4cp\" (UniqueName: \"kubernetes.io/projected/196136a4-31b2-484c-957a-49a994d9ca0d-kube-api-access-dd4cp\") pod \"196136a4-31b2-484c-957a-49a994d9ca0d\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " Mar 18 18:02:02.071022 master-0 kubenswrapper[30278]: I0318 18:02:02.071003 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-error\") pod \"196136a4-31b2-484c-957a-49a994d9ca0d\" (UID: \"196136a4-31b2-484c-957a-49a994d9ca0d\") " Mar 18 18:02:02.071721 master-0 kubenswrapper[30278]: I0318 18:02:02.071675 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " 
pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t" Mar 18 18:02:02.071919 master-0 kubenswrapper[30278]: I0318 18:02:02.071901 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-service-ca\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t" Mar 18 18:02:02.072035 master-0 kubenswrapper[30278]: I0318 18:02:02.072018 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t" Mar 18 18:02:02.072141 master-0 kubenswrapper[30278]: I0318 18:02:02.072124 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-router-certs\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t" Mar 18 18:02:02.073777 master-0 kubenswrapper[30278]: I0318 18:02:02.073739 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtxqt\" (UniqueName: \"kubernetes.io/projected/e5ec16cb-0d08-44d7-8f1c-8965a5613854-kube-api-access-mtxqt\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t" Mar 18 18:02:02.074084 master-0 kubenswrapper[30278]: 
I0318 18:02:02.074067 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-audit-policies\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.075499 master-0 kubenswrapper[30278]: I0318 18:02:02.071901 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "196136a4-31b2-484c-957a-49a994d9ca0d" (UID: "196136a4-31b2-484c-957a-49a994d9ca0d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:02:02.075499 master-0 kubenswrapper[30278]: I0318 18:02:02.073229 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "196136a4-31b2-484c-957a-49a994d9ca0d" (UID: "196136a4-31b2-484c-957a-49a994d9ca0d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:02:02.075499 master-0 kubenswrapper[30278]: I0318 18:02:02.074628 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/196136a4-31b2-484c-957a-49a994d9ca0d-kube-api-access-dd4cp" (OuterVolumeSpecName: "kube-api-access-dd4cp") pod "196136a4-31b2-484c-957a-49a994d9ca0d" (UID: "196136a4-31b2-484c-957a-49a994d9ca0d"). InnerVolumeSpecName "kube-api-access-dd4cp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:02:02.077004 master-0 kubenswrapper[30278]: I0318 18:02:02.074221 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.077164 master-0 kubenswrapper[30278]: I0318 18:02:02.077144 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.077331 master-0 kubenswrapper[30278]: I0318 18:02:02.077312 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5ec16cb-0d08-44d7-8f1c-8965a5613854-audit-dir\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.077446 master-0 kubenswrapper[30278]: I0318 18:02:02.077427 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.077553 master-0 kubenswrapper[30278]: I0318 18:02:02.077536 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-error\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.077660 master-0 kubenswrapper[30278]: I0318 18:02:02.077644 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-login\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.077872 master-0 kubenswrapper[30278]: I0318 18:02:02.077853 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-session\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.078038 master-0 kubenswrapper[30278]: I0318 18:02:02.078022 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:02.078930 master-0 kubenswrapper[30278]: I0318 18:02:02.078911 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:02.079143 master-0 kubenswrapper[30278]: I0318 18:02:02.079126 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:02.085090 master-0 kubenswrapper[30278]: I0318 18:02:02.085043 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:02.085345 master-0 kubenswrapper[30278]: I0318 18:02:02.085325 30278 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/196136a4-31b2-484c-957a-49a994d9ca0d-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:02.085459 master-0 kubenswrapper[30278]: I0318 18:02:02.085444 30278 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-audit-policies\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:02.085559 master-0 kubenswrapper[30278]: I0318 18:02:02.085545 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd4cp\" (UniqueName: \"kubernetes.io/projected/196136a4-31b2-484c-957a-49a994d9ca0d-kube-api-access-dd4cp\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:02.089829 master-0 kubenswrapper[30278]: I0318 18:02:02.086364 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:02.089829 master-0 kubenswrapper[30278]: I0318 18:02:02.086406 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:02.089829 master-0 kubenswrapper[30278]: I0318 18:02:02.086422 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:02.089829 master-0 kubenswrapper[30278]: I0318 18:02:02.086433 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:02.089829 master-0 kubenswrapper[30278]: I0318 18:02:02.086446 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:02.089829 master-0 kubenswrapper[30278]: I0318 18:02:02.076980 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "196136a4-31b2-484c-957a-49a994d9ca0d" (UID: "196136a4-31b2-484c-957a-49a994d9ca0d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:02:02.188130 master-0 kubenswrapper[30278]: I0318 18:02:02.187957 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-audit-policies\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.188130 master-0 kubenswrapper[30278]: I0318 18:02:02.188015 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.188130 master-0 kubenswrapper[30278]: I0318 18:02:02.188052 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.188130 master-0 kubenswrapper[30278]: I0318 18:02:02.188088 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5ec16cb-0d08-44d7-8f1c-8965a5613854-audit-dir\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.188130 master-0 kubenswrapper[30278]: I0318 18:02:02.188106 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.188130 master-0 kubenswrapper[30278]: I0318 18:02:02.188125 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-error\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.188693 master-0 kubenswrapper[30278]: I0318 18:02:02.188639 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5ec16cb-0d08-44d7-8f1c-8965a5613854-audit-dir\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.189023 master-0 kubenswrapper[30278]: I0318 18:02:02.188982 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-login\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.189147 master-0 kubenswrapper[30278]: I0318 18:02:02.189114 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-session\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.189263 master-0 kubenswrapper[30278]: I0318 18:02:02.189236 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.189340 master-0 kubenswrapper[30278]: I0318 18:02:02.189296 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-service-ca\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.189340 master-0 kubenswrapper[30278]: I0318 18:02:02.189330 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.189469 master-0 kubenswrapper[30278]: I0318 18:02:02.189369 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-router-certs\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.189723 master-0 kubenswrapper[30278]: I0318 18:02:02.189672 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-audit-policies\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.189999 master-0 kubenswrapper[30278]: I0318 18:02:02.189957 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.190286 master-0 kubenswrapper[30278]: I0318 18:02:02.190230 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtxqt\" (UniqueName: \"kubernetes.io/projected/e5ec16cb-0d08-44d7-8f1c-8965a5613854-kube-api-access-mtxqt\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.190510 master-0 kubenswrapper[30278]: I0318 18:02:02.190470 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/196136a4-31b2-484c-957a-49a994d9ca0d-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:02.190736 master-0 kubenswrapper[30278]: I0318 18:02:02.190698 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-service-ca\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.191947 master-0 kubenswrapper[30278]: I0318 18:02:02.191910 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.192622 master-0 kubenswrapper[30278]: I0318 18:02:02.192547 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.192893 master-0 kubenswrapper[30278]: I0318 18:02:02.192858 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-login\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.193644 master-0 kubenswrapper[30278]: I0318 18:02:02.193607 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.193644 master-0 kubenswrapper[30278]: I0318 18:02:02.193633 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-session\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.193871 master-0 kubenswrapper[30278]: I0318 18:02:02.193806 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-router-certs\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.196121 master-0 kubenswrapper[30278]: I0318 18:02:02.196055 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.196817 master-0 kubenswrapper[30278]: I0318 18:02:02.196778 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-error\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.222383 master-0 kubenswrapper[30278]: I0318 18:02:02.222344 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtxqt\" (UniqueName: \"kubernetes.io/projected/e5ec16cb-0d08-44d7-8f1c-8965a5613854-kube-api-access-mtxqt\") pod \"oauth-openshift-d89d9c4d9-57l4t\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.357710 master-0 kubenswrapper[30278]: I0318 18:02:02.357613 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:02.475224 master-0 kubenswrapper[30278]: I0318 18:02:02.475029 30278 generic.go:334] "Generic (PLEG): container finished" podID="196136a4-31b2-484c-957a-49a994d9ca0d" containerID="3d6e38f36dfd1767f4deed3e83519fca3fd8d600dbd5d8d9748aded9b70f103c" exitCode=0
Mar 18 18:02:02.475224 master-0 kubenswrapper[30278]: I0318 18:02:02.475087 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf" event={"ID":"196136a4-31b2-484c-957a-49a994d9ca0d","Type":"ContainerDied","Data":"3d6e38f36dfd1767f4deed3e83519fca3fd8d600dbd5d8d9748aded9b70f103c"}
Mar 18 18:02:02.475224 master-0 kubenswrapper[30278]: I0318 18:02:02.475120 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf" event={"ID":"196136a4-31b2-484c-957a-49a994d9ca0d","Type":"ContainerDied","Data":"32ab7d9ca4f7cadb5d46c27011006bbc2873365f5105837143d3e4db539365f6"}
Mar 18 18:02:02.475224 master-0 kubenswrapper[30278]: I0318 18:02:02.475141 30278 scope.go:117] "RemoveContainer" containerID="3d6e38f36dfd1767f4deed3e83519fca3fd8d600dbd5d8d9748aded9b70f103c"
Mar 18 18:02:02.475659 master-0 kubenswrapper[30278]: I0318 18:02:02.475385 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"
Mar 18 18:02:02.529014 master-0 kubenswrapper[30278]: I0318 18:02:02.528966 30278 scope.go:117] "RemoveContainer" containerID="3d6e38f36dfd1767f4deed3e83519fca3fd8d600dbd5d8d9748aded9b70f103c"
Mar 18 18:02:02.529510 master-0 kubenswrapper[30278]: E0318 18:02:02.529467 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d6e38f36dfd1767f4deed3e83519fca3fd8d600dbd5d8d9748aded9b70f103c\": container with ID starting with 3d6e38f36dfd1767f4deed3e83519fca3fd8d600dbd5d8d9748aded9b70f103c not found: ID does not exist" containerID="3d6e38f36dfd1767f4deed3e83519fca3fd8d600dbd5d8d9748aded9b70f103c"
Mar 18 18:02:02.529591 master-0 kubenswrapper[30278]: I0318 18:02:02.529521 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d6e38f36dfd1767f4deed3e83519fca3fd8d600dbd5d8d9748aded9b70f103c"} err="failed to get container status \"3d6e38f36dfd1767f4deed3e83519fca3fd8d600dbd5d8d9748aded9b70f103c\": rpc error: code = NotFound desc = could not find container \"3d6e38f36dfd1767f4deed3e83519fca3fd8d600dbd5d8d9748aded9b70f103c\": container with ID starting with 3d6e38f36dfd1767f4deed3e83519fca3fd8d600dbd5d8d9748aded9b70f103c not found: ID does not exist"
Mar 18 18:02:02.536604 master-0 kubenswrapper[30278]: I0318 18:02:02.536567 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"]
Mar 18 18:02:02.548858 master-0 kubenswrapper[30278]: I0318 18:02:02.548769 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-596ffdf9db-g7vtf"]
Mar 18 18:02:02.827752 master-0 kubenswrapper[30278]: I0318 18:02:02.826022 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"]
Mar 18 18:02:03.064985 master-0 kubenswrapper[30278]: I0318 18:02:03.064940 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="196136a4-31b2-484c-957a-49a994d9ca0d" path="/var/lib/kubelet/pods/196136a4-31b2-484c-957a-49a994d9ca0d/volumes"
Mar 18 18:02:03.485352 master-0 kubenswrapper[30278]: I0318 18:02:03.485135 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t" event={"ID":"e5ec16cb-0d08-44d7-8f1c-8965a5613854","Type":"ContainerStarted","Data":"0c6ff64b886c97c98a34d7858beca9cfac907017560a50040566b3ad5fc39ca4"}
Mar 18 18:02:03.485352 master-0 kubenswrapper[30278]: I0318 18:02:03.485186 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t" event={"ID":"e5ec16cb-0d08-44d7-8f1c-8965a5613854","Type":"ContainerStarted","Data":"3614cc6956911548067b704c5c0f5658ad46e793b076fb2d8a91f86f1be1a500"}
Mar 18 18:02:03.485679 master-0 kubenswrapper[30278]: I0318 18:02:03.485641 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:03.492074 master-0 kubenswrapper[30278]: I0318 18:02:03.491999 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"
Mar 18 18:02:03.516065 master-0 kubenswrapper[30278]: I0318 18:02:03.515965 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t" podStartSLOduration=27.515934151 podStartE2EDuration="27.515934151s" podCreationTimestamp="2026-03-18 18:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:02:03.514154584 +0000 UTC m=+92.681339199" watchObservedRunningTime="2026-03-18 18:02:03.515934151 +0000 UTC m=+92.683118746"
Mar 18 18:02:06.507811 master-0 kubenswrapper[30278]: I0318 18:02:06.507720 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 18 18:02:06.512489 master-0 kubenswrapper[30278]: I0318 18:02:06.512422 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-0" podUID="20865801-ac9a-4c2d-821e-126a9b463232" containerName="installer" containerID="cri-o://e114bb0d6fc2cd2ca24760eb893d95f17619be2af3bc2c5eb2b03c45d6a37626" gracePeriod=30
Mar 18 18:02:07.541428 master-0 kubenswrapper[30278]: I0318 18:02:07.541344 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5755b457-f4cbl"]
Mar 18 18:02:07.542072 master-0 kubenswrapper[30278]: I0318 18:02:07.541747 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerName="controller-manager" containerID="cri-o://1612a1070b4cbefcdbe41900f384f9e6a016b7884254d5d1d20e3137f1bfc3ce" gracePeriod=30
Mar 18 18:02:07.574916 master-0 kubenswrapper[30278]: I0318 18:02:07.574832 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw"]
Mar 18 18:02:07.575209 master-0 kubenswrapper[30278]: I0318 18:02:07.575151 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" podUID="253ec853-f637-4aa4-8e8e-eb655dfccccb" containerName="route-controller-manager" containerID="cri-o://3a64cbfee76c34b2d29f80158f22a8a5ce70c126b4ec12a27218b02eb32ec139" gracePeriod=30
Mar 18 18:02:08.170864 master-0 kubenswrapper[30278]: I0318 18:02:08.170722 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl"
Mar 18 18:02:08.176377 master-0 kubenswrapper[30278]: I0318 18:02:08.176321 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw"
Mar 18 18:02:08.326473 master-0 kubenswrapper[30278]: I0318 18:02:08.326336 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-client-ca\") pod \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") "
Mar 18 18:02:08.326806 master-0 kubenswrapper[30278]: I0318 18:02:08.326521 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx596\" (UniqueName: \"kubernetes.io/projected/253ec853-f637-4aa4-8e8e-eb655dfccccb-kube-api-access-cx596\") pod \"253ec853-f637-4aa4-8e8e-eb655dfccccb\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") "
Mar 18 18:02:08.326806 master-0 kubenswrapper[30278]: I0318 18:02:08.326590 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-config\") pod \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") "
Mar 18 18:02:08.326963 master-0 kubenswrapper[30278]: I0318 18:02:08.326771 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-client-ca\") pod \"253ec853-f637-4aa4-8e8e-eb655dfccccb\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") "
Mar 18 18:02:08.327037 master-0 kubenswrapper[30278]: I0318 18:02:08.326944 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-client-ca" (OuterVolumeSpecName: "client-ca") pod "1db0a246-ca43-4e7c-b09e-e80218ae99b1" (UID: "1db0a246-ca43-4e7c-b09e-e80218ae99b1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:02:08.327113 master-0 kubenswrapper[30278]: I0318 18:02:08.327063 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1db0a246-ca43-4e7c-b09e-e80218ae99b1-serving-cert\") pod \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") "
Mar 18 18:02:08.327182 master-0 kubenswrapper[30278]: I0318 18:02:08.327120 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-config\") pod \"253ec853-f637-4aa4-8e8e-eb655dfccccb\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") "
Mar 18 18:02:08.327263 master-0 kubenswrapper[30278]: I0318 18:02:08.327216 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/253ec853-f637-4aa4-8e8e-eb655dfccccb-serving-cert\") pod \"253ec853-f637-4aa4-8e8e-eb655dfccccb\" (UID: \"253ec853-f637-4aa4-8e8e-eb655dfccccb\") "
Mar 18 18:02:08.327380 master-0 kubenswrapper[30278]: I0318 18:02:08.327290 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9g8f\" (UniqueName: \"kubernetes.io/projected/1db0a246-ca43-4e7c-b09e-e80218ae99b1-kube-api-access-n9g8f\") pod \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") "
Mar 18 18:02:08.327380 master-0 kubenswrapper[30278]: I0318 18:02:08.327330 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-proxy-ca-bundles\") pod \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\" (UID: \"1db0a246-ca43-4e7c-b09e-e80218ae99b1\") "
Mar 18 18:02:08.327528 master-0 kubenswrapper[30278]: I0318 18:02:08.327415 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-client-ca" (OuterVolumeSpecName: "client-ca") pod "253ec853-f637-4aa4-8e8e-eb655dfccccb" (UID: "253ec853-f637-4aa4-8e8e-eb655dfccccb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:02:08.328041 master-0 kubenswrapper[30278]: I0318 18:02:08.327963 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-config" (OuterVolumeSpecName: "config") pod "253ec853-f637-4aa4-8e8e-eb655dfccccb" (UID: "253ec853-f637-4aa4-8e8e-eb655dfccccb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:02:08.328328 master-0 kubenswrapper[30278]: I0318 18:02:08.328065 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-config" (OuterVolumeSpecName: "config") pod "1db0a246-ca43-4e7c-b09e-e80218ae99b1" (UID: "1db0a246-ca43-4e7c-b09e-e80218ae99b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:02:08.328421 master-0 kubenswrapper[30278]: I0318 18:02:08.328318 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1db0a246-ca43-4e7c-b09e-e80218ae99b1" (UID: "1db0a246-ca43-4e7c-b09e-e80218ae99b1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:02:08.328498 master-0 kubenswrapper[30278]: I0318 18:02:08.328437 30278 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:08.328498 master-0 kubenswrapper[30278]: I0318 18:02:08.328480 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:08.328633 master-0 kubenswrapper[30278]: I0318 18:02:08.328510 30278 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:08.328633 master-0 kubenswrapper[30278]: I0318 18:02:08.328541 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/253ec853-f637-4aa4-8e8e-eb655dfccccb-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:08.330801 master-0 kubenswrapper[30278]: I0318 18:02:08.330738 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/253ec853-f637-4aa4-8e8e-eb655dfccccb-kube-api-access-cx596" (OuterVolumeSpecName: "kube-api-access-cx596") pod "253ec853-f637-4aa4-8e8e-eb655dfccccb" (UID: "253ec853-f637-4aa4-8e8e-eb655dfccccb"). InnerVolumeSpecName "kube-api-access-cx596". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:02:08.330938 master-0 kubenswrapper[30278]: I0318 18:02:08.330843 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db0a246-ca43-4e7c-b09e-e80218ae99b1-kube-api-access-n9g8f" (OuterVolumeSpecName: "kube-api-access-n9g8f") pod "1db0a246-ca43-4e7c-b09e-e80218ae99b1" (UID: "1db0a246-ca43-4e7c-b09e-e80218ae99b1"). InnerVolumeSpecName "kube-api-access-n9g8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:02:08.333164 master-0 kubenswrapper[30278]: I0318 18:02:08.333081 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/253ec853-f637-4aa4-8e8e-eb655dfccccb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "253ec853-f637-4aa4-8e8e-eb655dfccccb" (UID: "253ec853-f637-4aa4-8e8e-eb655dfccccb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:02:08.333358 master-0 kubenswrapper[30278]: I0318 18:02:08.333207 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db0a246-ca43-4e7c-b09e-e80218ae99b1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1db0a246-ca43-4e7c-b09e-e80218ae99b1" (UID: "1db0a246-ca43-4e7c-b09e-e80218ae99b1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:02:08.436519 master-0 kubenswrapper[30278]: I0318 18:02:08.433217 30278 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1db0a246-ca43-4e7c-b09e-e80218ae99b1-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:08.436519 master-0 kubenswrapper[30278]: I0318 18:02:08.433315 30278 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/253ec853-f637-4aa4-8e8e-eb655dfccccb-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:08.436519 master-0 kubenswrapper[30278]: I0318 18:02:08.433340 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9g8f\" (UniqueName: \"kubernetes.io/projected/1db0a246-ca43-4e7c-b09e-e80218ae99b1-kube-api-access-n9g8f\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:08.436519 master-0 kubenswrapper[30278]: I0318 18:02:08.433364 30278 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1db0a246-ca43-4e7c-b09e-e80218ae99b1-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:08.436519 master-0 kubenswrapper[30278]: I0318 18:02:08.433394 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx596\" (UniqueName: \"kubernetes.io/projected/253ec853-f637-4aa4-8e8e-eb655dfccccb-kube-api-access-cx596\") on node \"master-0\" DevicePath \"\""
Mar 18 18:02:08.539322 master-0 kubenswrapper[30278]: I0318 18:02:08.539231 30278 generic.go:334] "Generic (PLEG): container finished" podID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerID="1612a1070b4cbefcdbe41900f384f9e6a016b7884254d5d1d20e3137f1bfc3ce" exitCode=0
Mar 18 18:02:08.539592 master-0 kubenswrapper[30278]: I0318 18:02:08.539320 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl"
event={"ID":"1db0a246-ca43-4e7c-b09e-e80218ae99b1","Type":"ContainerDied","Data":"1612a1070b4cbefcdbe41900f384f9e6a016b7884254d5d1d20e3137f1bfc3ce"} Mar 18 18:02:08.539592 master-0 kubenswrapper[30278]: I0318 18:02:08.539392 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" event={"ID":"1db0a246-ca43-4e7c-b09e-e80218ae99b1","Type":"ContainerDied","Data":"e34a7d43723491c0ffb4df04571420d726ec22d80fe5f50be4255c5ba300c922"} Mar 18 18:02:08.539592 master-0 kubenswrapper[30278]: I0318 18:02:08.539435 30278 scope.go:117] "RemoveContainer" containerID="1612a1070b4cbefcdbe41900f384f9e6a016b7884254d5d1d20e3137f1bfc3ce" Mar 18 18:02:08.541932 master-0 kubenswrapper[30278]: I0318 18:02:08.541902 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5755b457-f4cbl" Mar 18 18:02:08.542865 master-0 kubenswrapper[30278]: I0318 18:02:08.542825 30278 generic.go:334] "Generic (PLEG): container finished" podID="253ec853-f637-4aa4-8e8e-eb655dfccccb" containerID="3a64cbfee76c34b2d29f80158f22a8a5ce70c126b4ec12a27218b02eb32ec139" exitCode=0 Mar 18 18:02:08.542968 master-0 kubenswrapper[30278]: I0318 18:02:08.542889 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" Mar 18 18:02:08.543044 master-0 kubenswrapper[30278]: I0318 18:02:08.542912 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" event={"ID":"253ec853-f637-4aa4-8e8e-eb655dfccccb","Type":"ContainerDied","Data":"3a64cbfee76c34b2d29f80158f22a8a5ce70c126b4ec12a27218b02eb32ec139"} Mar 18 18:02:08.543044 master-0 kubenswrapper[30278]: I0318 18:02:08.543030 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw" event={"ID":"253ec853-f637-4aa4-8e8e-eb655dfccccb","Type":"ContainerDied","Data":"b84bd85aac3ddf41b65c4a3ee28624adfec16e2d4dd19c154137ff1a28ded42b"} Mar 18 18:02:08.569063 master-0 kubenswrapper[30278]: I0318 18:02:08.569007 30278 scope.go:117] "RemoveContainer" containerID="a1b33ace751148a425bdf00f07398d55bcc2dc83ff83d8278e7851ed219c1db7" Mar 18 18:02:08.621248 master-0 kubenswrapper[30278]: I0318 18:02:08.620080 30278 scope.go:117] "RemoveContainer" containerID="1612a1070b4cbefcdbe41900f384f9e6a016b7884254d5d1d20e3137f1bfc3ce" Mar 18 18:02:08.626178 master-0 kubenswrapper[30278]: E0318 18:02:08.626088 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1612a1070b4cbefcdbe41900f384f9e6a016b7884254d5d1d20e3137f1bfc3ce\": container with ID starting with 1612a1070b4cbefcdbe41900f384f9e6a016b7884254d5d1d20e3137f1bfc3ce not found: ID does not exist" containerID="1612a1070b4cbefcdbe41900f384f9e6a016b7884254d5d1d20e3137f1bfc3ce" Mar 18 18:02:08.626356 master-0 kubenswrapper[30278]: I0318 18:02:08.626182 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1612a1070b4cbefcdbe41900f384f9e6a016b7884254d5d1d20e3137f1bfc3ce"} err="failed to get container status 
\"1612a1070b4cbefcdbe41900f384f9e6a016b7884254d5d1d20e3137f1bfc3ce\": rpc error: code = NotFound desc = could not find container \"1612a1070b4cbefcdbe41900f384f9e6a016b7884254d5d1d20e3137f1bfc3ce\": container with ID starting with 1612a1070b4cbefcdbe41900f384f9e6a016b7884254d5d1d20e3137f1bfc3ce not found: ID does not exist" Mar 18 18:02:08.626356 master-0 kubenswrapper[30278]: I0318 18:02:08.626231 30278 scope.go:117] "RemoveContainer" containerID="a1b33ace751148a425bdf00f07398d55bcc2dc83ff83d8278e7851ed219c1db7" Mar 18 18:02:08.628714 master-0 kubenswrapper[30278]: E0318 18:02:08.628655 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b33ace751148a425bdf00f07398d55bcc2dc83ff83d8278e7851ed219c1db7\": container with ID starting with a1b33ace751148a425bdf00f07398d55bcc2dc83ff83d8278e7851ed219c1db7 not found: ID does not exist" containerID="a1b33ace751148a425bdf00f07398d55bcc2dc83ff83d8278e7851ed219c1db7" Mar 18 18:02:08.628714 master-0 kubenswrapper[30278]: I0318 18:02:08.628695 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b33ace751148a425bdf00f07398d55bcc2dc83ff83d8278e7851ed219c1db7"} err="failed to get container status \"a1b33ace751148a425bdf00f07398d55bcc2dc83ff83d8278e7851ed219c1db7\": rpc error: code = NotFound desc = could not find container \"a1b33ace751148a425bdf00f07398d55bcc2dc83ff83d8278e7851ed219c1db7\": container with ID starting with a1b33ace751148a425bdf00f07398d55bcc2dc83ff83d8278e7851ed219c1db7 not found: ID does not exist" Mar 18 18:02:08.628714 master-0 kubenswrapper[30278]: I0318 18:02:08.628720 30278 scope.go:117] "RemoveContainer" containerID="3a64cbfee76c34b2d29f80158f22a8a5ce70c126b4ec12a27218b02eb32ec139" Mar 18 18:02:08.655263 master-0 kubenswrapper[30278]: I0318 18:02:08.655218 30278 scope.go:117] "RemoveContainer" containerID="2db7d7384c8a35f95ff52d00ab25bbedc70ce8cf90c5c6ca6aff91f2c9272cd8" Mar 18 
18:02:08.671934 master-0 kubenswrapper[30278]: I0318 18:02:08.671829 30278 scope.go:117] "RemoveContainer" containerID="3a64cbfee76c34b2d29f80158f22a8a5ce70c126b4ec12a27218b02eb32ec139" Mar 18 18:02:08.672609 master-0 kubenswrapper[30278]: E0318 18:02:08.672561 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a64cbfee76c34b2d29f80158f22a8a5ce70c126b4ec12a27218b02eb32ec139\": container with ID starting with 3a64cbfee76c34b2d29f80158f22a8a5ce70c126b4ec12a27218b02eb32ec139 not found: ID does not exist" containerID="3a64cbfee76c34b2d29f80158f22a8a5ce70c126b4ec12a27218b02eb32ec139" Mar 18 18:02:08.672696 master-0 kubenswrapper[30278]: I0318 18:02:08.672603 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a64cbfee76c34b2d29f80158f22a8a5ce70c126b4ec12a27218b02eb32ec139"} err="failed to get container status \"3a64cbfee76c34b2d29f80158f22a8a5ce70c126b4ec12a27218b02eb32ec139\": rpc error: code = NotFound desc = could not find container \"3a64cbfee76c34b2d29f80158f22a8a5ce70c126b4ec12a27218b02eb32ec139\": container with ID starting with 3a64cbfee76c34b2d29f80158f22a8a5ce70c126b4ec12a27218b02eb32ec139 not found: ID does not exist" Mar 18 18:02:08.672696 master-0 kubenswrapper[30278]: I0318 18:02:08.672636 30278 scope.go:117] "RemoveContainer" containerID="2db7d7384c8a35f95ff52d00ab25bbedc70ce8cf90c5c6ca6aff91f2c9272cd8" Mar 18 18:02:08.673045 master-0 kubenswrapper[30278]: E0318 18:02:08.673005 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2db7d7384c8a35f95ff52d00ab25bbedc70ce8cf90c5c6ca6aff91f2c9272cd8\": container with ID starting with 2db7d7384c8a35f95ff52d00ab25bbedc70ce8cf90c5c6ca6aff91f2c9272cd8 not found: ID does not exist" containerID="2db7d7384c8a35f95ff52d00ab25bbedc70ce8cf90c5c6ca6aff91f2c9272cd8" Mar 18 18:02:08.673045 master-0 kubenswrapper[30278]: 
I0318 18:02:08.673033 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2db7d7384c8a35f95ff52d00ab25bbedc70ce8cf90c5c6ca6aff91f2c9272cd8"} err="failed to get container status \"2db7d7384c8a35f95ff52d00ab25bbedc70ce8cf90c5c6ca6aff91f2c9272cd8\": rpc error: code = NotFound desc = could not find container \"2db7d7384c8a35f95ff52d00ab25bbedc70ce8cf90c5c6ca6aff91f2c9272cd8\": container with ID starting with 2db7d7384c8a35f95ff52d00ab25bbedc70ce8cf90c5c6ca6aff91f2c9272cd8 not found: ID does not exist" Mar 18 18:02:09.604172 master-0 kubenswrapper[30278]: I0318 18:02:09.603370 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f66d74d5-vc6n8"] Mar 18 18:02:09.604172 master-0 kubenswrapper[30278]: E0318 18:02:09.603863 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerName="controller-manager" Mar 18 18:02:09.604172 master-0 kubenswrapper[30278]: I0318 18:02:09.603885 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerName="controller-manager" Mar 18 18:02:09.604172 master-0 kubenswrapper[30278]: E0318 18:02:09.603898 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253ec853-f637-4aa4-8e8e-eb655dfccccb" containerName="route-controller-manager" Mar 18 18:02:09.604172 master-0 kubenswrapper[30278]: I0318 18:02:09.603908 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="253ec853-f637-4aa4-8e8e-eb655dfccccb" containerName="route-controller-manager" Mar 18 18:02:09.604172 master-0 kubenswrapper[30278]: E0318 18:02:09.603954 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerName="controller-manager" Mar 18 18:02:09.604172 master-0 kubenswrapper[30278]: I0318 18:02:09.603963 30278 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerName="controller-manager" Mar 18 18:02:09.604172 master-0 kubenswrapper[30278]: I0318 18:02:09.604145 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="253ec853-f637-4aa4-8e8e-eb655dfccccb" containerName="route-controller-manager" Mar 18 18:02:09.604172 master-0 kubenswrapper[30278]: I0318 18:02:09.604203 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerName="controller-manager" Mar 18 18:02:09.604172 master-0 kubenswrapper[30278]: I0318 18:02:09.604219 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="253ec853-f637-4aa4-8e8e-eb655dfccccb" containerName="route-controller-manager" Mar 18 18:02:09.605844 master-0 kubenswrapper[30278]: I0318 18:02:09.604239 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" containerName="controller-manager" Mar 18 18:02:09.605844 master-0 kubenswrapper[30278]: I0318 18:02:09.604960 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.622496 master-0 kubenswrapper[30278]: I0318 18:02:09.606374 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5755b457-f4cbl"] Mar 18 18:02:09.622496 master-0 kubenswrapper[30278]: I0318 18:02:09.610778 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 18:02:09.622496 master-0 kubenswrapper[30278]: I0318 18:02:09.610859 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-6clkh" Mar 18 18:02:09.622496 master-0 kubenswrapper[30278]: I0318 18:02:09.611083 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 18:02:09.622496 master-0 kubenswrapper[30278]: I0318 18:02:09.611789 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 18:02:09.622496 master-0 kubenswrapper[30278]: I0318 18:02:09.612918 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 18:02:09.627377 master-0 kubenswrapper[30278]: I0318 18:02:09.625357 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 18:02:09.627377 master-0 kubenswrapper[30278]: I0318 18:02:09.625574 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 18:02:09.641417 master-0 kubenswrapper[30278]: I0318 18:02:09.637970 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f5755b457-f4cbl"] Mar 18 18:02:09.647204 master-0 kubenswrapper[30278]: I0318 18:02:09.647135 30278 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f66d74d5-vc6n8"] Mar 18 18:02:09.654699 master-0 kubenswrapper[30278]: I0318 18:02:09.652951 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 18:02:09.654699 master-0 kubenswrapper[30278]: E0318 18:02:09.653337 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253ec853-f637-4aa4-8e8e-eb655dfccccb" containerName="route-controller-manager" Mar 18 18:02:09.654699 master-0 kubenswrapper[30278]: I0318 18:02:09.653353 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="253ec853-f637-4aa4-8e8e-eb655dfccccb" containerName="route-controller-manager" Mar 18 18:02:09.654699 master-0 kubenswrapper[30278]: I0318 18:02:09.654309 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 18:02:09.693779 master-0 kubenswrapper[30278]: I0318 18:02:09.689216 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 18:02:09.704899 master-0 kubenswrapper[30278]: I0318 18:02:09.704861 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw"] Mar 18 18:02:09.708799 master-0 kubenswrapper[30278]: I0318 18:02:09.708743 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57dbfd879f-44tfw"] Mar 18 18:02:09.739477 master-0 kubenswrapper[30278]: I0318 18:02:09.738786 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c7c10bc-3812-41d9-bb09-eaa5d283311a-proxy-ca-bundles\") pod \"controller-manager-6f66d74d5-vc6n8\" (UID: \"9c7c10bc-3812-41d9-bb09-eaa5d283311a\") " 
pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.739477 master-0 kubenswrapper[30278]: I0318 18:02:09.738862 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c7c10bc-3812-41d9-bb09-eaa5d283311a-config\") pod \"controller-manager-6f66d74d5-vc6n8\" (UID: \"9c7c10bc-3812-41d9-bb09-eaa5d283311a\") " pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.739477 master-0 kubenswrapper[30278]: I0318 18:02:09.738891 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c7c10bc-3812-41d9-bb09-eaa5d283311a-serving-cert\") pod \"controller-manager-6f66d74d5-vc6n8\" (UID: \"9c7c10bc-3812-41d9-bb09-eaa5d283311a\") " pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.739477 master-0 kubenswrapper[30278]: I0318 18:02:09.738919 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss49r\" (UniqueName: \"kubernetes.io/projected/9c7c10bc-3812-41d9-bb09-eaa5d283311a-kube-api-access-ss49r\") pod \"controller-manager-6f66d74d5-vc6n8\" (UID: \"9c7c10bc-3812-41d9-bb09-eaa5d283311a\") " pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.739477 master-0 kubenswrapper[30278]: I0318 18:02:09.738996 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c7c10bc-3812-41d9-bb09-eaa5d283311a-client-ca\") pod \"controller-manager-6f66d74d5-vc6n8\" (UID: \"9c7c10bc-3812-41d9-bb09-eaa5d283311a\") " pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.842308 master-0 kubenswrapper[30278]: I0318 18:02:09.841448 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c7c10bc-3812-41d9-bb09-eaa5d283311a-config\") pod \"controller-manager-6f66d74d5-vc6n8\" (UID: \"9c7c10bc-3812-41d9-bb09-eaa5d283311a\") " pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.842308 master-0 kubenswrapper[30278]: I0318 18:02:09.841516 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53883b3b-18ee-403e-b7c5-31699e457fd6-kube-api-access\") pod \"installer-5-master-0\" (UID: \"53883b3b-18ee-403e-b7c5-31699e457fd6\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 18:02:09.842308 master-0 kubenswrapper[30278]: I0318 18:02:09.841582 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c7c10bc-3812-41d9-bb09-eaa5d283311a-serving-cert\") pod \"controller-manager-6f66d74d5-vc6n8\" (UID: \"9c7c10bc-3812-41d9-bb09-eaa5d283311a\") " pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.842308 master-0 kubenswrapper[30278]: I0318 18:02:09.841619 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53883b3b-18ee-403e-b7c5-31699e457fd6-var-lock\") pod \"installer-5-master-0\" (UID: \"53883b3b-18ee-403e-b7c5-31699e457fd6\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 18:02:09.842308 master-0 kubenswrapper[30278]: I0318 18:02:09.841651 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss49r\" (UniqueName: \"kubernetes.io/projected/9c7c10bc-3812-41d9-bb09-eaa5d283311a-kube-api-access-ss49r\") pod \"controller-manager-6f66d74d5-vc6n8\" (UID: \"9c7c10bc-3812-41d9-bb09-eaa5d283311a\") " 
pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.842308 master-0 kubenswrapper[30278]: I0318 18:02:09.841781 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c7c10bc-3812-41d9-bb09-eaa5d283311a-client-ca\") pod \"controller-manager-6f66d74d5-vc6n8\" (UID: \"9c7c10bc-3812-41d9-bb09-eaa5d283311a\") " pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.842308 master-0 kubenswrapper[30278]: I0318 18:02:09.842053 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53883b3b-18ee-403e-b7c5-31699e457fd6-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"53883b3b-18ee-403e-b7c5-31699e457fd6\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 18:02:09.842308 master-0 kubenswrapper[30278]: I0318 18:02:09.842247 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c7c10bc-3812-41d9-bb09-eaa5d283311a-proxy-ca-bundles\") pod \"controller-manager-6f66d74d5-vc6n8\" (UID: \"9c7c10bc-3812-41d9-bb09-eaa5d283311a\") " pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.842915 master-0 kubenswrapper[30278]: I0318 18:02:09.842864 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c7c10bc-3812-41d9-bb09-eaa5d283311a-config\") pod \"controller-manager-6f66d74d5-vc6n8\" (UID: \"9c7c10bc-3812-41d9-bb09-eaa5d283311a\") " pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.845326 master-0 kubenswrapper[30278]: I0318 18:02:09.843075 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/9c7c10bc-3812-41d9-bb09-eaa5d283311a-client-ca\") pod \"controller-manager-6f66d74d5-vc6n8\" (UID: \"9c7c10bc-3812-41d9-bb09-eaa5d283311a\") " pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.845326 master-0 kubenswrapper[30278]: I0318 18:02:09.844011 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c7c10bc-3812-41d9-bb09-eaa5d283311a-proxy-ca-bundles\") pod \"controller-manager-6f66d74d5-vc6n8\" (UID: \"9c7c10bc-3812-41d9-bb09-eaa5d283311a\") " pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.845632 master-0 kubenswrapper[30278]: I0318 18:02:09.845516 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c7c10bc-3812-41d9-bb09-eaa5d283311a-serving-cert\") pod \"controller-manager-6f66d74d5-vc6n8\" (UID: \"9c7c10bc-3812-41d9-bb09-eaa5d283311a\") " pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.860576 master-0 kubenswrapper[30278]: I0318 18:02:09.860438 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss49r\" (UniqueName: \"kubernetes.io/projected/9c7c10bc-3812-41d9-bb09-eaa5d283311a-kube-api-access-ss49r\") pod \"controller-manager-6f66d74d5-vc6n8\" (UID: \"9c7c10bc-3812-41d9-bb09-eaa5d283311a\") " pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:09.943593 master-0 kubenswrapper[30278]: I0318 18:02:09.943520 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53883b3b-18ee-403e-b7c5-31699e457fd6-var-lock\") pod \"installer-5-master-0\" (UID: \"53883b3b-18ee-403e-b7c5-31699e457fd6\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 18:02:09.943964 master-0 kubenswrapper[30278]: I0318 18:02:09.943617 30278 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53883b3b-18ee-403e-b7c5-31699e457fd6-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"53883b3b-18ee-403e-b7c5-31699e457fd6\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 18:02:09.943964 master-0 kubenswrapper[30278]: I0318 18:02:09.943718 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53883b3b-18ee-403e-b7c5-31699e457fd6-var-lock\") pod \"installer-5-master-0\" (UID: \"53883b3b-18ee-403e-b7c5-31699e457fd6\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 18:02:09.944073 master-0 kubenswrapper[30278]: I0318 18:02:09.943978 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53883b3b-18ee-403e-b7c5-31699e457fd6-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"53883b3b-18ee-403e-b7c5-31699e457fd6\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 18:02:09.944358 master-0 kubenswrapper[30278]: I0318 18:02:09.944310 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53883b3b-18ee-403e-b7c5-31699e457fd6-kube-api-access\") pod \"installer-5-master-0\" (UID: \"53883b3b-18ee-403e-b7c5-31699e457fd6\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 18:02:09.965171 master-0 kubenswrapper[30278]: I0318 18:02:09.965108 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53883b3b-18ee-403e-b7c5-31699e457fd6-kube-api-access\") pod \"installer-5-master-0\" (UID: \"53883b3b-18ee-403e-b7c5-31699e457fd6\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 18:02:10.001457 master-0 kubenswrapper[30278]: I0318 18:02:10.001340 30278 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:10.033079 master-0 kubenswrapper[30278]: I0318 18:02:10.032991 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 18:02:10.351920 master-0 kubenswrapper[30278]: I0318 18:02:10.351858 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 18:02:10.363411 master-0 kubenswrapper[30278]: W0318 18:02:10.363138 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod53883b3b_18ee_403e_b7c5_31699e457fd6.slice/crio-9c08464a43223b17491543fb19aead99d18a74871d4d569d5ebce2b2b30141cd WatchSource:0}: Error finding container 9c08464a43223b17491543fb19aead99d18a74871d4d569d5ebce2b2b30141cd: Status 404 returned error can't find the container with id 9c08464a43223b17491543fb19aead99d18a74871d4d569d5ebce2b2b30141cd Mar 18 18:02:10.452312 master-0 kubenswrapper[30278]: I0318 18:02:10.452226 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f66d74d5-vc6n8"] Mar 18 18:02:10.461362 master-0 kubenswrapper[30278]: W0318 18:02:10.461299 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c7c10bc_3812_41d9_bb09_eaa5d283311a.slice/crio-bb1e244095580505bdcba0e60867bb046df4cb6299fcd3e750f6077b51dd7ea5 WatchSource:0}: Error finding container bb1e244095580505bdcba0e60867bb046df4cb6299fcd3e750f6077b51dd7ea5: Status 404 returned error can't find the container with id bb1e244095580505bdcba0e60867bb046df4cb6299fcd3e750f6077b51dd7ea5 Mar 18 18:02:10.571813 master-0 kubenswrapper[30278]: I0318 18:02:10.571736 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" 
event={"ID":"53883b3b-18ee-403e-b7c5-31699e457fd6","Type":"ContainerStarted","Data":"9c08464a43223b17491543fb19aead99d18a74871d4d569d5ebce2b2b30141cd"} Mar 18 18:02:10.573084 master-0 kubenswrapper[30278]: I0318 18:02:10.573033 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" event={"ID":"9c7c10bc-3812-41d9-bb09-eaa5d283311a","Type":"ContainerStarted","Data":"bb1e244095580505bdcba0e60867bb046df4cb6299fcd3e750f6077b51dd7ea5"} Mar 18 18:02:11.067751 master-0 kubenswrapper[30278]: I0318 18:02:11.067527 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1db0a246-ca43-4e7c-b09e-e80218ae99b1" path="/var/lib/kubelet/pods/1db0a246-ca43-4e7c-b09e-e80218ae99b1/volumes" Mar 18 18:02:11.068504 master-0 kubenswrapper[30278]: I0318 18:02:11.068206 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="253ec853-f637-4aa4-8e8e-eb655dfccccb" path="/var/lib/kubelet/pods/253ec853-f637-4aa4-8e8e-eb655dfccccb/volumes" Mar 18 18:02:11.586198 master-0 kubenswrapper[30278]: I0318 18:02:11.586138 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" event={"ID":"9c7c10bc-3812-41d9-bb09-eaa5d283311a","Type":"ContainerStarted","Data":"1d89aafc642869869419b839f33350833bfd6cd5d4a247e62ef32038dce7daac"} Mar 18 18:02:11.588410 master-0 kubenswrapper[30278]: I0318 18:02:11.588387 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:11.588952 master-0 kubenswrapper[30278]: I0318 18:02:11.588867 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"53883b3b-18ee-403e-b7c5-31699e457fd6","Type":"ContainerStarted","Data":"5a5149843822ce8634404485bccbfc70d9742202218e5cd853ebcefd2186d40e"} Mar 18 18:02:11.592618 master-0 kubenswrapper[30278]: I0318 
18:02:11.592527 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" Mar 18 18:02:11.628561 master-0 kubenswrapper[30278]: I0318 18:02:11.628432 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f66d74d5-vc6n8" podStartSLOduration=4.62839243 podStartE2EDuration="4.62839243s" podCreationTimestamp="2026-03-18 18:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:02:11.623554371 +0000 UTC m=+100.790739006" watchObservedRunningTime="2026-03-18 18:02:11.62839243 +0000 UTC m=+100.795577065" Mar 18 18:02:11.647226 master-0 kubenswrapper[30278]: I0318 18:02:11.647115 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=2.647072398 podStartE2EDuration="2.647072398s" podCreationTimestamp="2026-03-18 18:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:02:11.641203412 +0000 UTC m=+100.808388017" watchObservedRunningTime="2026-03-18 18:02:11.647072398 +0000 UTC m=+100.814257043" Mar 18 18:02:11.665351 master-0 kubenswrapper[30278]: I0318 18:02:11.665259 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm"] Mar 18 18:02:11.670296 master-0 kubenswrapper[30278]: I0318 18:02:11.666969 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:11.675452 master-0 kubenswrapper[30278]: I0318 18:02:11.675392 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-82cs2" Mar 18 18:02:11.679733 master-0 kubenswrapper[30278]: I0318 18:02:11.679673 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 18:02:11.680053 master-0 kubenswrapper[30278]: I0318 18:02:11.680001 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 18:02:11.680313 master-0 kubenswrapper[30278]: I0318 18:02:11.680300 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 18:02:11.680698 master-0 kubenswrapper[30278]: I0318 18:02:11.679827 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 18:02:11.681048 master-0 kubenswrapper[30278]: I0318 18:02:11.681023 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 18:02:11.697716 master-0 kubenswrapper[30278]: I0318 18:02:11.697647 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm"] Mar 18 18:02:11.790372 master-0 kubenswrapper[30278]: I0318 18:02:11.786434 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6d2edf0-09ab-48e4-8873-05caf248134a-client-ca\") pod \"route-controller-manager-6dd4765df6-9c4vm\" (UID: \"a6d2edf0-09ab-48e4-8873-05caf248134a\") " 
pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:11.790372 master-0 kubenswrapper[30278]: I0318 18:02:11.786524 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6d2edf0-09ab-48e4-8873-05caf248134a-serving-cert\") pod \"route-controller-manager-6dd4765df6-9c4vm\" (UID: \"a6d2edf0-09ab-48e4-8873-05caf248134a\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:11.790372 master-0 kubenswrapper[30278]: I0318 18:02:11.786568 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5qxc\" (UniqueName: \"kubernetes.io/projected/a6d2edf0-09ab-48e4-8873-05caf248134a-kube-api-access-l5qxc\") pod \"route-controller-manager-6dd4765df6-9c4vm\" (UID: \"a6d2edf0-09ab-48e4-8873-05caf248134a\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:11.790372 master-0 kubenswrapper[30278]: I0318 18:02:11.786661 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d2edf0-09ab-48e4-8873-05caf248134a-config\") pod \"route-controller-manager-6dd4765df6-9c4vm\" (UID: \"a6d2edf0-09ab-48e4-8873-05caf248134a\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:11.889497 master-0 kubenswrapper[30278]: I0318 18:02:11.889301 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6d2edf0-09ab-48e4-8873-05caf248134a-client-ca\") pod \"route-controller-manager-6dd4765df6-9c4vm\" (UID: \"a6d2edf0-09ab-48e4-8873-05caf248134a\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:11.889497 master-0 kubenswrapper[30278]: I0318 
18:02:11.889407 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6d2edf0-09ab-48e4-8873-05caf248134a-serving-cert\") pod \"route-controller-manager-6dd4765df6-9c4vm\" (UID: \"a6d2edf0-09ab-48e4-8873-05caf248134a\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:11.889956 master-0 kubenswrapper[30278]: I0318 18:02:11.889915 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5qxc\" (UniqueName: \"kubernetes.io/projected/a6d2edf0-09ab-48e4-8873-05caf248134a-kube-api-access-l5qxc\") pod \"route-controller-manager-6dd4765df6-9c4vm\" (UID: \"a6d2edf0-09ab-48e4-8873-05caf248134a\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:11.890379 master-0 kubenswrapper[30278]: I0318 18:02:11.890357 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d2edf0-09ab-48e4-8873-05caf248134a-config\") pod \"route-controller-manager-6dd4765df6-9c4vm\" (UID: \"a6d2edf0-09ab-48e4-8873-05caf248134a\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:11.890873 master-0 kubenswrapper[30278]: I0318 18:02:11.890802 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a6d2edf0-09ab-48e4-8873-05caf248134a-client-ca\") pod \"route-controller-manager-6dd4765df6-9c4vm\" (UID: \"a6d2edf0-09ab-48e4-8873-05caf248134a\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:11.891866 master-0 kubenswrapper[30278]: I0318 18:02:11.891812 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6d2edf0-09ab-48e4-8873-05caf248134a-config\") pod 
\"route-controller-manager-6dd4765df6-9c4vm\" (UID: \"a6d2edf0-09ab-48e4-8873-05caf248134a\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:11.897263 master-0 kubenswrapper[30278]: I0318 18:02:11.897126 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6d2edf0-09ab-48e4-8873-05caf248134a-serving-cert\") pod \"route-controller-manager-6dd4765df6-9c4vm\" (UID: \"a6d2edf0-09ab-48e4-8873-05caf248134a\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:11.914717 master-0 kubenswrapper[30278]: I0318 18:02:11.914644 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5qxc\" (UniqueName: \"kubernetes.io/projected/a6d2edf0-09ab-48e4-8873-05caf248134a-kube-api-access-l5qxc\") pod \"route-controller-manager-6dd4765df6-9c4vm\" (UID: \"a6d2edf0-09ab-48e4-8873-05caf248134a\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:12.003690 master-0 kubenswrapper[30278]: I0318 18:02:12.003596 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:12.464373 master-0 kubenswrapper[30278]: I0318 18:02:12.464244 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm"] Mar 18 18:02:12.473672 master-0 kubenswrapper[30278]: W0318 18:02:12.473606 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6d2edf0_09ab_48e4_8873_05caf248134a.slice/crio-a42ce80dad8bd4add1e66520f97bc813dfaa644e3a6ecf7b49ffd272ced4ef12 WatchSource:0}: Error finding container a42ce80dad8bd4add1e66520f97bc813dfaa644e3a6ecf7b49ffd272ced4ef12: Status 404 returned error can't find the container with id a42ce80dad8bd4add1e66520f97bc813dfaa644e3a6ecf7b49ffd272ced4ef12 Mar 18 18:02:12.606222 master-0 kubenswrapper[30278]: I0318 18:02:12.606149 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" event={"ID":"a6d2edf0-09ab-48e4-8873-05caf248134a","Type":"ContainerStarted","Data":"a42ce80dad8bd4add1e66520f97bc813dfaa644e3a6ecf7b49ffd272ced4ef12"} Mar 18 18:02:13.622182 master-0 kubenswrapper[30278]: I0318 18:02:13.621649 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" event={"ID":"a6d2edf0-09ab-48e4-8873-05caf248134a","Type":"ContainerStarted","Data":"70a2ce041fee5662fa7a6f8a7d39e48351df424c6905f85bb10c6b72680ea54c"} Mar 18 18:02:13.622182 master-0 kubenswrapper[30278]: I0318 18:02:13.622046 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:13.634359 master-0 kubenswrapper[30278]: I0318 18:02:13.634299 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" Mar 18 18:02:13.661727 master-0 kubenswrapper[30278]: I0318 18:02:13.661581 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dd4765df6-9c4vm" podStartSLOduration=6.661554457 podStartE2EDuration="6.661554457s" podCreationTimestamp="2026-03-18 18:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:02:13.65602076 +0000 UTC m=+102.823205375" watchObservedRunningTime="2026-03-18 18:02:13.661554457 +0000 UTC m=+102.828739072" Mar 18 18:02:20.271457 master-0 kubenswrapper[30278]: I0318 18:02:20.271397 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_20865801-ac9a-4c2d-821e-126a9b463232/installer/0.log" Mar 18 18:02:20.272222 master-0 kubenswrapper[30278]: I0318 18:02:20.271507 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 18:02:20.340578 master-0 kubenswrapper[30278]: I0318 18:02:20.340507 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20865801-ac9a-4c2d-821e-126a9b463232-var-lock\") pod \"20865801-ac9a-4c2d-821e-126a9b463232\" (UID: \"20865801-ac9a-4c2d-821e-126a9b463232\") " Mar 18 18:02:20.340578 master-0 kubenswrapper[30278]: I0318 18:02:20.340591 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20865801-ac9a-4c2d-821e-126a9b463232-kubelet-dir\") pod \"20865801-ac9a-4c2d-821e-126a9b463232\" (UID: \"20865801-ac9a-4c2d-821e-126a9b463232\") " Mar 18 18:02:20.340906 master-0 kubenswrapper[30278]: I0318 18:02:20.340648 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20865801-ac9a-4c2d-821e-126a9b463232-kube-api-access\") pod \"20865801-ac9a-4c2d-821e-126a9b463232\" (UID: \"20865801-ac9a-4c2d-821e-126a9b463232\") " Mar 18 18:02:20.341575 master-0 kubenswrapper[30278]: I0318 18:02:20.341403 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20865801-ac9a-4c2d-821e-126a9b463232-var-lock" (OuterVolumeSpecName: "var-lock") pod "20865801-ac9a-4c2d-821e-126a9b463232" (UID: "20865801-ac9a-4c2d-821e-126a9b463232"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:02:20.341575 master-0 kubenswrapper[30278]: I0318 18:02:20.341541 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20865801-ac9a-4c2d-821e-126a9b463232-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "20865801-ac9a-4c2d-821e-126a9b463232" (UID: "20865801-ac9a-4c2d-821e-126a9b463232"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:02:20.344576 master-0 kubenswrapper[30278]: I0318 18:02:20.344540 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20865801-ac9a-4c2d-821e-126a9b463232-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "20865801-ac9a-4c2d-821e-126a9b463232" (UID: "20865801-ac9a-4c2d-821e-126a9b463232"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:02:20.442126 master-0 kubenswrapper[30278]: I0318 18:02:20.442055 30278 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/20865801-ac9a-4c2d-821e-126a9b463232-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 18:02:20.442126 master-0 kubenswrapper[30278]: I0318 18:02:20.442098 30278 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/20865801-ac9a-4c2d-821e-126a9b463232-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:02:20.442126 master-0 kubenswrapper[30278]: I0318 18:02:20.442116 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20865801-ac9a-4c2d-821e-126a9b463232-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 18:02:20.700742 master-0 kubenswrapper[30278]: I0318 18:02:20.700656 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_20865801-ac9a-4c2d-821e-126a9b463232/installer/0.log" Mar 18 18:02:20.701061 master-0 kubenswrapper[30278]: I0318 18:02:20.700774 30278 generic.go:334] "Generic (PLEG): container finished" podID="20865801-ac9a-4c2d-821e-126a9b463232" containerID="e114bb0d6fc2cd2ca24760eb893d95f17619be2af3bc2c5eb2b03c45d6a37626" exitCode=1 Mar 18 18:02:20.701061 master-0 kubenswrapper[30278]: I0318 18:02:20.700836 30278 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"20865801-ac9a-4c2d-821e-126a9b463232","Type":"ContainerDied","Data":"e114bb0d6fc2cd2ca24760eb893d95f17619be2af3bc2c5eb2b03c45d6a37626"} Mar 18 18:02:20.701061 master-0 kubenswrapper[30278]: I0318 18:02:20.700897 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"20865801-ac9a-4c2d-821e-126a9b463232","Type":"ContainerDied","Data":"6f6e94445de6294550ec95dd8a8572234554f574d1cc00b5d3ac50fd3a83c4ce"} Mar 18 18:02:20.701061 master-0 kubenswrapper[30278]: I0318 18:02:20.700913 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 18:02:20.701489 master-0 kubenswrapper[30278]: I0318 18:02:20.700936 30278 scope.go:117] "RemoveContainer" containerID="e114bb0d6fc2cd2ca24760eb893d95f17619be2af3bc2c5eb2b03c45d6a37626" Mar 18 18:02:20.726605 master-0 kubenswrapper[30278]: I0318 18:02:20.726571 30278 scope.go:117] "RemoveContainer" containerID="e114bb0d6fc2cd2ca24760eb893d95f17619be2af3bc2c5eb2b03c45d6a37626" Mar 18 18:02:20.727453 master-0 kubenswrapper[30278]: E0318 18:02:20.727405 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e114bb0d6fc2cd2ca24760eb893d95f17619be2af3bc2c5eb2b03c45d6a37626\": container with ID starting with e114bb0d6fc2cd2ca24760eb893d95f17619be2af3bc2c5eb2b03c45d6a37626 not found: ID does not exist" containerID="e114bb0d6fc2cd2ca24760eb893d95f17619be2af3bc2c5eb2b03c45d6a37626" Mar 18 18:02:20.727569 master-0 kubenswrapper[30278]: I0318 18:02:20.727455 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e114bb0d6fc2cd2ca24760eb893d95f17619be2af3bc2c5eb2b03c45d6a37626"} err="failed to get container status \"e114bb0d6fc2cd2ca24760eb893d95f17619be2af3bc2c5eb2b03c45d6a37626\": rpc error: code = NotFound desc = 
could not find container \"e114bb0d6fc2cd2ca24760eb893d95f17619be2af3bc2c5eb2b03c45d6a37626\": container with ID starting with e114bb0d6fc2cd2ca24760eb893d95f17619be2af3bc2c5eb2b03c45d6a37626 not found: ID does not exist" Mar 18 18:02:20.758454 master-0 kubenswrapper[30278]: I0318 18:02:20.758379 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 18:02:20.778775 master-0 kubenswrapper[30278]: I0318 18:02:20.778709 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 18 18:02:21.079847 master-0 kubenswrapper[30278]: I0318 18:02:21.079749 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20865801-ac9a-4c2d-821e-126a9b463232" path="/var/lib/kubelet/pods/20865801-ac9a-4c2d-821e-126a9b463232/volumes" Mar 18 18:02:43.875055 master-0 kubenswrapper[30278]: I0318 18:02:43.874893 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 18:02:43.881105 master-0 kubenswrapper[30278]: I0318 18:02:43.881013 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 18:02:43.976736 master-0 kubenswrapper[30278]: I0318 18:02:43.976616 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") pod \"4285e80c-1ff9-42b3-9692-9f2ab6b61916\" (UID: 
\"4285e80c-1ff9-42b3-9692-9f2ab6b61916\") " Mar 18 18:02:43.981487 master-0 kubenswrapper[30278]: I0318 18:02:43.981409 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4285e80c-1ff9-42b3-9692-9f2ab6b61916" (UID: "4285e80c-1ff9-42b3-9692-9f2ab6b61916"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:02:44.078450 master-0 kubenswrapper[30278]: I0318 18:02:44.078365 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4285e80c-1ff9-42b3-9692-9f2ab6b61916-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 18:02:52.168554 master-0 kubenswrapper[30278]: E0318 18:02:52.168442 30278 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[alertmanager-trusted-ca-bundle], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/alertmanager-main-0" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" Mar 18 18:02:52.989182 master-0 kubenswrapper[30278]: I0318 18:02:52.989015 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:02:56.669605 master-0 kubenswrapper[30278]: E0318 18:02:56.669410 30278 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-trusted-ca-bundle], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-k8s-0" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" Mar 18 18:02:57.032743 master-0 kubenswrapper[30278]: I0318 18:02:57.032566 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:02:57.267405 master-0 kubenswrapper[30278]: I0318 18:02:57.267313 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:02:57.267810 master-0 kubenswrapper[30278]: E0318 18:02:57.267678 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle podName:89b1dfdf-4633-45af-8abd-931a76eca960 nodeName:}" failed. No retries permitted until 2026-03-18 18:04:59.267612494 +0000 UTC m=+268.434797129 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960") : configmap references non-existent config key: ca-bundle.crt Mar 18 18:02:58.545574 master-0 kubenswrapper[30278]: I0318 18:02:58.545481 30278 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 18:02:58.546406 master-0 kubenswrapper[30278]: I0318 18:02:58.545910 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" containerID="cri-o://f46251b34f5dd065effa27bf2880569964ed0fdb5993d9d07458a0798550963d" gracePeriod=15 Mar 18 18:02:58.546406 master-0 kubenswrapper[30278]: I0318 18:02:58.545951 30278 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6efe9870fc0250f38406137d08a7823a1c0babfd0a7944e64c6366445e97696d" gracePeriod=15 Mar 18 18:02:58.546406 master-0 kubenswrapper[30278]: I0318 18:02:58.546105 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" containerID="cri-o://76efa507770b7f3447613d0b53b9d5e24bb29472a2b50d81264800bfd86b0183" gracePeriod=15 Mar 18 18:02:58.546406 master-0 kubenswrapper[30278]: I0318 18:02:58.546200 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f5d16ced4f31a4b0a79823e1a1297685f8a42b35e0b0e91f27b749a47d590dcc" gracePeriod=15 Mar 18 18:02:58.546406 master-0 kubenswrapper[30278]: I0318 18:02:58.546384 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://532613e49ea9e30ac1511410a4da92cfd72901d45720ba547c93524371db0317" gracePeriod=15 Mar 18 18:02:58.546955 master-0 kubenswrapper[30278]: I0318 18:02:58.546928 30278 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 18:02:58.550295 master-0 kubenswrapper[30278]: E0318 18:02:58.550206 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 18:02:58.550295 master-0 kubenswrapper[30278]: I0318 18:02:58.550252 30278 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 18:02:58.550665 master-0 kubenswrapper[30278]: E0318 18:02:58.550311 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" Mar 18 18:02:58.550665 master-0 kubenswrapper[30278]: I0318 18:02:58.550322 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" Mar 18 18:02:58.550665 master-0 kubenswrapper[30278]: E0318 18:02:58.550333 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20865801-ac9a-4c2d-821e-126a9b463232" containerName="installer" Mar 18 18:02:58.550665 master-0 kubenswrapper[30278]: I0318 18:02:58.550341 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="20865801-ac9a-4c2d-821e-126a9b463232" containerName="installer" Mar 18 18:02:58.550665 master-0 kubenswrapper[30278]: E0318 18:02:58.550359 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 18:02:58.550665 master-0 kubenswrapper[30278]: I0318 18:02:58.550369 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 18:02:58.550665 master-0 kubenswrapper[30278]: E0318 18:02:58.550406 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" Mar 18 18:02:58.550665 master-0 kubenswrapper[30278]: I0318 18:02:58.550604 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" Mar 18 18:02:58.550665 master-0 kubenswrapper[30278]: E0318 18:02:58.550648 30278 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" Mar 18 18:02:58.550665 master-0 kubenswrapper[30278]: I0318 18:02:58.550659 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" Mar 18 18:02:58.551053 master-0 kubenswrapper[30278]: E0318 18:02:58.550684 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="setup" Mar 18 18:02:58.551053 master-0 kubenswrapper[30278]: I0318 18:02:58.550693 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="setup" Mar 18 18:02:58.552383 master-0 kubenswrapper[30278]: I0318 18:02:58.552293 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" Mar 18 18:02:58.552383 master-0 kubenswrapper[30278]: I0318 18:02:58.552335 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" Mar 18 18:02:58.552383 master-0 kubenswrapper[30278]: I0318 18:02:58.552383 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" Mar 18 18:02:58.552517 master-0 kubenswrapper[30278]: I0318 18:02:58.552410 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 18:02:58.552517 master-0 kubenswrapper[30278]: I0318 18:02:58.552432 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 18:02:58.552517 master-0 kubenswrapper[30278]: I0318 18:02:58.552452 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="20865801-ac9a-4c2d-821e-126a9b463232" 
containerName="installer" Mar 18 18:02:58.554421 master-0 kubenswrapper[30278]: E0318 18:02:58.554388 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 18:02:58.554421 master-0 kubenswrapper[30278]: I0318 18:02:58.554413 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 18:02:58.556004 master-0 kubenswrapper[30278]: I0318 18:02:58.555969 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" Mar 18 18:02:58.566691 master-0 kubenswrapper[30278]: I0318 18:02:58.565733 30278 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 18:02:58.582389 master-0 kubenswrapper[30278]: I0318 18:02:58.582249 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.590560 master-0 kubenswrapper[30278]: I0318 18:02:58.590394 30278 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" podUID="d5f502b117c7c8479f7f20848a50fec0" Mar 18 18:02:58.673705 master-0 kubenswrapper[30278]: E0318 18:02:58.673384 30278 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.695846 master-0 kubenswrapper[30278]: I0318 18:02:58.695784 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.696004 master-0 kubenswrapper[30278]: I0318 18:02:58.695859 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.696004 master-0 kubenswrapper[30278]: I0318 18:02:58.695930 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.696114 master-0 kubenswrapper[30278]: I0318 18:02:58.696018 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.696162 master-0 kubenswrapper[30278]: I0318 18:02:58.696132 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:02:58.696220 master-0 kubenswrapper[30278]: I0318 18:02:58.696175 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:02:58.696220 master-0 kubenswrapper[30278]: I0318 18:02:58.696215 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.696341 master-0 kubenswrapper[30278]: I0318 18:02:58.696287 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:02:58.798698 master-0 kubenswrapper[30278]: I0318 18:02:58.798489 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.799003 master-0 kubenswrapper[30278]: I0318 18:02:58.798729 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.799003 master-0 kubenswrapper[30278]: I0318 18:02:58.798844 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.799003 master-0 kubenswrapper[30278]: I0318 18:02:58.798888 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:02:58.799003 master-0 kubenswrapper[30278]: I0318 18:02:58.798907 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:02:58.799003 master-0 kubenswrapper[30278]: I0318 18:02:58.798941 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.799164 master-0 kubenswrapper[30278]: I0318 18:02:58.799049 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:02:58.799357 master-0 kubenswrapper[30278]: I0318 18:02:58.799266 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:02:58.799475 master-0 kubenswrapper[30278]: I0318 18:02:58.799370 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.799546 master-0 kubenswrapper[30278]: I0318 18:02:58.799394 30278 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:02:58.799609 master-0 kubenswrapper[30278]: I0318 18:02:58.799457 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.799684 master-0 kubenswrapper[30278]: I0318 18:02:58.799497 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.799834 master-0 kubenswrapper[30278]: I0318 18:02:58.799817 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.799984 master-0 kubenswrapper[30278]: I0318 18:02:58.799542 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 
18 18:02:58.800044 master-0 kubenswrapper[30278]: I0318 18:02:58.799492 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:02:58.800135 master-0 kubenswrapper[30278]: I0318 18:02:58.800012 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:58.975235 master-0 kubenswrapper[30278]: I0318 18:02:58.975164 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:02:59.004254 master-0 kubenswrapper[30278]: E0318 18:02:59.004013 30278 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189e01862bcf3691 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:85632c1cec8974aa874834e4cfff4c77,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 
18:02:59.002799761 +0000 UTC m=+148.169984356,LastTimestamp:2026-03-18 18:02:59.002799761 +0000 UTC m=+148.169984356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 18:02:59.057312 master-0 kubenswrapper[30278]: I0318 18:02:59.057202 30278 generic.go:334] "Generic (PLEG): container finished" podID="53883b3b-18ee-403e-b7c5-31699e457fd6" containerID="5a5149843822ce8634404485bccbfc70d9742202218e5cd853ebcefd2186d40e" exitCode=0 Mar 18 18:02:59.060542 master-0 kubenswrapper[30278]: I0318 18:02:59.060487 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log" Mar 18 18:02:59.062348 master-0 kubenswrapper[30278]: I0318 18:02:59.062295 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 18 18:02:59.063308 master-0 kubenswrapper[30278]: I0318 18:02:59.063251 30278 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="76efa507770b7f3447613d0b53b9d5e24bb29472a2b50d81264800bfd86b0183" exitCode=0 Mar 18 18:02:59.063308 master-0 kubenswrapper[30278]: I0318 18:02:59.063300 30278 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="6efe9870fc0250f38406137d08a7823a1c0babfd0a7944e64c6366445e97696d" exitCode=0 Mar 18 18:02:59.063308 master-0 kubenswrapper[30278]: I0318 18:02:59.063311 30278 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="532613e49ea9e30ac1511410a4da92cfd72901d45720ba547c93524371db0317" exitCode=0 Mar 18 18:02:59.063522 master-0 kubenswrapper[30278]: I0318 18:02:59.063321 30278 generic.go:334] "Generic (PLEG): container finished" 
podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="f5d16ced4f31a4b0a79823e1a1297685f8a42b35e0b0e91f27b749a47d590dcc" exitCode=2 Mar 18 18:02:59.069852 master-0 kubenswrapper[30278]: I0318 18:02:59.069771 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"85632c1cec8974aa874834e4cfff4c77","Type":"ContainerStarted","Data":"43e26496d8246352eeed6af1c25b06324e89cf55f5854069d3ca9810372810d1"} Mar 18 18:02:59.069852 master-0 kubenswrapper[30278]: I0318 18:02:59.069841 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"53883b3b-18ee-403e-b7c5-31699e457fd6","Type":"ContainerDied","Data":"5a5149843822ce8634404485bccbfc70d9742202218e5cd853ebcefd2186d40e"} Mar 18 18:02:59.070041 master-0 kubenswrapper[30278]: I0318 18:02:59.069911 30278 scope.go:117] "RemoveContainer" containerID="f887def1d9b97d72f25ddb564fd0ecbae06aba6b64de1338a239aa08a40c032f" Mar 18 18:02:59.071896 master-0 kubenswrapper[30278]: I0318 18:02:59.071818 30278 status_manager.go:851] "Failed to get status for pod" podUID="53883b3b-18ee-403e-b7c5-31699e457fd6" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:00.073383 master-0 kubenswrapper[30278]: I0318 18:03:00.073298 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 18 18:03:00.075869 master-0 kubenswrapper[30278]: I0318 18:03:00.075773 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
event={"ID":"85632c1cec8974aa874834e4cfff4c77","Type":"ContainerStarted","Data":"571724322afa2bc809e3cb3005bbcd0d5daf3ba93f55353d097e625cd7da48c6"} Mar 18 18:03:00.077084 master-0 kubenswrapper[30278]: E0318 18:03:00.077028 30278 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:03:00.077147 master-0 kubenswrapper[30278]: I0318 18:03:00.077035 30278 status_manager.go:851] "Failed to get status for pod" podUID="53883b3b-18ee-403e-b7c5-31699e457fd6" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:00.425710 master-0 kubenswrapper[30278]: I0318 18:03:00.425617 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 18:03:00.427079 master-0 kubenswrapper[30278]: I0318 18:03:00.427011 30278 status_manager.go:851] "Failed to get status for pod" podUID="53883b3b-18ee-403e-b7c5-31699e457fd6" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:00.436812 master-0 kubenswrapper[30278]: I0318 18:03:00.436762 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53883b3b-18ee-403e-b7c5-31699e457fd6-var-lock\") pod \"53883b3b-18ee-403e-b7c5-31699e457fd6\" (UID: \"53883b3b-18ee-403e-b7c5-31699e457fd6\") " Mar 18 18:03:00.436952 master-0 kubenswrapper[30278]: I0318 18:03:00.436834 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53883b3b-18ee-403e-b7c5-31699e457fd6-kubelet-dir\") pod \"53883b3b-18ee-403e-b7c5-31699e457fd6\" (UID: \"53883b3b-18ee-403e-b7c5-31699e457fd6\") " Mar 18 18:03:00.436952 master-0 kubenswrapper[30278]: I0318 18:03:00.436905 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53883b3b-18ee-403e-b7c5-31699e457fd6-var-lock" (OuterVolumeSpecName: "var-lock") pod "53883b3b-18ee-403e-b7c5-31699e457fd6" (UID: "53883b3b-18ee-403e-b7c5-31699e457fd6"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:03:00.437124 master-0 kubenswrapper[30278]: I0318 18:03:00.436988 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53883b3b-18ee-403e-b7c5-31699e457fd6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "53883b3b-18ee-403e-b7c5-31699e457fd6" (UID: "53883b3b-18ee-403e-b7c5-31699e457fd6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:03:00.437124 master-0 kubenswrapper[30278]: I0318 18:03:00.437040 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53883b3b-18ee-403e-b7c5-31699e457fd6-kube-api-access\") pod \"53883b3b-18ee-403e-b7c5-31699e457fd6\" (UID: \"53883b3b-18ee-403e-b7c5-31699e457fd6\") " Mar 18 18:03:00.438470 master-0 kubenswrapper[30278]: I0318 18:03:00.438429 30278 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/53883b3b-18ee-403e-b7c5-31699e457fd6-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 18:03:00.438571 master-0 kubenswrapper[30278]: I0318 18:03:00.438557 30278 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/53883b3b-18ee-403e-b7c5-31699e457fd6-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:03:00.440756 master-0 kubenswrapper[30278]: I0318 18:03:00.440718 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53883b3b-18ee-403e-b7c5-31699e457fd6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "53883b3b-18ee-403e-b7c5-31699e457fd6" (UID: "53883b3b-18ee-403e-b7c5-31699e457fd6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:03:00.540615 master-0 kubenswrapper[30278]: I0318 18:03:00.540526 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/53883b3b-18ee-403e-b7c5-31699e457fd6-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 18:03:00.951821 master-0 kubenswrapper[30278]: I0318 18:03:00.951761 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 18 18:03:00.953429 master-0 kubenswrapper[30278]: I0318 18:03:00.953405 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:03:00.954532 master-0 kubenswrapper[30278]: I0318 18:03:00.954481 30278 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:00.955189 master-0 kubenswrapper[30278]: I0318 18:03:00.955140 30278 status_manager.go:851] "Failed to get status for pod" podUID="53883b3b-18ee-403e-b7c5-31699e457fd6" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:01.061614 master-0 kubenswrapper[30278]: I0318 18:03:01.061532 30278 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:01.062462 master-0 kubenswrapper[30278]: I0318 18:03:01.062392 30278 status_manager.go:851] "Failed to get status for pod" podUID="53883b3b-18ee-403e-b7c5-31699e457fd6" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:01.087466 master-0 kubenswrapper[30278]: I0318 18:03:01.087403 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 18 18:03:01.088447 master-0 kubenswrapper[30278]: I0318 18:03:01.088110 30278 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="f46251b34f5dd065effa27bf2880569964ed0fdb5993d9d07458a0798550963d" exitCode=0 Mar 18 18:03:01.088447 master-0 kubenswrapper[30278]: I0318 18:03:01.088180 30278 scope.go:117] "RemoveContainer" containerID="76efa507770b7f3447613d0b53b9d5e24bb29472a2b50d81264800bfd86b0183" Mar 18 18:03:01.088447 master-0 kubenswrapper[30278]: I0318 18:03:01.088297 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:03:01.091607 master-0 kubenswrapper[30278]: I0318 18:03:01.091533 30278 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:01.091900 master-0 kubenswrapper[30278]: I0318 18:03:01.091848 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"53883b3b-18ee-403e-b7c5-31699e457fd6","Type":"ContainerDied","Data":"9c08464a43223b17491543fb19aead99d18a74871d4d569d5ebce2b2b30141cd"} Mar 18 18:03:01.091900 master-0 kubenswrapper[30278]: I0318 18:03:01.091898 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c08464a43223b17491543fb19aead99d18a74871d4d569d5ebce2b2b30141cd" Mar 18 18:03:01.092009 master-0 kubenswrapper[30278]: I0318 18:03:01.091940 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 18:03:01.092858 master-0 kubenswrapper[30278]: E0318 18:03:01.092800 30278 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:03:01.092941 master-0 kubenswrapper[30278]: I0318 18:03:01.092798 30278 status_manager.go:851] "Failed to get status for pod" podUID="53883b3b-18ee-403e-b7c5-31699e457fd6" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:01.095947 master-0 kubenswrapper[30278]: I0318 18:03:01.095805 30278 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:01.096603 master-0 kubenswrapper[30278]: I0318 18:03:01.096525 30278 status_manager.go:851] "Failed to get status for pod" podUID="53883b3b-18ee-403e-b7c5-31699e457fd6" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:01.099068 master-0 kubenswrapper[30278]: I0318 18:03:01.099017 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: 
\"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " Mar 18 18:03:01.099178 master-0 kubenswrapper[30278]: I0318 18:03:01.099144 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:03:01.099178 master-0 kubenswrapper[30278]: I0318 18:03:01.099158 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " Mar 18 18:03:01.099305 master-0 kubenswrapper[30278]: I0318 18:03:01.099218 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " Mar 18 18:03:01.099446 master-0 kubenswrapper[30278]: I0318 18:03:01.099252 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:03:01.099446 master-0 kubenswrapper[30278]: I0318 18:03:01.099394 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:03:01.099591 master-0 kubenswrapper[30278]: I0318 18:03:01.099548 30278 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:03:01.099591 master-0 kubenswrapper[30278]: I0318 18:03:01.099575 30278 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:03:01.099591 master-0 kubenswrapper[30278]: I0318 18:03:01.099586 30278 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:03:01.105262 master-0 kubenswrapper[30278]: I0318 18:03:01.105209 30278 scope.go:117] "RemoveContainer" containerID="6efe9870fc0250f38406137d08a7823a1c0babfd0a7944e64c6366445e97696d" Mar 18 18:03:01.122474 master-0 kubenswrapper[30278]: I0318 18:03:01.122364 30278 scope.go:117] "RemoveContainer" containerID="532613e49ea9e30ac1511410a4da92cfd72901d45720ba547c93524371db0317" Mar 18 18:03:01.141587 master-0 kubenswrapper[30278]: I0318 18:03:01.141528 30278 scope.go:117] "RemoveContainer" containerID="f5d16ced4f31a4b0a79823e1a1297685f8a42b35e0b0e91f27b749a47d590dcc" Mar 18 18:03:01.164601 master-0 kubenswrapper[30278]: I0318 18:03:01.164542 30278 scope.go:117] "RemoveContainer" containerID="f46251b34f5dd065effa27bf2880569964ed0fdb5993d9d07458a0798550963d" Mar 18 18:03:01.185588 master-0 kubenswrapper[30278]: I0318 18:03:01.185571 30278 scope.go:117] "RemoveContainer" containerID="4d1607d2ed493b59fcd5008ae3ef413f5adc41abf3185470578dd70106590036" Mar 18 18:03:01.202055 master-0 kubenswrapper[30278]: I0318 18:03:01.201906 30278 scope.go:117] "RemoveContainer" 
containerID="76efa507770b7f3447613d0b53b9d5e24bb29472a2b50d81264800bfd86b0183" Mar 18 18:03:01.202635 master-0 kubenswrapper[30278]: E0318 18:03:01.202582 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76efa507770b7f3447613d0b53b9d5e24bb29472a2b50d81264800bfd86b0183\": container with ID starting with 76efa507770b7f3447613d0b53b9d5e24bb29472a2b50d81264800bfd86b0183 not found: ID does not exist" containerID="76efa507770b7f3447613d0b53b9d5e24bb29472a2b50d81264800bfd86b0183" Mar 18 18:03:01.202715 master-0 kubenswrapper[30278]: I0318 18:03:01.202647 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76efa507770b7f3447613d0b53b9d5e24bb29472a2b50d81264800bfd86b0183"} err="failed to get container status \"76efa507770b7f3447613d0b53b9d5e24bb29472a2b50d81264800bfd86b0183\": rpc error: code = NotFound desc = could not find container \"76efa507770b7f3447613d0b53b9d5e24bb29472a2b50d81264800bfd86b0183\": container with ID starting with 76efa507770b7f3447613d0b53b9d5e24bb29472a2b50d81264800bfd86b0183 not found: ID does not exist" Mar 18 18:03:01.202715 master-0 kubenswrapper[30278]: I0318 18:03:01.202689 30278 scope.go:117] "RemoveContainer" containerID="6efe9870fc0250f38406137d08a7823a1c0babfd0a7944e64c6366445e97696d" Mar 18 18:03:01.203745 master-0 kubenswrapper[30278]: E0318 18:03:01.203703 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6efe9870fc0250f38406137d08a7823a1c0babfd0a7944e64c6366445e97696d\": container with ID starting with 6efe9870fc0250f38406137d08a7823a1c0babfd0a7944e64c6366445e97696d not found: ID does not exist" containerID="6efe9870fc0250f38406137d08a7823a1c0babfd0a7944e64c6366445e97696d" Mar 18 18:03:01.203840 master-0 kubenswrapper[30278]: I0318 18:03:01.203748 30278 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6efe9870fc0250f38406137d08a7823a1c0babfd0a7944e64c6366445e97696d"} err="failed to get container status \"6efe9870fc0250f38406137d08a7823a1c0babfd0a7944e64c6366445e97696d\": rpc error: code = NotFound desc = could not find container \"6efe9870fc0250f38406137d08a7823a1c0babfd0a7944e64c6366445e97696d\": container with ID starting with 6efe9870fc0250f38406137d08a7823a1c0babfd0a7944e64c6366445e97696d not found: ID does not exist" Mar 18 18:03:01.203840 master-0 kubenswrapper[30278]: I0318 18:03:01.203780 30278 scope.go:117] "RemoveContainer" containerID="532613e49ea9e30ac1511410a4da92cfd72901d45720ba547c93524371db0317" Mar 18 18:03:01.204266 master-0 kubenswrapper[30278]: E0318 18:03:01.204191 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"532613e49ea9e30ac1511410a4da92cfd72901d45720ba547c93524371db0317\": container with ID starting with 532613e49ea9e30ac1511410a4da92cfd72901d45720ba547c93524371db0317 not found: ID does not exist" containerID="532613e49ea9e30ac1511410a4da92cfd72901d45720ba547c93524371db0317" Mar 18 18:03:01.204376 master-0 kubenswrapper[30278]: I0318 18:03:01.204283 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"532613e49ea9e30ac1511410a4da92cfd72901d45720ba547c93524371db0317"} err="failed to get container status \"532613e49ea9e30ac1511410a4da92cfd72901d45720ba547c93524371db0317\": rpc error: code = NotFound desc = could not find container \"532613e49ea9e30ac1511410a4da92cfd72901d45720ba547c93524371db0317\": container with ID starting with 532613e49ea9e30ac1511410a4da92cfd72901d45720ba547c93524371db0317 not found: ID does not exist" Mar 18 18:03:01.204376 master-0 kubenswrapper[30278]: I0318 18:03:01.204327 30278 scope.go:117] "RemoveContainer" containerID="f5d16ced4f31a4b0a79823e1a1297685f8a42b35e0b0e91f27b749a47d590dcc" Mar 18 18:03:01.205010 master-0 kubenswrapper[30278]: E0318 
18:03:01.204970 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5d16ced4f31a4b0a79823e1a1297685f8a42b35e0b0e91f27b749a47d590dcc\": container with ID starting with f5d16ced4f31a4b0a79823e1a1297685f8a42b35e0b0e91f27b749a47d590dcc not found: ID does not exist" containerID="f5d16ced4f31a4b0a79823e1a1297685f8a42b35e0b0e91f27b749a47d590dcc" Mar 18 18:03:01.205010 master-0 kubenswrapper[30278]: I0318 18:03:01.205002 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5d16ced4f31a4b0a79823e1a1297685f8a42b35e0b0e91f27b749a47d590dcc"} err="failed to get container status \"f5d16ced4f31a4b0a79823e1a1297685f8a42b35e0b0e91f27b749a47d590dcc\": rpc error: code = NotFound desc = could not find container \"f5d16ced4f31a4b0a79823e1a1297685f8a42b35e0b0e91f27b749a47d590dcc\": container with ID starting with f5d16ced4f31a4b0a79823e1a1297685f8a42b35e0b0e91f27b749a47d590dcc not found: ID does not exist" Mar 18 18:03:01.205149 master-0 kubenswrapper[30278]: I0318 18:03:01.205020 30278 scope.go:117] "RemoveContainer" containerID="f46251b34f5dd065effa27bf2880569964ed0fdb5993d9d07458a0798550963d" Mar 18 18:03:01.205407 master-0 kubenswrapper[30278]: E0318 18:03:01.205359 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f46251b34f5dd065effa27bf2880569964ed0fdb5993d9d07458a0798550963d\": container with ID starting with f46251b34f5dd065effa27bf2880569964ed0fdb5993d9d07458a0798550963d not found: ID does not exist" containerID="f46251b34f5dd065effa27bf2880569964ed0fdb5993d9d07458a0798550963d" Mar 18 18:03:01.205482 master-0 kubenswrapper[30278]: I0318 18:03:01.205398 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f46251b34f5dd065effa27bf2880569964ed0fdb5993d9d07458a0798550963d"} err="failed to get container status 
\"f46251b34f5dd065effa27bf2880569964ed0fdb5993d9d07458a0798550963d\": rpc error: code = NotFound desc = could not find container \"f46251b34f5dd065effa27bf2880569964ed0fdb5993d9d07458a0798550963d\": container with ID starting with f46251b34f5dd065effa27bf2880569964ed0fdb5993d9d07458a0798550963d not found: ID does not exist" Mar 18 18:03:01.205482 master-0 kubenswrapper[30278]: I0318 18:03:01.205422 30278 scope.go:117] "RemoveContainer" containerID="4d1607d2ed493b59fcd5008ae3ef413f5adc41abf3185470578dd70106590036" Mar 18 18:03:01.205794 master-0 kubenswrapper[30278]: E0318 18:03:01.205756 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d1607d2ed493b59fcd5008ae3ef413f5adc41abf3185470578dd70106590036\": container with ID starting with 4d1607d2ed493b59fcd5008ae3ef413f5adc41abf3185470578dd70106590036 not found: ID does not exist" containerID="4d1607d2ed493b59fcd5008ae3ef413f5adc41abf3185470578dd70106590036" Mar 18 18:03:01.205867 master-0 kubenswrapper[30278]: I0318 18:03:01.205792 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d1607d2ed493b59fcd5008ae3ef413f5adc41abf3185470578dd70106590036"} err="failed to get container status \"4d1607d2ed493b59fcd5008ae3ef413f5adc41abf3185470578dd70106590036\": rpc error: code = NotFound desc = could not find container \"4d1607d2ed493b59fcd5008ae3ef413f5adc41abf3185470578dd70106590036\": container with ID starting with 4d1607d2ed493b59fcd5008ae3ef413f5adc41abf3185470578dd70106590036 not found: ID does not exist" Mar 18 18:03:01.406242 master-0 kubenswrapper[30278]: I0318 18:03:01.406147 30278 status_manager.go:851] "Failed to get status for pod" podUID="53883b3b-18ee-403e-b7c5-31699e457fd6" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: 
connection refused" Mar 18 18:03:01.407050 master-0 kubenswrapper[30278]: I0318 18:03:01.406983 30278 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:01.507900 master-0 kubenswrapper[30278]: I0318 18:03:01.507818 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:03:01.508341 master-0 kubenswrapper[30278]: E0318 18:03:01.508065 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle podName:5c6aeb7b-9c05-470e-b31f-f4154aadf170 nodeName:}" failed. No retries permitted until 2026-03-18 18:05:03.508036371 +0000 UTC m=+272.675220966 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170") : configmap references non-existent config key: ca-bundle.crt Mar 18 18:03:03.065083 master-0 kubenswrapper[30278]: I0318 18:03:03.064988 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" path="/var/lib/kubelet/pods/b45ea2ef1cf2bc9d1d994d6538ae0a64/volumes" Mar 18 18:03:04.256819 master-0 kubenswrapper[30278]: E0318 18:03:04.256602 30278 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189e01862bcf3691 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:85632c1cec8974aa874834e4cfff4c77,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 18:02:59.002799761 +0000 UTC m=+148.169984356,LastTimestamp:2026-03-18 18:02:59.002799761 +0000 UTC m=+148.169984356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 18:03:05.666385 master-0 kubenswrapper[30278]: E0318 18:03:05.666259 30278 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:05.667585 master-0 kubenswrapper[30278]: E0318 18:03:05.667518 30278 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:05.668564 master-0 kubenswrapper[30278]: E0318 18:03:05.668505 30278 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:05.669592 master-0 kubenswrapper[30278]: E0318 18:03:05.669512 30278 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:05.670433 master-0 kubenswrapper[30278]: E0318 18:03:05.670394 30278 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:05.670492 master-0 kubenswrapper[30278]: I0318 18:03:05.670430 30278 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 18:03:05.671296 master-0 kubenswrapper[30278]: E0318 18:03:05.671226 30278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: 
connection refused" interval="200ms" Mar 18 18:03:05.872820 master-0 kubenswrapper[30278]: E0318 18:03:05.872711 30278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 18 18:03:06.274936 master-0 kubenswrapper[30278]: E0318 18:03:06.274820 30278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 18 18:03:07.076569 master-0 kubenswrapper[30278]: E0318 18:03:07.076478 30278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 18 18:03:08.678512 master-0 kubenswrapper[30278]: E0318 18:03:08.678441 30278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 18 18:03:09.054295 master-0 kubenswrapper[30278]: I0318 18:03:09.054219 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:03:09.058263 master-0 kubenswrapper[30278]: I0318 18:03:09.056919 30278 status_manager.go:851] "Failed to get status for pod" podUID="53883b3b-18ee-403e-b7c5-31699e457fd6" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:09.088586 master-0 kubenswrapper[30278]: I0318 18:03:09.088512 30278 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="74e0ff7d-2c3f-4da7-9374-51719d565894" Mar 18 18:03:09.088586 master-0 kubenswrapper[30278]: I0318 18:03:09.088571 30278 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="74e0ff7d-2c3f-4da7-9374-51719d565894" Mar 18 18:03:09.090717 master-0 kubenswrapper[30278]: E0318 18:03:09.090431 30278 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:03:09.091248 master-0 kubenswrapper[30278]: I0318 18:03:09.091200 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:03:09.117429 master-0 kubenswrapper[30278]: W0318 18:03:09.117378 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5f502b117c7c8479f7f20848a50fec0.slice/crio-a15135065bc353d68b862aad81427b77245499f6784d87151dbe9131dea98082 WatchSource:0}: Error finding container a15135065bc353d68b862aad81427b77245499f6784d87151dbe9131dea98082: Status 404 returned error can't find the container with id a15135065bc353d68b862aad81427b77245499f6784d87151dbe9131dea98082 Mar 18 18:03:09.165542 master-0 kubenswrapper[30278]: I0318 18:03:09.165451 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"a15135065bc353d68b862aad81427b77245499f6784d87151dbe9131dea98082"} Mar 18 18:03:10.177617 master-0 kubenswrapper[30278]: I0318 18:03:10.177529 30278 generic.go:334] "Generic (PLEG): container finished" podID="d5f502b117c7c8479f7f20848a50fec0" containerID="5c4eacc10202cc9c4d052ca71f707b3a4e64e4a9f45cdba64d88aad16bdfb5af" exitCode=0 Mar 18 18:03:10.177617 master-0 kubenswrapper[30278]: I0318 18:03:10.177611 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerDied","Data":"5c4eacc10202cc9c4d052ca71f707b3a4e64e4a9f45cdba64d88aad16bdfb5af"} Mar 18 18:03:10.179438 master-0 kubenswrapper[30278]: I0318 18:03:10.178026 30278 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="74e0ff7d-2c3f-4da7-9374-51719d565894" Mar 18 18:03:10.179438 master-0 kubenswrapper[30278]: I0318 18:03:10.178050 30278 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
podUID="74e0ff7d-2c3f-4da7-9374-51719d565894" Mar 18 18:03:10.179438 master-0 kubenswrapper[30278]: E0318 18:03:10.178898 30278 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:03:10.179802 master-0 kubenswrapper[30278]: I0318 18:03:10.179574 30278 status_manager.go:851] "Failed to get status for pod" podUID="53883b3b-18ee-403e-b7c5-31699e457fd6" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:03:11.190961 master-0 kubenswrapper[30278]: I0318 18:03:11.190752 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"5b6a79a53410ef83a62b132c09123cd7abc8eea60c8f418d1e3cd0c240ba8daf"} Mar 18 18:03:11.190961 master-0 kubenswrapper[30278]: I0318 18:03:11.190846 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"140e2775d62d699fb155d42bd8fb1998bd76b938f609b8d9d008ddd344c42a4f"} Mar 18 18:03:11.190961 master-0 kubenswrapper[30278]: I0318 18:03:11.190861 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"7b427131e95fd979592c63eb34266da7d198f926abfcde0b8ee234a58853d777"} Mar 18 18:03:12.202858 master-0 kubenswrapper[30278]: I0318 18:03:12.202742 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"68fb85f8feb2a71df37393892fcf105fc63c56dbee37f985ec890269695bb794"} Mar 18 18:03:12.202858 master-0 kubenswrapper[30278]: I0318 18:03:12.202804 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"dce39b5eebaddceba8d0657c39677709811457bcc91100f5c8c39f9349dd78e3"} Mar 18 18:03:12.204611 master-0 kubenswrapper[30278]: I0318 18:03:12.203110 30278 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="74e0ff7d-2c3f-4da7-9374-51719d565894" Mar 18 18:03:12.204611 master-0 kubenswrapper[30278]: I0318 18:03:12.203129 30278 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="74e0ff7d-2c3f-4da7-9374-51719d565894" Mar 18 18:03:12.204611 master-0 kubenswrapper[30278]: I0318 18:03:12.203572 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:03:14.092406 master-0 kubenswrapper[30278]: I0318 18:03:14.092281 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:03:14.092406 master-0 kubenswrapper[30278]: I0318 18:03:14.092375 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:03:14.099331 master-0 kubenswrapper[30278]: I0318 18:03:14.099214 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:03:14.562461 master-0 kubenswrapper[30278]: I0318 18:03:14.562409 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/3.log" Mar 18 18:03:14.563359 master-0 kubenswrapper[30278]: I0318 18:03:14.563226 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/1.log" Mar 18 18:03:14.565306 master-0 kubenswrapper[30278]: I0318 18:03:14.565222 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager-cert-syncer/0.log" Mar 18 18:03:14.566160 master-0 kubenswrapper[30278]: I0318 18:03:14.566109 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/0.log" Mar 18 18:03:14.566234 master-0 kubenswrapper[30278]: I0318 18:03:14.566172 30278 generic.go:334] "Generic (PLEG): container finished" podID="3b3363934623637fdc1d37ff8b16880a" containerID="522b734ad03d049a879cfa7a8145e3b81a8d9061164b95712992e2f7f7b61d1d" exitCode=1 Mar 18 18:03:14.566306 master-0 kubenswrapper[30278]: I0318 18:03:14.566224 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerDied","Data":"522b734ad03d049a879cfa7a8145e3b81a8d9061164b95712992e2f7f7b61d1d"} Mar 18 18:03:14.566306 master-0 kubenswrapper[30278]: I0318 18:03:14.566301 30278 scope.go:117] "RemoveContainer" containerID="6007004024fecf1344918d5eba36f91c4644591c32375ce8f9e07fc9beb46c69" Mar 18 18:03:14.567114 master-0 kubenswrapper[30278]: I0318 18:03:14.567087 30278 scope.go:117] "RemoveContainer" containerID="522b734ad03d049a879cfa7a8145e3b81a8d9061164b95712992e2f7f7b61d1d" Mar 18 
18:03:15.575975 master-0 kubenswrapper[30278]: I0318 18:03:15.575910 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/3.log" Mar 18 18:03:15.577468 master-0 kubenswrapper[30278]: I0318 18:03:15.577435 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/1.log" Mar 18 18:03:15.578616 master-0 kubenswrapper[30278]: I0318 18:03:15.578576 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager-cert-syncer/0.log" Mar 18 18:03:15.578715 master-0 kubenswrapper[30278]: I0318 18:03:15.578630 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"3b3363934623637fdc1d37ff8b16880a","Type":"ContainerStarted","Data":"af3223d37de441a43e2bb9840f2c7d68ed9137889a1d1026233d1692393573ca"} Mar 18 18:03:17.396881 master-0 kubenswrapper[30278]: I0318 18:03:17.396213 30278 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:03:17.432291 master-0 kubenswrapper[30278]: I0318 18:03:17.432143 30278 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e0ff7d-2c3f-4da7-9374-51719d565894\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-18T18:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T18:03:10Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T18:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-18T18:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-cert-syncer kube-apiserver-cert-regeneration-controller kube-apiserver-insecure-readyz 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4eacc10202cc9c4d052ca71f707b3a4e64e4a9f45cdba64d88aad16bdfb5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c4eacc10202cc9c4d052ca71f707b3a4e64e4a9f45cdba64d88aad16bdfb5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-18T18:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-18T18:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Pending\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-master-0\": Pod \"kube-apiserver-master-0\" is invalid: metadata.uid: Invalid 
value: \"74e0ff7d-2c3f-4da7-9374-51719d565894\": field is immutable" Mar 18 18:03:17.487660 master-0 kubenswrapper[30278]: I0318 18:03:17.487544 30278 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="d5f502b117c7c8479f7f20848a50fec0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e" Mar 18 18:03:17.594136 master-0 kubenswrapper[30278]: I0318 18:03:17.594064 30278 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="74e0ff7d-2c3f-4da7-9374-51719d565894" Mar 18 18:03:17.594136 master-0 kubenswrapper[30278]: I0318 18:03:17.594112 30278 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="74e0ff7d-2c3f-4da7-9374-51719d565894" Mar 18 18:03:17.598214 master-0 kubenswrapper[30278]: I0318 18:03:17.598170 30278 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="d5f502b117c7c8479f7f20848a50fec0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e" Mar 18 18:03:17.606111 master-0 kubenswrapper[30278]: I0318 18:03:17.606037 30278 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-master-0" containerID="cri-o://7b427131e95fd979592c63eb34266da7d198f926abfcde0b8ee234a58853d777" Mar 18 18:03:17.606111 master-0 kubenswrapper[30278]: I0318 18:03:17.606083 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:03:18.604063 master-0 kubenswrapper[30278]: I0318 18:03:18.603987 30278 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="74e0ff7d-2c3f-4da7-9374-51719d565894" Mar 18 18:03:18.604063 master-0 kubenswrapper[30278]: I0318 18:03:18.604040 30278 mirror_client.go:130] "Deleting a mirror 
pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="74e0ff7d-2c3f-4da7-9374-51719d565894" Mar 18 18:03:18.607993 master-0 kubenswrapper[30278]: I0318 18:03:18.607891 30278 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="d5f502b117c7c8479f7f20848a50fec0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e" Mar 18 18:03:19.310984 master-0 kubenswrapper[30278]: I0318 18:03:19.310906 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:03:20.801513 master-0 kubenswrapper[30278]: I0318 18:03:20.801221 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:03:20.805141 master-0 kubenswrapper[30278]: I0318 18:03:20.805104 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:03:24.085139 master-0 kubenswrapper[30278]: I0318 18:03:24.085058 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 18:03:24.232973 master-0 kubenswrapper[30278]: I0318 18:03:24.232884 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 18:03:24.487898 master-0 kubenswrapper[30278]: I0318 18:03:24.487692 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 18 18:03:24.644883 master-0 kubenswrapper[30278]: I0318 18:03:24.644740 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 18:03:24.715406 master-0 kubenswrapper[30278]: I0318 18:03:24.715338 30278 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 18:03:24.790462 master-0 kubenswrapper[30278]: I0318 18:03:24.790287 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 18:03:24.998192 master-0 kubenswrapper[30278]: I0318 18:03:24.998086 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 18:03:25.004336 master-0 kubenswrapper[30278]: I0318 18:03:25.004231 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 18:03:25.179540 master-0 kubenswrapper[30278]: I0318 18:03:25.179443 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 18 18:03:25.205446 master-0 kubenswrapper[30278]: I0318 18:03:25.205341 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 18:03:26.003585 master-0 kubenswrapper[30278]: I0318 18:03:26.003497 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 18:03:26.590122 master-0 kubenswrapper[30278]: I0318 18:03:26.590007 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 18:03:26.671792 master-0 kubenswrapper[30278]: I0318 18:03:26.671718 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 18 18:03:26.814625 master-0 kubenswrapper[30278]: I0318 18:03:26.814546 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 18:03:26.819231 master-0 kubenswrapper[30278]: I0318 18:03:26.819185 30278 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 18 18:03:26.938904 master-0 kubenswrapper[30278]: I0318 18:03:26.938672 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-6clkh" Mar 18 18:03:27.089680 master-0 kubenswrapper[30278]: I0318 18:03:27.089588 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-cqcns" Mar 18 18:03:27.313975 master-0 kubenswrapper[30278]: I0318 18:03:27.313925 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 18:03:27.754652 master-0 kubenswrapper[30278]: I0318 18:03:27.754459 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 18:03:28.403211 master-0 kubenswrapper[30278]: I0318 18:03:28.403155 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 18:03:28.748865 master-0 kubenswrapper[30278]: I0318 18:03:28.748723 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-wftwz" Mar 18 18:03:28.756130 master-0 kubenswrapper[30278]: I0318 18:03:28.756081 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 18:03:29.264773 master-0 kubenswrapper[30278]: I0318 18:03:29.264693 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 18:03:29.317839 master-0 kubenswrapper[30278]: I0318 18:03:29.317757 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:03:29.436762 master-0 kubenswrapper[30278]: I0318 18:03:29.436678 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 18:03:29.479968 master-0 kubenswrapper[30278]: I0318 18:03:29.479891 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-2oo4hd4u5lrf1" Mar 18 18:03:29.497987 master-0 kubenswrapper[30278]: I0318 18:03:29.497842 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 18:03:29.517350 master-0 kubenswrapper[30278]: I0318 18:03:29.517171 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 18:03:29.707117 master-0 kubenswrapper[30278]: I0318 18:03:29.706971 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 18 18:03:29.749905 master-0 kubenswrapper[30278]: I0318 18:03:29.749812 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 18 18:03:29.829783 master-0 kubenswrapper[30278]: I0318 18:03:29.829676 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-ncdpm" Mar 18 18:03:29.878445 master-0 kubenswrapper[30278]: I0318 18:03:29.878371 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 18 18:03:30.032412 master-0 kubenswrapper[30278]: I0318 18:03:30.032333 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-bwq44" Mar 18 
18:03:30.131220 master-0 kubenswrapper[30278]: I0318 18:03:30.130916 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 18 18:03:30.214834 master-0 kubenswrapper[30278]: I0318 18:03:30.214749 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 18:03:30.226721 master-0 kubenswrapper[30278]: I0318 18:03:30.226640 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 18:03:30.378496 master-0 kubenswrapper[30278]: I0318 18:03:30.378406 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 18 18:03:30.433756 master-0 kubenswrapper[30278]: I0318 18:03:30.433594 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 18 18:03:30.440775 master-0 kubenswrapper[30278]: I0318 18:03:30.440723 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 18:03:30.684764 master-0 kubenswrapper[30278]: I0318 18:03:30.684598 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 18:03:30.889437 master-0 kubenswrapper[30278]: I0318 18:03:30.889326 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 18:03:31.349392 master-0 kubenswrapper[30278]: I0318 18:03:31.349319 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 18 18:03:31.432602 master-0 kubenswrapper[30278]: I0318 18:03:31.432543 30278 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 18 18:03:31.439558 master-0 kubenswrapper[30278]: I0318 18:03:31.439494 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-rl6dv" Mar 18 18:03:31.495386 master-0 kubenswrapper[30278]: I0318 18:03:31.495302 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 18:03:31.750118 master-0 kubenswrapper[30278]: I0318 18:03:31.750052 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 18:03:31.797374 master-0 kubenswrapper[30278]: I0318 18:03:31.797329 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 18:03:31.937412 master-0 kubenswrapper[30278]: I0318 18:03:31.937337 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 18 18:03:32.164481 master-0 kubenswrapper[30278]: I0318 18:03:32.164406 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 18:03:32.229541 master-0 kubenswrapper[30278]: I0318 18:03:32.229485 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 18 18:03:32.306814 master-0 kubenswrapper[30278]: I0318 18:03:32.306739 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 18:03:32.316478 master-0 kubenswrapper[30278]: I0318 18:03:32.316441 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 18:03:32.531502 master-0 kubenswrapper[30278]: I0318 18:03:32.531349 30278 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 18 18:03:32.566617 master-0 kubenswrapper[30278]: I0318 18:03:32.566561 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 18:03:32.577890 master-0 kubenswrapper[30278]: I0318 18:03:32.577828 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-tns2v" Mar 18 18:03:32.592345 master-0 kubenswrapper[30278]: I0318 18:03:32.592294 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 18:03:32.707327 master-0 kubenswrapper[30278]: I0318 18:03:32.707236 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 18 18:03:32.765648 master-0 kubenswrapper[30278]: I0318 18:03:32.765585 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-kzdnw" Mar 18 18:03:32.784322 master-0 kubenswrapper[30278]: I0318 18:03:32.784127 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 18:03:32.894124 master-0 kubenswrapper[30278]: I0318 18:03:32.894053 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 18:03:32.987367 master-0 kubenswrapper[30278]: I0318 18:03:32.987248 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-ticnjnaemlaa" Mar 18 18:03:33.073128 master-0 kubenswrapper[30278]: I0318 18:03:33.073051 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 18 18:03:33.109549 master-0 kubenswrapper[30278]: I0318 
18:03:33.109486 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-h8kg7" Mar 18 18:03:33.111440 master-0 kubenswrapper[30278]: I0318 18:03:33.111386 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 18 18:03:33.117716 master-0 kubenswrapper[30278]: I0318 18:03:33.117677 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 18:03:33.160921 master-0 kubenswrapper[30278]: I0318 18:03:33.160844 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 18:03:33.173344 master-0 kubenswrapper[30278]: I0318 18:03:33.173250 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2dddk" Mar 18 18:03:33.282154 master-0 kubenswrapper[30278]: I0318 18:03:33.282100 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 18:03:33.445908 master-0 kubenswrapper[30278]: I0318 18:03:33.445801 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 18 18:03:33.652010 master-0 kubenswrapper[30278]: I0318 18:03:33.651952 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 18:03:33.682846 master-0 kubenswrapper[30278]: I0318 18:03:33.682801 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 18 18:03:33.747931 master-0 kubenswrapper[30278]: I0318 18:03:33.747818 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 18 18:03:33.870404 master-0 
kubenswrapper[30278]: I0318 18:03:33.870310 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 18:03:33.936630 master-0 kubenswrapper[30278]: I0318 18:03:33.936570 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-4fc8r" Mar 18 18:03:33.968819 master-0 kubenswrapper[30278]: I0318 18:03:33.968748 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 18:03:34.002912 master-0 kubenswrapper[30278]: I0318 18:03:34.002746 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 18 18:03:34.003811 master-0 kubenswrapper[30278]: I0318 18:03:34.003621 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-wh6dt" Mar 18 18:03:34.121229 master-0 kubenswrapper[30278]: I0318 18:03:34.121155 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 18:03:34.160020 master-0 kubenswrapper[30278]: I0318 18:03:34.159956 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 18:03:34.184325 master-0 kubenswrapper[30278]: I0318 18:03:34.184236 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-zxhl4" Mar 18 18:03:34.246039 master-0 kubenswrapper[30278]: I0318 18:03:34.245943 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 18:03:34.415986 master-0 kubenswrapper[30278]: I0318 18:03:34.415907 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 18:03:34.433652 master-0 
kubenswrapper[30278]: I0318 18:03:34.433592 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 18:03:34.500300 master-0 kubenswrapper[30278]: I0318 18:03:34.500230 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 18:03:34.537777 master-0 kubenswrapper[30278]: I0318 18:03:34.537711 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 18:03:34.679141 master-0 kubenswrapper[30278]: I0318 18:03:34.678990 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 18:03:34.789010 master-0 kubenswrapper[30278]: I0318 18:03:34.788959 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 18 18:03:34.892893 master-0 kubenswrapper[30278]: I0318 18:03:34.892836 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 18:03:34.947264 master-0 kubenswrapper[30278]: I0318 18:03:34.947150 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 18:03:34.955556 master-0 kubenswrapper[30278]: I0318 18:03:34.955525 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 18:03:35.033460 master-0 kubenswrapper[30278]: I0318 18:03:35.033394 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 18:03:35.074703 master-0 kubenswrapper[30278]: I0318 18:03:35.074643 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 18:03:35.097542 master-0 kubenswrapper[30278]: I0318 
18:03:35.097489 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 18:03:35.126949 master-0 kubenswrapper[30278]: I0318 18:03:35.126884 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 18:03:35.237807 master-0 kubenswrapper[30278]: I0318 18:03:35.237651 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 18:03:35.295309 master-0 kubenswrapper[30278]: I0318 18:03:35.292437 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 18:03:35.419664 master-0 kubenswrapper[30278]: I0318 18:03:35.419500 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 18:03:35.461302 master-0 kubenswrapper[30278]: I0318 18:03:35.461163 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 18:03:35.473594 master-0 kubenswrapper[30278]: I0318 18:03:35.473445 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 18:03:35.477216 master-0 kubenswrapper[30278]: I0318 18:03:35.477168 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 18 18:03:35.607114 master-0 kubenswrapper[30278]: I0318 18:03:35.607056 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 18:03:35.664166 master-0 kubenswrapper[30278]: I0318 18:03:35.664098 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 18:03:35.672207 master-0 kubenswrapper[30278]: I0318 18:03:35.672062 
30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 18:03:35.719106 master-0 kubenswrapper[30278]: I0318 18:03:35.719041 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 18:03:35.766795 master-0 kubenswrapper[30278]: I0318 18:03:35.766562 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 18:03:35.769440 master-0 kubenswrapper[30278]: I0318 18:03:35.769381 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 18 18:03:35.812142 master-0 kubenswrapper[30278]: I0318 18:03:35.812030 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 18 18:03:35.882867 master-0 kubenswrapper[30278]: I0318 18:03:35.882695 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 18:03:35.958603 master-0 kubenswrapper[30278]: I0318 18:03:35.958523 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 18:03:36.017806 master-0 kubenswrapper[30278]: I0318 18:03:36.017760 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 18 18:03:36.032345 master-0 kubenswrapper[30278]: I0318 18:03:36.032267 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 18:03:36.075671 master-0 kubenswrapper[30278]: I0318 18:03:36.075629 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-2mk4r" Mar 18 
18:03:36.094617 master-0 kubenswrapper[30278]: I0318 18:03:36.094547 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 18:03:36.095003 master-0 kubenswrapper[30278]: I0318 18:03:36.094941 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 18:03:36.103747 master-0 kubenswrapper[30278]: I0318 18:03:36.103712 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 18:03:36.127484 master-0 kubenswrapper[30278]: I0318 18:03:36.127411 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 18:03:36.147136 master-0 kubenswrapper[30278]: I0318 18:03:36.146993 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 18 18:03:36.178351 master-0 kubenswrapper[30278]: I0318 18:03:36.178260 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-rqcfx" Mar 18 18:03:36.179913 master-0 kubenswrapper[30278]: I0318 18:03:36.179885 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-6fg48" Mar 18 18:03:36.278507 master-0 kubenswrapper[30278]: I0318 18:03:36.278462 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 18:03:36.367461 master-0 kubenswrapper[30278]: I0318 18:03:36.367400 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 18:03:36.400609 master-0 kubenswrapper[30278]: I0318 18:03:36.400499 30278 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Mar 18 18:03:36.402806 master-0 kubenswrapper[30278]: I0318 18:03:36.402770 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 18:03:36.423779 master-0 kubenswrapper[30278]: I0318 18:03:36.423389 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 18 18:03:36.612430 master-0 kubenswrapper[30278]: I0318 18:03:36.612356 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 18 18:03:36.726350 master-0 kubenswrapper[30278]: I0318 18:03:36.726180 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 18:03:36.755433 master-0 kubenswrapper[30278]: I0318 18:03:36.755359 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 18:03:36.759918 master-0 kubenswrapper[30278]: I0318 18:03:36.759853 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 18 18:03:36.811960 master-0 kubenswrapper[30278]: I0318 18:03:36.811901 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 18:03:36.861314 master-0 kubenswrapper[30278]: I0318 18:03:36.861251 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 18:03:36.872738 master-0 kubenswrapper[30278]: I0318 18:03:36.872712 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 18:03:36.953750 master-0 kubenswrapper[30278]: I0318 18:03:36.953703 30278 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 18 18:03:37.004901 master-0 kubenswrapper[30278]: I0318 18:03:37.004856 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 18:03:37.018748 master-0 kubenswrapper[30278]: I0318 18:03:37.018707 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 18:03:37.031976 master-0 kubenswrapper[30278]: I0318 18:03:37.031913 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 18 18:03:37.080512 master-0 kubenswrapper[30278]: I0318 18:03:37.080468 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 18:03:37.085100 master-0 kubenswrapper[30278]: I0318 18:03:37.085063 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 18 18:03:37.187530 master-0 kubenswrapper[30278]: I0318 18:03:37.187472 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 18:03:37.203339 master-0 kubenswrapper[30278]: I0318 18:03:37.203303 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 18:03:37.245186 master-0 kubenswrapper[30278]: I0318 18:03:37.245137 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 18:03:37.304959 master-0 kubenswrapper[30278]: I0318 18:03:37.304915 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 18:03:37.360175 master-0 kubenswrapper[30278]: I0318 18:03:37.360114 30278 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"signing-cabundle" Mar 18 18:03:37.375298 master-0 kubenswrapper[30278]: I0318 18:03:37.375241 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 18 18:03:37.506498 master-0 kubenswrapper[30278]: I0318 18:03:37.506437 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 18:03:37.548250 master-0 kubenswrapper[30278]: I0318 18:03:37.548193 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 18:03:37.581096 master-0 kubenswrapper[30278]: I0318 18:03:37.580988 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 18:03:37.610723 master-0 kubenswrapper[30278]: I0318 18:03:37.610679 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 18:03:37.694023 master-0 kubenswrapper[30278]: I0318 18:03:37.693915 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 18:03:37.745726 master-0 kubenswrapper[30278]: I0318 18:03:37.745308 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 18:03:37.760490 master-0 kubenswrapper[30278]: I0318 18:03:37.760447 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 18:03:37.785033 master-0 kubenswrapper[30278]: I0318 18:03:37.784963 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 18 18:03:37.810675 master-0 kubenswrapper[30278]: I0318 18:03:37.810500 30278 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 18 18:03:37.869356 master-0 kubenswrapper[30278]: I0318 18:03:37.867813 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-5g5z8" Mar 18 18:03:37.958263 master-0 kubenswrapper[30278]: I0318 18:03:37.958204 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 18:03:38.009980 master-0 kubenswrapper[30278]: I0318 18:03:38.009877 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 18:03:38.050515 master-0 kubenswrapper[30278]: I0318 18:03:38.048125 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-66rqjfmn9qiqc" Mar 18 18:03:38.085759 master-0 kubenswrapper[30278]: I0318 18:03:38.085694 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 18 18:03:38.094484 master-0 kubenswrapper[30278]: I0318 18:03:38.094437 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 18:03:38.168811 master-0 kubenswrapper[30278]: I0318 18:03:38.168654 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 18:03:38.200311 master-0 kubenswrapper[30278]: I0318 18:03:38.200177 30278 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 18:03:38.206448 master-0 kubenswrapper[30278]: I0318 18:03:38.206396 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 18:03:38.206448 master-0 kubenswrapper[30278]: I0318 18:03:38.206453 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 
18:03:38.212791 master-0 kubenswrapper[30278]: I0318 18:03:38.212717 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:03:38.229750 master-0 kubenswrapper[30278]: I0318 18:03:38.229559 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=21.229537056 podStartE2EDuration="21.229537056s" podCreationTimestamp="2026-03-18 18:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:03:38.227588523 +0000 UTC m=+187.394773188" watchObservedRunningTime="2026-03-18 18:03:38.229537056 +0000 UTC m=+187.396721671" Mar 18 18:03:38.254672 master-0 kubenswrapper[30278]: I0318 18:03:38.254554 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 18:03:38.265705 master-0 kubenswrapper[30278]: I0318 18:03:38.265638 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 18:03:38.290668 master-0 kubenswrapper[30278]: I0318 18:03:38.289369 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 18:03:38.301378 master-0 kubenswrapper[30278]: I0318 18:03:38.300919 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 18:03:38.314107 master-0 kubenswrapper[30278]: I0318 18:03:38.313708 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 18:03:38.380243 master-0 kubenswrapper[30278]: I0318 18:03:38.380174 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-kcjlz" Mar 18 18:03:38.497162 
master-0 kubenswrapper[30278]: I0318 18:03:38.497030 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 18:03:38.502887 master-0 kubenswrapper[30278]: I0318 18:03:38.502834 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-rgwwd" Mar 18 18:03:38.508054 master-0 kubenswrapper[30278]: I0318 18:03:38.508019 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 18:03:38.622040 master-0 kubenswrapper[30278]: I0318 18:03:38.621958 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 18:03:38.625882 master-0 kubenswrapper[30278]: I0318 18:03:38.625782 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 18:03:38.650725 master-0 kubenswrapper[30278]: I0318 18:03:38.650674 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 18:03:38.680400 master-0 kubenswrapper[30278]: I0318 18:03:38.679674 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 18:03:38.682005 master-0 kubenswrapper[30278]: I0318 18:03:38.681845 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 18:03:38.707087 master-0 kubenswrapper[30278]: I0318 18:03:38.707034 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 18 18:03:38.718080 master-0 kubenswrapper[30278]: I0318 18:03:38.718029 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 18 18:03:38.723619 
master-0 kubenswrapper[30278]: I0318 18:03:38.723566 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 18 18:03:38.751628 master-0 kubenswrapper[30278]: I0318 18:03:38.751330 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 18:03:38.783453 master-0 kubenswrapper[30278]: I0318 18:03:38.783313 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-clcfd" Mar 18 18:03:38.790901 master-0 kubenswrapper[30278]: I0318 18:03:38.790716 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 18:03:38.790901 master-0 kubenswrapper[30278]: I0318 18:03:38.790726 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 18:03:38.835090 master-0 kubenswrapper[30278]: I0318 18:03:38.834968 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 18 18:03:38.924316 master-0 kubenswrapper[30278]: I0318 18:03:38.924177 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 18:03:38.995093 master-0 kubenswrapper[30278]: I0318 18:03:38.995022 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 18:03:39.033870 master-0 kubenswrapper[30278]: I0318 18:03:39.033720 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 18 18:03:39.059312 master-0 kubenswrapper[30278]: I0318 18:03:39.059226 30278 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 18:03:39.079348 master-0 kubenswrapper[30278]: I0318 18:03:39.078189 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 18:03:39.143057 master-0 kubenswrapper[30278]: I0318 18:03:39.142984 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 18:03:39.204120 master-0 kubenswrapper[30278]: I0318 18:03:39.204047 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 18:03:39.277015 master-0 kubenswrapper[30278]: I0318 18:03:39.276919 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 18:03:39.282790 master-0 kubenswrapper[30278]: I0318 18:03:39.282715 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-ksrlj" Mar 18 18:03:39.785466 master-0 kubenswrapper[30278]: I0318 18:03:39.785389 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-pwxkh" Mar 18 18:03:39.803867 master-0 kubenswrapper[30278]: I0318 18:03:39.803772 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 18 18:03:39.810570 master-0 kubenswrapper[30278]: I0318 18:03:39.810493 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 18:03:39.830100 master-0 kubenswrapper[30278]: I0318 18:03:39.828108 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 18 18:03:39.976846 master-0 kubenswrapper[30278]: I0318 18:03:39.976792 30278 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"telemetry-config" Mar 18 18:03:40.118562 master-0 kubenswrapper[30278]: I0318 18:03:40.118465 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 18 18:03:40.118872 master-0 kubenswrapper[30278]: I0318 18:03:40.118760 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 18 18:03:40.149297 master-0 kubenswrapper[30278]: I0318 18:03:40.149190 30278 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 18:03:40.150564 master-0 kubenswrapper[30278]: I0318 18:03:40.149720 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor" containerID="cri-o://571724322afa2bc809e3cb3005bbcd0d5daf3ba93f55353d097e625cd7da48c6" gracePeriod=5 Mar 18 18:03:40.155199 master-0 kubenswrapper[30278]: I0318 18:03:40.155120 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-82cs2" Mar 18 18:03:40.163018 master-0 kubenswrapper[30278]: I0318 18:03:40.162940 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 18:03:40.205893 master-0 kubenswrapper[30278]: I0318 18:03:40.205814 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 18:03:40.206767 master-0 kubenswrapper[30278]: I0318 18:03:40.206714 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 18 18:03:40.315814 master-0 kubenswrapper[30278]: I0318 18:03:40.315716 30278 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 18 18:03:40.371221 master-0 kubenswrapper[30278]: I0318 18:03:40.371035 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 18:03:40.438069 master-0 kubenswrapper[30278]: I0318 18:03:40.438002 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 18:03:40.471855 master-0 kubenswrapper[30278]: I0318 18:03:40.471791 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 18:03:40.504732 master-0 kubenswrapper[30278]: I0318 18:03:40.504666 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 18 18:03:40.613692 master-0 kubenswrapper[30278]: I0318 18:03:40.613607 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-r9bww" Mar 18 18:03:40.638396 master-0 kubenswrapper[30278]: I0318 18:03:40.638087 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 18 18:03:40.677156 master-0 kubenswrapper[30278]: I0318 18:03:40.677074 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 18:03:40.734626 master-0 kubenswrapper[30278]: I0318 18:03:40.734556 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 18:03:40.862644 master-0 kubenswrapper[30278]: I0318 18:03:40.862522 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 18:03:40.879513 master-0 kubenswrapper[30278]: I0318 18:03:40.876410 30278 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-insights"/"service-ca-bundle" Mar 18 18:03:40.970301 master-0 kubenswrapper[30278]: I0318 18:03:40.970106 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 18 18:03:40.971001 master-0 kubenswrapper[30278]: I0318 18:03:40.970925 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 18 18:03:41.038100 master-0 kubenswrapper[30278]: I0318 18:03:41.037938 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 18 18:03:41.085744 master-0 kubenswrapper[30278]: I0318 18:03:41.085687 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 18:03:41.113083 master-0 kubenswrapper[30278]: I0318 18:03:41.113030 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 18 18:03:41.127181 master-0 kubenswrapper[30278]: I0318 18:03:41.127106 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 18 18:03:41.158403 master-0 kubenswrapper[30278]: I0318 18:03:41.158310 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 18 18:03:41.315509 master-0 kubenswrapper[30278]: I0318 18:03:41.315441 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 18:03:41.378534 master-0 kubenswrapper[30278]: I0318 18:03:41.378443 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 18 18:03:41.513116 master-0 kubenswrapper[30278]: I0318 18:03:41.511768 30278 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 18:03:41.528530 master-0 kubenswrapper[30278]: I0318 18:03:41.528474 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 18:03:41.621994 master-0 kubenswrapper[30278]: I0318 18:03:41.621793 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 18:03:41.651825 master-0 kubenswrapper[30278]: I0318 18:03:41.651733 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 18:03:41.672314 master-0 kubenswrapper[30278]: I0318 18:03:41.672238 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-4fdq4" Mar 18 18:03:41.692867 master-0 kubenswrapper[30278]: I0318 18:03:41.692794 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 18 18:03:41.750313 master-0 kubenswrapper[30278]: I0318 18:03:41.750206 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 18:03:41.765085 master-0 kubenswrapper[30278]: I0318 18:03:41.765006 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 18:03:41.860572 master-0 kubenswrapper[30278]: I0318 18:03:41.860529 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 18:03:41.880184 master-0 kubenswrapper[30278]: I0318 18:03:41.879893 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 18:03:41.894395 master-0 kubenswrapper[30278]: I0318 18:03:41.894333 30278 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 18 18:03:41.901371 master-0 kubenswrapper[30278]: I0318 18:03:41.901296 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kdvf8" Mar 18 18:03:41.905036 master-0 kubenswrapper[30278]: I0318 18:03:41.905005 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 18:03:41.975453 master-0 kubenswrapper[30278]: I0318 18:03:41.975361 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 18 18:03:42.008239 master-0 kubenswrapper[30278]: I0318 18:03:42.008169 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 18:03:42.039857 master-0 kubenswrapper[30278]: I0318 18:03:42.039780 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 18 18:03:42.066345 master-0 kubenswrapper[30278]: I0318 18:03:42.066237 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 18:03:42.185437 master-0 kubenswrapper[30278]: I0318 18:03:42.185297 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 18:03:42.212353 master-0 kubenswrapper[30278]: I0318 18:03:42.212253 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 18:03:42.218876 master-0 kubenswrapper[30278]: I0318 18:03:42.218833 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 18:03:42.230317 master-0 kubenswrapper[30278]: I0318 
18:03:42.230286 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 18:03:42.301898 master-0 kubenswrapper[30278]: I0318 18:03:42.301784 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 18 18:03:42.316476 master-0 kubenswrapper[30278]: I0318 18:03:42.316405 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 18:03:42.357785 master-0 kubenswrapper[30278]: I0318 18:03:42.356055 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 18:03:42.364430 master-0 kubenswrapper[30278]: I0318 18:03:42.364387 30278 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 18:03:42.387815 master-0 kubenswrapper[30278]: I0318 18:03:42.387706 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 18:03:42.405417 master-0 kubenswrapper[30278]: I0318 18:03:42.405267 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 18 18:03:42.574861 master-0 kubenswrapper[30278]: I0318 18:03:42.574789 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 18:03:42.708726 master-0 kubenswrapper[30278]: I0318 18:03:42.708660 30278 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 18 18:03:42.767721 master-0 kubenswrapper[30278]: I0318 18:03:42.767645 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 18 18:03:42.779262 master-0 kubenswrapper[30278]: I0318 18:03:42.779213 30278 
reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 18:03:42.792615 master-0 kubenswrapper[30278]: I0318 18:03:42.792565 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 18 18:03:42.895315 master-0 kubenswrapper[30278]: I0318 18:03:42.894951 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 18:03:42.914727 master-0 kubenswrapper[30278]: I0318 18:03:42.914670 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 18:03:42.934984 master-0 kubenswrapper[30278]: I0318 18:03:42.934920 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 18:03:42.979842 master-0 kubenswrapper[30278]: I0318 18:03:42.979757 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 18:03:43.055042 master-0 kubenswrapper[30278]: I0318 18:03:43.054981 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 18:03:43.057333 master-0 kubenswrapper[30278]: I0318 18:03:43.057261 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 18 18:03:43.062070 master-0 kubenswrapper[30278]: I0318 18:03:43.062021 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 18 18:03:43.071582 master-0 kubenswrapper[30278]: I0318 18:03:43.071532 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-bnhc4" Mar 18 18:03:43.084829 master-0 kubenswrapper[30278]: I0318 18:03:43.084775 30278 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 18:03:43.105479 master-0 kubenswrapper[30278]: I0318 18:03:43.105425 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 18 18:03:43.144038 master-0 kubenswrapper[30278]: I0318 18:03:43.143976 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 18:03:43.147267 master-0 kubenswrapper[30278]: I0318 18:03:43.147190 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 18:03:43.211364 master-0 kubenswrapper[30278]: I0318 18:03:43.211295 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 18:03:43.269016 master-0 kubenswrapper[30278]: I0318 18:03:43.268925 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 18:03:43.336765 master-0 kubenswrapper[30278]: I0318 18:03:43.336698 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 18:03:43.639710 master-0 kubenswrapper[30278]: I0318 18:03:43.639630 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 18:03:43.713096 master-0 kubenswrapper[30278]: I0318 18:03:43.713055 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 18 18:03:43.760218 master-0 kubenswrapper[30278]: I0318 18:03:43.760177 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 18 18:03:43.813529 master-0 kubenswrapper[30278]: I0318 18:03:43.813442 30278 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 18 18:03:43.854519 master-0 kubenswrapper[30278]: I0318 18:03:43.854463 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 18:03:43.915883 master-0 kubenswrapper[30278]: I0318 18:03:43.915706 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 18:03:43.985000 master-0 kubenswrapper[30278]: I0318 18:03:43.984933 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 18 18:03:43.994294 master-0 kubenswrapper[30278]: I0318 18:03:43.994197 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 18:03:44.074856 master-0 kubenswrapper[30278]: I0318 18:03:44.074799 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 18:03:44.083328 master-0 kubenswrapper[30278]: I0318 18:03:44.083262 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 18:03:44.262452 master-0 kubenswrapper[30278]: I0318 18:03:44.262265 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 18:03:44.468618 master-0 kubenswrapper[30278]: I0318 18:03:44.468539 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-npx6j" Mar 18 18:03:44.633084 master-0 kubenswrapper[30278]: I0318 18:03:44.633033 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 18:03:44.675909 master-0 kubenswrapper[30278]: I0318 
18:03:44.675846 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 18:03:44.824826 master-0 kubenswrapper[30278]: I0318 18:03:44.824717 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 18:03:44.862748 master-0 kubenswrapper[30278]: I0318 18:03:44.862684 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 18:03:44.936482 master-0 kubenswrapper[30278]: I0318 18:03:44.936306 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-22mk8" Mar 18 18:03:45.013750 master-0 kubenswrapper[30278]: I0318 18:03:45.013638 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 18:03:45.081208 master-0 kubenswrapper[30278]: I0318 18:03:45.081067 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-btlbk" Mar 18 18:03:45.559894 master-0 kubenswrapper[30278]: I0318 18:03:45.559819 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 18 18:03:45.605735 master-0 kubenswrapper[30278]: I0318 18:03:45.605641 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 18:03:45.748156 master-0 kubenswrapper[30278]: I0318 18:03:45.748098 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_85632c1cec8974aa874834e4cfff4c77/startup-monitor/0.log" Mar 18 18:03:45.748429 master-0 kubenswrapper[30278]: I0318 18:03:45.748179 30278 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:03:45.869437 master-0 kubenswrapper[30278]: I0318 18:03:45.869218 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " Mar 18 18:03:45.869437 master-0 kubenswrapper[30278]: I0318 18:03:45.869345 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " Mar 18 18:03:45.869437 master-0 kubenswrapper[30278]: I0318 18:03:45.869428 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " Mar 18 18:03:45.870030 master-0 kubenswrapper[30278]: I0318 18:03:45.869473 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:03:45.870030 master-0 kubenswrapper[30278]: I0318 18:03:45.869540 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " Mar 18 18:03:45.870030 master-0 kubenswrapper[30278]: I0318 18:03:45.869582 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock" (OuterVolumeSpecName: "var-lock") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:03:45.870030 master-0 kubenswrapper[30278]: I0318 18:03:45.869590 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " Mar 18 18:03:45.870030 master-0 kubenswrapper[30278]: I0318 18:03:45.869686 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log" (OuterVolumeSpecName: "var-log") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:03:45.870030 master-0 kubenswrapper[30278]: I0318 18:03:45.869751 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests" (OuterVolumeSpecName: "manifests") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:03:45.870617 master-0 kubenswrapper[30278]: I0318 18:03:45.870507 30278 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") on node \"master-0\" DevicePath \"\"" Mar 18 18:03:45.870617 master-0 kubenswrapper[30278]: I0318 18:03:45.870536 30278 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") on node \"master-0\" DevicePath \"\"" Mar 18 18:03:45.870617 master-0 kubenswrapper[30278]: I0318 18:03:45.870555 30278 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:03:45.870617 master-0 kubenswrapper[30278]: I0318 18:03:45.870571 30278 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 18:03:45.877154 master-0 kubenswrapper[30278]: I0318 18:03:45.876300 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:03:45.913598 master-0 kubenswrapper[30278]: I0318 18:03:45.913534 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_85632c1cec8974aa874834e4cfff4c77/startup-monitor/0.log" Mar 18 18:03:45.913836 master-0 kubenswrapper[30278]: I0318 18:03:45.913637 30278 generic.go:334] "Generic (PLEG): container finished" podID="85632c1cec8974aa874834e4cfff4c77" containerID="571724322afa2bc809e3cb3005bbcd0d5daf3ba93f55353d097e625cd7da48c6" exitCode=137 Mar 18 18:03:45.913836 master-0 kubenswrapper[30278]: I0318 18:03:45.913722 30278 scope.go:117] "RemoveContainer" containerID="571724322afa2bc809e3cb3005bbcd0d5daf3ba93f55353d097e625cd7da48c6" Mar 18 18:03:45.913836 master-0 kubenswrapper[30278]: I0318 18:03:45.913731 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:03:45.946023 master-0 kubenswrapper[30278]: I0318 18:03:45.945980 30278 scope.go:117] "RemoveContainer" containerID="571724322afa2bc809e3cb3005bbcd0d5daf3ba93f55353d097e625cd7da48c6" Mar 18 18:03:45.946703 master-0 kubenswrapper[30278]: E0318 18:03:45.946623 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"571724322afa2bc809e3cb3005bbcd0d5daf3ba93f55353d097e625cd7da48c6\": container with ID starting with 571724322afa2bc809e3cb3005bbcd0d5daf3ba93f55353d097e625cd7da48c6 not found: ID does not exist" containerID="571724322afa2bc809e3cb3005bbcd0d5daf3ba93f55353d097e625cd7da48c6" Mar 18 18:03:45.946980 master-0 kubenswrapper[30278]: I0318 18:03:45.946713 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"571724322afa2bc809e3cb3005bbcd0d5daf3ba93f55353d097e625cd7da48c6"} err="failed to get container status 
\"571724322afa2bc809e3cb3005bbcd0d5daf3ba93f55353d097e625cd7da48c6\": rpc error: code = NotFound desc = could not find container \"571724322afa2bc809e3cb3005bbcd0d5daf3ba93f55353d097e625cd7da48c6\": container with ID starting with 571724322afa2bc809e3cb3005bbcd0d5daf3ba93f55353d097e625cd7da48c6 not found: ID does not exist"
Mar 18 18:03:45.972685 master-0 kubenswrapper[30278]: I0318 18:03:45.972623 30278 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 18:03:46.208514 master-0 kubenswrapper[30278]: I0318 18:03:46.208379 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 18 18:03:46.401509 master-0 kubenswrapper[30278]: I0318 18:03:46.401425 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 18 18:03:46.430626 master-0 kubenswrapper[30278]: I0318 18:03:46.430579 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-gxxlp"
Mar 18 18:03:46.541725 master-0 kubenswrapper[30278]: I0318 18:03:46.541495 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 18:03:46.814468 master-0 kubenswrapper[30278]: I0318 18:03:46.814369 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 18 18:03:47.067132 master-0 kubenswrapper[30278]: I0318 18:03:47.066996 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85632c1cec8974aa874834e4cfff4c77" path="/var/lib/kubelet/pods/85632c1cec8974aa874834e4cfff4c77/volumes"
Mar 18 18:03:47.302341 master-0 kubenswrapper[30278]: I0318 18:03:47.302244 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 18 18:03:47.432992 master-0 kubenswrapper[30278]: I0318 18:03:47.432866 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 18 18:04:19.651634 master-0 kubenswrapper[30278]: I0318 18:04:19.651564 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mz4bs"]
Mar 18 18:04:19.652512 master-0 kubenswrapper[30278]: E0318 18:04:19.651963 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53883b3b-18ee-403e-b7c5-31699e457fd6" containerName="installer"
Mar 18 18:04:19.652512 master-0 kubenswrapper[30278]: I0318 18:04:19.651980 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="53883b3b-18ee-403e-b7c5-31699e457fd6" containerName="installer"
Mar 18 18:04:19.652512 master-0 kubenswrapper[30278]: E0318 18:04:19.652022 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor"
Mar 18 18:04:19.652512 master-0 kubenswrapper[30278]: I0318 18:04:19.652031 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor"
Mar 18 18:04:19.652512 master-0 kubenswrapper[30278]: I0318 18:04:19.652176 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="53883b3b-18ee-403e-b7c5-31699e457fd6" containerName="installer"
Mar 18 18:04:19.652512 master-0 kubenswrapper[30278]: I0318 18:04:19.652232 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor"
Mar 18 18:04:19.652923 master-0 kubenswrapper[30278]: I0318 18:04:19.652875 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:19.656267 master-0 kubenswrapper[30278]: I0318 18:04:19.656198 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-pjrlq"
Mar 18 18:04:19.656449 master-0 kubenswrapper[30278]: I0318 18:04:19.656204 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Mar 18 18:04:19.725788 master-0 kubenswrapper[30278]: I0318 18:04:19.725720 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/065e83cc-27ea-42b6-9b68-098d4fe354ca-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mz4bs\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:19.726001 master-0 kubenswrapper[30278]: I0318 18:04:19.725870 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/065e83cc-27ea-42b6-9b68-098d4fe354ca-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mz4bs\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:19.726001 master-0 kubenswrapper[30278]: I0318 18:04:19.725939 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/065e83cc-27ea-42b6-9b68-098d4fe354ca-ready\") pod \"cni-sysctl-allowlist-ds-mz4bs\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:19.726073 master-0 kubenswrapper[30278]: I0318 18:04:19.726058 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hgtp\" (UniqueName: \"kubernetes.io/projected/065e83cc-27ea-42b6-9b68-098d4fe354ca-kube-api-access-7hgtp\") pod \"cni-sysctl-allowlist-ds-mz4bs\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:19.827246 master-0 kubenswrapper[30278]: I0318 18:04:19.827176 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/065e83cc-27ea-42b6-9b68-098d4fe354ca-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mz4bs\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:19.827557 master-0 kubenswrapper[30278]: I0318 18:04:19.827405 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/065e83cc-27ea-42b6-9b68-098d4fe354ca-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-mz4bs\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:19.827557 master-0 kubenswrapper[30278]: I0318 18:04:19.827494 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/065e83cc-27ea-42b6-9b68-098d4fe354ca-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mz4bs\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:19.827557 master-0 kubenswrapper[30278]: I0318 18:04:19.827549 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/065e83cc-27ea-42b6-9b68-098d4fe354ca-ready\") pod \"cni-sysctl-allowlist-ds-mz4bs\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:19.827775 master-0 kubenswrapper[30278]: I0318 18:04:19.827673 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hgtp\" (UniqueName: \"kubernetes.io/projected/065e83cc-27ea-42b6-9b68-098d4fe354ca-kube-api-access-7hgtp\") pod \"cni-sysctl-allowlist-ds-mz4bs\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:19.828473 master-0 kubenswrapper[30278]: I0318 18:04:19.828446 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/065e83cc-27ea-42b6-9b68-098d4fe354ca-ready\") pod \"cni-sysctl-allowlist-ds-mz4bs\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:19.828795 master-0 kubenswrapper[30278]: I0318 18:04:19.828765 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/065e83cc-27ea-42b6-9b68-098d4fe354ca-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-mz4bs\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:19.846204 master-0 kubenswrapper[30278]: I0318 18:04:19.846154 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hgtp\" (UniqueName: \"kubernetes.io/projected/065e83cc-27ea-42b6-9b68-098d4fe354ca-kube-api-access-7hgtp\") pod \"cni-sysctl-allowlist-ds-mz4bs\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") " pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:19.972735 master-0 kubenswrapper[30278]: I0318 18:04:19.972608 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:20.205103 master-0 kubenswrapper[30278]: I0318 18:04:20.205004 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs" event={"ID":"065e83cc-27ea-42b6-9b68-098d4fe354ca","Type":"ContainerStarted","Data":"e2d9647b085caf9df4cc2707d3341f3d71645e3ba31756b0b27eb972f71e3050"}
Mar 18 18:04:21.217873 master-0 kubenswrapper[30278]: I0318 18:04:21.217777 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs" event={"ID":"065e83cc-27ea-42b6-9b68-098d4fe354ca","Type":"ContainerStarted","Data":"61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141"}
Mar 18 18:04:21.218647 master-0 kubenswrapper[30278]: I0318 18:04:21.218308 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:21.251053 master-0 kubenswrapper[30278]: I0318 18:04:21.250984 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:21.258731 master-0 kubenswrapper[30278]: I0318 18:04:21.258619 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs" podStartSLOduration=2.258593705 podStartE2EDuration="2.258593705s" podCreationTimestamp="2026-03-18 18:04:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:04:21.255530293 +0000 UTC m=+230.422714888" watchObservedRunningTime="2026-03-18 18:04:21.258593705 +0000 UTC m=+230.425778310"
Mar 18 18:04:21.633084 master-0 kubenswrapper[30278]: I0318 18:04:21.633021 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mz4bs"]
Mar 18 18:04:23.232864 master-0 kubenswrapper[30278]: I0318 18:04:23.232708 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs" podUID="065e83cc-27ea-42b6-9b68-098d4fe354ca" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141" gracePeriod=30
Mar 18 18:04:29.975990 master-0 kubenswrapper[30278]: E0318 18:04:29.975880 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 18 18:04:29.977777 master-0 kubenswrapper[30278]: E0318 18:04:29.977694 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 18 18:04:29.980004 master-0 kubenswrapper[30278]: E0318 18:04:29.979933 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 18 18:04:29.980104 master-0 kubenswrapper[30278]: E0318 18:04:29.980006 30278 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs" podUID="065e83cc-27ea-42b6-9b68-098d4fe354ca" containerName="kube-multus-additional-cni-plugins"
Mar 18 18:04:39.975444 master-0 kubenswrapper[30278]: E0318 18:04:39.975335 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 18 18:04:39.976965 master-0 kubenswrapper[30278]: E0318 18:04:39.976922 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 18 18:04:39.978237 master-0 kubenswrapper[30278]: E0318 18:04:39.978179 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 18 18:04:39.978374 master-0 kubenswrapper[30278]: E0318 18:04:39.978235 30278 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs" podUID="065e83cc-27ea-42b6-9b68-098d4fe354ca" containerName="kube-multus-additional-cni-plugins"
Mar 18 18:04:42.022336 master-0 kubenswrapper[30278]: I0318 18:04:42.021047 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"]
Mar 18 18:04:42.022336 master-0 kubenswrapper[30278]: I0318 18:04:42.021970 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 18 18:04:42.024764 master-0 kubenswrapper[30278]: I0318 18:04:42.024723 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-cskqs"
Mar 18 18:04:42.025030 master-0 kubenswrapper[30278]: I0318 18:04:42.024725 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 18 18:04:42.042464 master-0 kubenswrapper[30278]: I0318 18:04:42.042413 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"]
Mar 18 18:04:42.144387 master-0 kubenswrapper[30278]: I0318 18:04:42.143654 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"bcb7afbe-78dc-4d07-aa56-123aeceabcd6\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 18 18:04:42.144387 master-0 kubenswrapper[30278]: I0318 18:04:42.143821 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"bcb7afbe-78dc-4d07-aa56-123aeceabcd6\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 18 18:04:42.144387 master-0 kubenswrapper[30278]: I0318 18:04:42.143976 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"bcb7afbe-78dc-4d07-aa56-123aeceabcd6\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 18 18:04:42.246308 master-0 kubenswrapper[30278]: I0318 18:04:42.246206 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"bcb7afbe-78dc-4d07-aa56-123aeceabcd6\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 18 18:04:42.246821 master-0 kubenswrapper[30278]: I0318 18:04:42.246747 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"bcb7afbe-78dc-4d07-aa56-123aeceabcd6\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 18 18:04:42.246915 master-0 kubenswrapper[30278]: I0318 18:04:42.246879 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"bcb7afbe-78dc-4d07-aa56-123aeceabcd6\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 18 18:04:42.246986 master-0 kubenswrapper[30278]: I0318 18:04:42.246910 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"bcb7afbe-78dc-4d07-aa56-123aeceabcd6\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 18 18:04:42.251411 master-0 kubenswrapper[30278]: I0318 18:04:42.251356 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"bcb7afbe-78dc-4d07-aa56-123aeceabcd6\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 18 18:04:42.273826 master-0 kubenswrapper[30278]: I0318 18:04:42.273686 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"bcb7afbe-78dc-4d07-aa56-123aeceabcd6\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 18 18:04:42.413435 master-0 kubenswrapper[30278]: I0318 18:04:42.412827 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 18 18:04:42.883882 master-0 kubenswrapper[30278]: W0318 18:04:42.883683 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbcb7afbe_78dc_4d07_aa56_123aeceabcd6.slice/crio-7ffee97df1c282edf05c027302408e950cb4b6cb2488fedc14cdf015b9f40549 WatchSource:0}: Error finding container 7ffee97df1c282edf05c027302408e950cb4b6cb2488fedc14cdf015b9f40549: Status 404 returned error can't find the container with id 7ffee97df1c282edf05c027302408e950cb4b6cb2488fedc14cdf015b9f40549
Mar 18 18:04:42.884671 master-0 kubenswrapper[30278]: I0318 18:04:42.884571 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"]
Mar 18 18:04:43.410178 master-0 kubenswrapper[30278]: I0318 18:04:43.410004 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"bcb7afbe-78dc-4d07-aa56-123aeceabcd6","Type":"ContainerStarted","Data":"d5960f392b00010ed91f8e6d7501b2845219dda833cadabbb7a7e6771bd6f9af"}
Mar 18 18:04:43.410178 master-0 kubenswrapper[30278]: I0318 18:04:43.410077 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"bcb7afbe-78dc-4d07-aa56-123aeceabcd6","Type":"ContainerStarted","Data":"7ffee97df1c282edf05c027302408e950cb4b6cb2488fedc14cdf015b9f40549"}
Mar 18 18:04:43.442931 master-0 kubenswrapper[30278]: I0318 18:04:43.442818 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" podStartSLOduration=1.442789924 podStartE2EDuration="1.442789924s" podCreationTimestamp="2026-03-18 18:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:04:43.435063527 +0000 UTC m=+252.602248212" watchObservedRunningTime="2026-03-18 18:04:43.442789924 +0000 UTC m=+252.609974549"
Mar 18 18:04:49.975204 master-0 kubenswrapper[30278]: E0318 18:04:49.975131 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 18 18:04:49.976497 master-0 kubenswrapper[30278]: E0318 18:04:49.976436 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 18 18:04:49.977649 master-0 kubenswrapper[30278]: E0318 18:04:49.977584 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 18 18:04:49.977702 master-0 kubenswrapper[30278]: E0318 18:04:49.977673 30278 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs" podUID="065e83cc-27ea-42b6-9b68-098d4fe354ca" containerName="kube-multus-additional-cni-plugins"
Mar 18 18:04:53.384261 master-0 kubenswrapper[30278]: I0318 18:04:53.384218 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mz4bs_065e83cc-27ea-42b6-9b68-098d4fe354ca/kube-multus-additional-cni-plugins/0.log"
Mar 18 18:04:53.384907 master-0 kubenswrapper[30278]: I0318 18:04:53.384392 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:53.439052 master-0 kubenswrapper[30278]: I0318 18:04:53.438998 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/065e83cc-27ea-42b6-9b68-098d4fe354ca-ready\") pod \"065e83cc-27ea-42b6-9b68-098d4fe354ca\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") "
Mar 18 18:04:53.439052 master-0 kubenswrapper[30278]: I0318 18:04:53.439053 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/065e83cc-27ea-42b6-9b68-098d4fe354ca-tuning-conf-dir\") pod \"065e83cc-27ea-42b6-9b68-098d4fe354ca\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") "
Mar 18 18:04:53.439371 master-0 kubenswrapper[30278]: I0318 18:04:53.439074 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/065e83cc-27ea-42b6-9b68-098d4fe354ca-cni-sysctl-allowlist\") pod \"065e83cc-27ea-42b6-9b68-098d4fe354ca\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") "
Mar 18 18:04:53.439371 master-0 kubenswrapper[30278]: I0318 18:04:53.439112 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hgtp\" (UniqueName: \"kubernetes.io/projected/065e83cc-27ea-42b6-9b68-098d4fe354ca-kube-api-access-7hgtp\") pod \"065e83cc-27ea-42b6-9b68-098d4fe354ca\" (UID: \"065e83cc-27ea-42b6-9b68-098d4fe354ca\") "
Mar 18 18:04:53.439465 master-0 kubenswrapper[30278]: I0318 18:04:53.439387 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/065e83cc-27ea-42b6-9b68-098d4fe354ca-ready" (OuterVolumeSpecName: "ready") pod "065e83cc-27ea-42b6-9b68-098d4fe354ca" (UID: "065e83cc-27ea-42b6-9b68-098d4fe354ca"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 18:04:53.439547 master-0 kubenswrapper[30278]: I0318 18:04:53.439520 30278 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/065e83cc-27ea-42b6-9b68-098d4fe354ca-ready\") on node \"master-0\" DevicePath \"\""
Mar 18 18:04:53.439716 master-0 kubenswrapper[30278]: I0318 18:04:53.439674 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/065e83cc-27ea-42b6-9b68-098d4fe354ca-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "065e83cc-27ea-42b6-9b68-098d4fe354ca" (UID: "065e83cc-27ea-42b6-9b68-098d4fe354ca"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:04:53.439881 master-0 kubenswrapper[30278]: I0318 18:04:53.439848 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065e83cc-27ea-42b6-9b68-098d4fe354ca-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "065e83cc-27ea-42b6-9b68-098d4fe354ca" (UID: "065e83cc-27ea-42b6-9b68-098d4fe354ca"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:04:53.442436 master-0 kubenswrapper[30278]: I0318 18:04:53.442399 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/065e83cc-27ea-42b6-9b68-098d4fe354ca-kube-api-access-7hgtp" (OuterVolumeSpecName: "kube-api-access-7hgtp") pod "065e83cc-27ea-42b6-9b68-098d4fe354ca" (UID: "065e83cc-27ea-42b6-9b68-098d4fe354ca"). InnerVolumeSpecName "kube-api-access-7hgtp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:04:53.517977 master-0 kubenswrapper[30278]: I0318 18:04:53.517834 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-mz4bs_065e83cc-27ea-42b6-9b68-098d4fe354ca/kube-multus-additional-cni-plugins/0.log"
Mar 18 18:04:53.517977 master-0 kubenswrapper[30278]: I0318 18:04:53.517882 30278 generic.go:334] "Generic (PLEG): container finished" podID="065e83cc-27ea-42b6-9b68-098d4fe354ca" containerID="61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141" exitCode=137
Mar 18 18:04:53.517977 master-0 kubenswrapper[30278]: I0318 18:04:53.517909 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs" event={"ID":"065e83cc-27ea-42b6-9b68-098d4fe354ca","Type":"ContainerDied","Data":"61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141"}
Mar 18 18:04:53.517977 master-0 kubenswrapper[30278]: I0318 18:04:53.517934 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs" event={"ID":"065e83cc-27ea-42b6-9b68-098d4fe354ca","Type":"ContainerDied","Data":"e2d9647b085caf9df4cc2707d3341f3d71645e3ba31756b0b27eb972f71e3050"}
Mar 18 18:04:53.517977 master-0 kubenswrapper[30278]: I0318 18:04:53.517949 30278 scope.go:117] "RemoveContainer" containerID="61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141"
Mar 18 18:04:53.518610 master-0 kubenswrapper[30278]: I0318 18:04:53.517948 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-mz4bs"
Mar 18 18:04:53.541098 master-0 kubenswrapper[30278]: I0318 18:04:53.541041 30278 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/065e83cc-27ea-42b6-9b68-098d4fe354ca-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\""
Mar 18 18:04:53.541098 master-0 kubenswrapper[30278]: I0318 18:04:53.541092 30278 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/065e83cc-27ea-42b6-9b68-098d4fe354ca-tuning-conf-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 18:04:53.541098 master-0 kubenswrapper[30278]: I0318 18:04:53.541102 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hgtp\" (UniqueName: \"kubernetes.io/projected/065e83cc-27ea-42b6-9b68-098d4fe354ca-kube-api-access-7hgtp\") on node \"master-0\" DevicePath \"\""
Mar 18 18:04:53.543107 master-0 kubenswrapper[30278]: I0318 18:04:53.543079 30278 scope.go:117] "RemoveContainer" containerID="61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141"
Mar 18 18:04:53.544035 master-0 kubenswrapper[30278]: E0318 18:04:53.544000 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141\": container with ID starting with 61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141 not found: ID does not exist" containerID="61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141"
Mar 18 18:04:53.544116 master-0 kubenswrapper[30278]: I0318 18:04:53.544029 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141"} err="failed to get container status \"61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141\": rpc error: code = NotFound desc = could not find container \"61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141\": container with ID starting with 61b24d57f9b17094b1923f1fd5d884f73e9a2016d772540e8371461502fd8141 not found: ID does not exist"
Mar 18 18:04:53.567030 master-0 kubenswrapper[30278]: I0318 18:04:53.566948 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mz4bs"]
Mar 18 18:04:53.574600 master-0 kubenswrapper[30278]: I0318 18:04:53.574537 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-mz4bs"]
Mar 18 18:04:55.066135 master-0 kubenswrapper[30278]: I0318 18:04:55.066077 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="065e83cc-27ea-42b6-9b68-098d4fe354ca" path="/var/lib/kubelet/pods/065e83cc-27ea-42b6-9b68-098d4fe354ca/volumes"
Mar 18 18:04:55.880702 master-0 kubenswrapper[30278]: I0318 18:04:55.880639 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-5nwft"]
Mar 18 18:04:55.881177 master-0 kubenswrapper[30278]: E0318 18:04:55.880968 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065e83cc-27ea-42b6-9b68-098d4fe354ca" containerName="kube-multus-additional-cni-plugins"
Mar 18 18:04:55.881177 master-0 kubenswrapper[30278]: I0318 18:04:55.881175 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="065e83cc-27ea-42b6-9b68-098d4fe354ca" containerName="kube-multus-additional-cni-plugins"
Mar 18 18:04:55.881432 master-0 kubenswrapper[30278]: I0318 18:04:55.881408 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="065e83cc-27ea-42b6-9b68-098d4fe354ca" containerName="kube-multus-additional-cni-plugins"
Mar 18 18:04:55.881974 master-0 kubenswrapper[30278]: I0318 18:04:55.881954 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-5nwft"
Mar 18 18:04:55.886945 master-0 kubenswrapper[30278]: I0318 18:04:55.886534 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 18 18:04:55.886945 master-0 kubenswrapper[30278]: I0318 18:04:55.886583 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 18 18:04:55.890976 master-0 kubenswrapper[30278]: I0318 18:04:55.890937 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 18 18:04:55.892644 master-0 kubenswrapper[30278]: I0318 18:04:55.892580 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 18 18:04:55.894578 master-0 kubenswrapper[30278]: I0318 18:04:55.893654 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-5nwft"]
Mar 18 18:04:55.900328 master-0 kubenswrapper[30278]: I0318 18:04:55.900213 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 18 18:04:55.983669 master-0 kubenswrapper[30278]: I0318 18:04:55.983358 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5247k\" (UniqueName: \"kubernetes.io/projected/d5d15a23-f43f-4265-a7e5-8c28f680ede9-kube-api-access-5247k\") pod \"console-operator-76b6568d85-5nwft\" (UID: \"d5d15a23-f43f-4265-a7e5-8c28f680ede9\") " pod="openshift-console-operator/console-operator-76b6568d85-5nwft"
Mar 18 18:04:55.983669 master-0 kubenswrapper[30278]: I0318 18:04:55.983402 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d15a23-f43f-4265-a7e5-8c28f680ede9-config\") pod \"console-operator-76b6568d85-5nwft\" (UID: \"d5d15a23-f43f-4265-a7e5-8c28f680ede9\") " pod="openshift-console-operator/console-operator-76b6568d85-5nwft"
Mar 18 18:04:55.983669 master-0 kubenswrapper[30278]: I0318 18:04:55.983428 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5d15a23-f43f-4265-a7e5-8c28f680ede9-serving-cert\") pod \"console-operator-76b6568d85-5nwft\" (UID: \"d5d15a23-f43f-4265-a7e5-8c28f680ede9\") " pod="openshift-console-operator/console-operator-76b6568d85-5nwft"
Mar 18 18:04:55.983669 master-0 kubenswrapper[30278]: I0318 18:04:55.983528 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5d15a23-f43f-4265-a7e5-8c28f680ede9-trusted-ca\") pod \"console-operator-76b6568d85-5nwft\" (UID: \"d5d15a23-f43f-4265-a7e5-8c28f680ede9\") " pod="openshift-console-operator/console-operator-76b6568d85-5nwft"
Mar 18 18:04:55.991151 master-0 kubenswrapper[30278]: E0318 18:04:55.991058 30278 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[alertmanager-trusted-ca-bundle], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/alertmanager-main-0" podUID="89b1dfdf-4633-45af-8abd-931a76eca960"
Mar 18 18:04:56.085091 master-0 kubenswrapper[30278]: I0318 18:04:56.085015 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5d15a23-f43f-4265-a7e5-8c28f680ede9-trusted-ca\") pod \"console-operator-76b6568d85-5nwft\" (UID: \"d5d15a23-f43f-4265-a7e5-8c28f680ede9\") " pod="openshift-console-operator/console-operator-76b6568d85-5nwft"
Mar 18 18:04:56.085641 master-0 kubenswrapper[30278]: I0318 18:04:56.085601 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5247k\" (UniqueName: \"kubernetes.io/projected/d5d15a23-f43f-4265-a7e5-8c28f680ede9-kube-api-access-5247k\") pod \"console-operator-76b6568d85-5nwft\" (UID: \"d5d15a23-f43f-4265-a7e5-8c28f680ede9\") " pod="openshift-console-operator/console-operator-76b6568d85-5nwft"
Mar 18 18:04:56.085686 master-0 kubenswrapper[30278]: I0318 18:04:56.085641 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d15a23-f43f-4265-a7e5-8c28f680ede9-config\") pod \"console-operator-76b6568d85-5nwft\" (UID: \"d5d15a23-f43f-4265-a7e5-8c28f680ede9\") " pod="openshift-console-operator/console-operator-76b6568d85-5nwft"
Mar 18 18:04:56.085686 master-0 kubenswrapper[30278]: I0318 18:04:56.085674 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5d15a23-f43f-4265-a7e5-8c28f680ede9-serving-cert\") pod \"console-operator-76b6568d85-5nwft\" (UID: \"d5d15a23-f43f-4265-a7e5-8c28f680ede9\") " pod="openshift-console-operator/console-operator-76b6568d85-5nwft"
Mar 18 18:04:56.086206 master-0 kubenswrapper[30278]: I0318 18:04:56.086171 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5d15a23-f43f-4265-a7e5-8c28f680ede9-trusted-ca\") pod \"console-operator-76b6568d85-5nwft\" (UID: \"d5d15a23-f43f-4265-a7e5-8c28f680ede9\") " pod="openshift-console-operator/console-operator-76b6568d85-5nwft"
Mar 18 18:04:56.087074 master-0 kubenswrapper[30278]: I0318 18:04:56.087023 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5d15a23-f43f-4265-a7e5-8c28f680ede9-config\") pod \"console-operator-76b6568d85-5nwft\" (UID: \"d5d15a23-f43f-4265-a7e5-8c28f680ede9\") " pod="openshift-console-operator/console-operator-76b6568d85-5nwft"
Mar 18 18:04:56.091236 master-0 kubenswrapper[30278]: I0318 18:04:56.091185 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5d15a23-f43f-4265-a7e5-8c28f680ede9-serving-cert\") pod \"console-operator-76b6568d85-5nwft\" (UID: \"d5d15a23-f43f-4265-a7e5-8c28f680ede9\") " pod="openshift-console-operator/console-operator-76b6568d85-5nwft"
Mar 18 18:04:56.101422 master-0 kubenswrapper[30278]: I0318 18:04:56.101392 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5247k\" (UniqueName: \"kubernetes.io/projected/d5d15a23-f43f-4265-a7e5-8c28f680ede9-kube-api-access-5247k\") pod \"console-operator-76b6568d85-5nwft\" (UID: \"d5d15a23-f43f-4265-a7e5-8c28f680ede9\") " pod="openshift-console-operator/console-operator-76b6568d85-5nwft"
Mar 18 18:04:56.206563 master-0 kubenswrapper[30278]: I0318 18:04:56.206415 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-5nwft"
Mar 18 18:04:56.550064 master-0 kubenswrapper[30278]: I0318 18:04:56.549856 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:04:56.604795 master-0 kubenswrapper[30278]: I0318 18:04:56.604744 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-5nwft"] Mar 18 18:04:57.557240 master-0 kubenswrapper[30278]: I0318 18:04:57.557172 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-5nwft" event={"ID":"d5d15a23-f43f-4265-a7e5-8c28f680ede9","Type":"ContainerStarted","Data":"6c82f847dc6f04b4e0cbb709c5c393b01e98374bc4f0d4a9549bca87f0a39f02"} Mar 18 18:04:57.666266 master-0 kubenswrapper[30278]: I0318 18:04:57.664322 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 18 18:04:57.666266 master-0 kubenswrapper[30278]: I0318 18:04:57.666159 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 18:04:57.671362 master-0 kubenswrapper[30278]: I0318 18:04:57.668222 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-kzvvj" Mar 18 18:04:57.677181 master-0 kubenswrapper[30278]: I0318 18:04:57.677119 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 18:04:57.679304 master-0 kubenswrapper[30278]: I0318 18:04:57.679238 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 18 18:04:57.710683 master-0 kubenswrapper[30278]: I0318 18:04:57.710523 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/257339d9-4efe-4659-ae45-5c1fee5ebba7-kube-api-access\") pod \"installer-6-master-0\" (UID: \"257339d9-4efe-4659-ae45-5c1fee5ebba7\") " 
pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 18:04:57.710683 master-0 kubenswrapper[30278]: I0318 18:04:57.710620 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/257339d9-4efe-4659-ae45-5c1fee5ebba7-var-lock\") pod \"installer-6-master-0\" (UID: \"257339d9-4efe-4659-ae45-5c1fee5ebba7\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 18:04:57.711014 master-0 kubenswrapper[30278]: I0318 18:04:57.710737 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/257339d9-4efe-4659-ae45-5c1fee5ebba7-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"257339d9-4efe-4659-ae45-5c1fee5ebba7\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 18:04:57.812977 master-0 kubenswrapper[30278]: I0318 18:04:57.812666 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/257339d9-4efe-4659-ae45-5c1fee5ebba7-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"257339d9-4efe-4659-ae45-5c1fee5ebba7\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 18:04:57.812977 master-0 kubenswrapper[30278]: I0318 18:04:57.812776 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/257339d9-4efe-4659-ae45-5c1fee5ebba7-kube-api-access\") pod \"installer-6-master-0\" (UID: \"257339d9-4efe-4659-ae45-5c1fee5ebba7\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 18:04:57.812977 master-0 kubenswrapper[30278]: I0318 18:04:57.812797 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/257339d9-4efe-4659-ae45-5c1fee5ebba7-kubelet-dir\") pod \"installer-6-master-0\" (UID: 
\"257339d9-4efe-4659-ae45-5c1fee5ebba7\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 18:04:57.813526 master-0 kubenswrapper[30278]: I0318 18:04:57.813428 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/257339d9-4efe-4659-ae45-5c1fee5ebba7-var-lock\") pod \"installer-6-master-0\" (UID: \"257339d9-4efe-4659-ae45-5c1fee5ebba7\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 18:04:57.815970 master-0 kubenswrapper[30278]: I0318 18:04:57.813545 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/257339d9-4efe-4659-ae45-5c1fee5ebba7-var-lock\") pod \"installer-6-master-0\" (UID: \"257339d9-4efe-4659-ae45-5c1fee5ebba7\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 18:04:57.840545 master-0 kubenswrapper[30278]: I0318 18:04:57.840481 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/257339d9-4efe-4659-ae45-5c1fee5ebba7-kube-api-access\") pod \"installer-6-master-0\" (UID: \"257339d9-4efe-4659-ae45-5c1fee5ebba7\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 18:04:58.017229 master-0 kubenswrapper[30278]: I0318 18:04:58.017183 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 18:04:58.408978 master-0 kubenswrapper[30278]: I0318 18:04:58.408935 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 18 18:04:59.030237 master-0 kubenswrapper[30278]: W0318 18:04:59.030127 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod257339d9_4efe_4659_ae45_5c1fee5ebba7.slice/crio-293213794c423395ab23e04b7ae8f93572e160c7c2f0f22ae50f7fafacdb250b WatchSource:0}: Error finding container 293213794c423395ab23e04b7ae8f93572e160c7c2f0f22ae50f7fafacdb250b: Status 404 returned error can't find the container with id 293213794c423395ab23e04b7ae8f93572e160c7c2f0f22ae50f7fafacdb250b Mar 18 18:04:59.350533 master-0 kubenswrapper[30278]: I0318 18:04:59.350469 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:04:59.351921 master-0 kubenswrapper[30278]: I0318 18:04:59.351890 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:04:59.554232 master-0 kubenswrapper[30278]: I0318 18:04:59.554139 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-2pg6x" Mar 18 18:04:59.562637 master-0 kubenswrapper[30278]: I0318 18:04:59.562573 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:04:59.585389 master-0 kubenswrapper[30278]: I0318 18:04:59.573624 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"257339d9-4efe-4659-ae45-5c1fee5ebba7","Type":"ContainerStarted","Data":"901657905b63db2a204c2de049eb9b41c990c2f199f3af7731ee16f58b659483"} Mar 18 18:04:59.585389 master-0 kubenswrapper[30278]: I0318 18:04:59.573706 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"257339d9-4efe-4659-ae45-5c1fee5ebba7","Type":"ContainerStarted","Data":"293213794c423395ab23e04b7ae8f93572e160c7c2f0f22ae50f7fafacdb250b"} Mar 18 18:04:59.585389 master-0 kubenswrapper[30278]: I0318 18:04:59.575119 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-5nwft" event={"ID":"d5d15a23-f43f-4265-a7e5-8c28f680ede9","Type":"ContainerStarted","Data":"21c5f2cdd0892ecf9d752cdf6ce1a95d4eea2d5ed7a9d691762bb97c8e32563f"} Mar 18 18:04:59.585389 master-0 kubenswrapper[30278]: I0318 18:04:59.575808 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-76b6568d85-5nwft" Mar 18 18:04:59.609975 master-0 kubenswrapper[30278]: I0318 18:04:59.607958 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" podStartSLOduration=2.607934964 podStartE2EDuration="2.607934964s" podCreationTimestamp="2026-03-18 18:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:04:59.602097488 +0000 UTC m=+268.769282083" watchObservedRunningTime="2026-03-18 18:04:59.607934964 +0000 UTC m=+268.775119569" Mar 18 18:04:59.624633 master-0 kubenswrapper[30278]: I0318 18:04:59.624518 30278 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-76b6568d85-5nwft" podStartSLOduration=2.134112849 podStartE2EDuration="4.624494528s" podCreationTimestamp="2026-03-18 18:04:55 +0000 UTC" firstStartedPulling="2026-03-18 18:04:56.609126909 +0000 UTC m=+265.776311504" lastFinishedPulling="2026-03-18 18:04:59.099508568 +0000 UTC m=+268.266693183" observedRunningTime="2026-03-18 18:04:59.62274108 +0000 UTC m=+268.789925675" watchObservedRunningTime="2026-03-18 18:04:59.624494528 +0000 UTC m=+268.791679143" Mar 18 18:04:59.847976 master-0 kubenswrapper[30278]: I0318 18:04:59.847845 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-76b6568d85-5nwft" Mar 18 18:05:00.040318 master-0 kubenswrapper[30278]: E0318 18:05:00.037899 30278 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-trusted-ca-bundle], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-k8s-0" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" Mar 18 18:05:00.093836 master-0 kubenswrapper[30278]: I0318 18:05:00.093775 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-66b8ffb895-5ftpz"] Mar 18 18:05:00.094760 master-0 kubenswrapper[30278]: I0318 18:05:00.094731 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-66b8ffb895-5ftpz" Mar 18 18:05:00.102000 master-0 kubenswrapper[30278]: I0318 18:05:00.101885 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 18 18:05:00.116706 master-0 kubenswrapper[30278]: I0318 18:05:00.116389 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 18 18:05:00.136229 master-0 kubenswrapper[30278]: I0318 18:05:00.134554 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-5ftpz"] Mar 18 18:05:00.150504 master-0 kubenswrapper[30278]: I0318 18:05:00.148132 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 18:05:00.164590 master-0 kubenswrapper[30278]: I0318 18:05:00.164516 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfdlj\" (UniqueName: \"kubernetes.io/projected/1c86ad24-b858-4dfa-802b-f4799093ffc0-kube-api-access-sfdlj\") pod \"downloads-66b8ffb895-5ftpz\" (UID: \"1c86ad24-b858-4dfa-802b-f4799093ffc0\") " pod="openshift-console/downloads-66b8ffb895-5ftpz" Mar 18 18:05:00.266904 master-0 kubenswrapper[30278]: I0318 18:05:00.266831 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfdlj\" (UniqueName: \"kubernetes.io/projected/1c86ad24-b858-4dfa-802b-f4799093ffc0-kube-api-access-sfdlj\") pod \"downloads-66b8ffb895-5ftpz\" (UID: \"1c86ad24-b858-4dfa-802b-f4799093ffc0\") " pod="openshift-console/downloads-66b8ffb895-5ftpz" Mar 18 18:05:00.298382 master-0 kubenswrapper[30278]: I0318 18:05:00.298266 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfdlj\" (UniqueName: \"kubernetes.io/projected/1c86ad24-b858-4dfa-802b-f4799093ffc0-kube-api-access-sfdlj\") pod \"downloads-66b8ffb895-5ftpz\" (UID: 
\"1c86ad24-b858-4dfa-802b-f4799093ffc0\") " pod="openshift-console/downloads-66b8ffb895-5ftpz" Mar 18 18:05:00.458479 master-0 kubenswrapper[30278]: I0318 18:05:00.458328 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-66b8ffb895-5ftpz" Mar 18 18:05:00.584043 master-0 kubenswrapper[30278]: I0318 18:05:00.583944 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:05:00.585654 master-0 kubenswrapper[30278]: I0318 18:05:00.585613 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89b1dfdf-4633-45af-8abd-931a76eca960","Type":"ContainerStarted","Data":"6aed5aa23422f65dac7bda57b71903b3185d73e0dd8da2720937a75260d98b26"} Mar 18 18:05:00.912683 master-0 kubenswrapper[30278]: W0318 18:05:00.912611 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c86ad24_b858_4dfa_802b_f4799093ffc0.slice/crio-bb2f68f8c55fe86755a776c7cd686d7acf30cc07efd58b6aba472c3d8eda0c1d WatchSource:0}: Error finding container bb2f68f8c55fe86755a776c7cd686d7acf30cc07efd58b6aba472c3d8eda0c1d: Status 404 returned error can't find the container with id bb2f68f8c55fe86755a776c7cd686d7acf30cc07efd58b6aba472c3d8eda0c1d Mar 18 18:05:00.913684 master-0 kubenswrapper[30278]: I0318 18:05:00.913620 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-5ftpz"] Mar 18 18:05:01.406675 master-0 kubenswrapper[30278]: I0318 18:05:01.406588 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6855c56fbd-8t49z"] Mar 18 18:05:01.408754 master-0 kubenswrapper[30278]: I0318 18:05:01.408727 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6855c56fbd-8t49z" Mar 18 18:05:01.414395 master-0 kubenswrapper[30278]: I0318 18:05:01.411470 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-4sbm2" Mar 18 18:05:01.414395 master-0 kubenswrapper[30278]: I0318 18:05:01.411705 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 18 18:05:01.422180 master-0 kubenswrapper[30278]: I0318 18:05:01.422109 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6855c56fbd-8t49z"] Mar 18 18:05:01.509354 master-0 kubenswrapper[30278]: I0318 18:05:01.509283 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4d0ccfde-5384-4e7a-bd9c-61ef79c4e44a-monitoring-plugin-cert\") pod \"monitoring-plugin-6855c56fbd-8t49z\" (UID: \"4d0ccfde-5384-4e7a-bd9c-61ef79c4e44a\") " pod="openshift-monitoring/monitoring-plugin-6855c56fbd-8t49z" Mar 18 18:05:01.597841 master-0 kubenswrapper[30278]: I0318 18:05:01.597764 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-5ftpz" event={"ID":"1c86ad24-b858-4dfa-802b-f4799093ffc0","Type":"ContainerStarted","Data":"bb2f68f8c55fe86755a776c7cd686d7acf30cc07efd58b6aba472c3d8eda0c1d"} Mar 18 18:05:01.612037 master-0 kubenswrapper[30278]: I0318 18:05:01.611856 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4d0ccfde-5384-4e7a-bd9c-61ef79c4e44a-monitoring-plugin-cert\") pod \"monitoring-plugin-6855c56fbd-8t49z\" (UID: \"4d0ccfde-5384-4e7a-bd9c-61ef79c4e44a\") " pod="openshift-monitoring/monitoring-plugin-6855c56fbd-8t49z" Mar 18 18:05:01.615835 master-0 kubenswrapper[30278]: I0318 18:05:01.615783 30278 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4d0ccfde-5384-4e7a-bd9c-61ef79c4e44a-monitoring-plugin-cert\") pod \"monitoring-plugin-6855c56fbd-8t49z\" (UID: \"4d0ccfde-5384-4e7a-bd9c-61ef79c4e44a\") " pod="openshift-monitoring/monitoring-plugin-6855c56fbd-8t49z" Mar 18 18:05:01.771549 master-0 kubenswrapper[30278]: I0318 18:05:01.771434 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6855c56fbd-8t49z" Mar 18 18:05:02.254426 master-0 kubenswrapper[30278]: I0318 18:05:02.254322 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6855c56fbd-8t49z"] Mar 18 18:05:02.256351 master-0 kubenswrapper[30278]: W0318 18:05:02.256265 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d0ccfde_5384_4e7a_bd9c_61ef79c4e44a.slice/crio-a0491d37ecc5176428970c0aa1fc3dd48c51a0bd0c7448cab962945c01b49900 WatchSource:0}: Error finding container a0491d37ecc5176428970c0aa1fc3dd48c51a0bd0c7448cab962945c01b49900: Status 404 returned error can't find the container with id a0491d37ecc5176428970c0aa1fc3dd48c51a0bd0c7448cab962945c01b49900 Mar 18 18:05:02.605693 master-0 kubenswrapper[30278]: I0318 18:05:02.605618 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6855c56fbd-8t49z" event={"ID":"4d0ccfde-5384-4e7a-bd9c-61ef79c4e44a","Type":"ContainerStarted","Data":"a0491d37ecc5176428970c0aa1fc3dd48c51a0bd0c7448cab962945c01b49900"} Mar 18 18:05:02.608200 master-0 kubenswrapper[30278]: I0318 18:05:02.608161 30278 generic.go:334] "Generic (PLEG): container finished" podID="89b1dfdf-4633-45af-8abd-931a76eca960" containerID="542b8a460709182e802cafd712d98a072c621022a5720b144290b9d16fc6737d" exitCode=0 Mar 18 18:05:02.608314 master-0 kubenswrapper[30278]: I0318 
18:05:02.608208 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89b1dfdf-4633-45af-8abd-931a76eca960","Type":"ContainerDied","Data":"542b8a460709182e802cafd712d98a072c621022a5720b144290b9d16fc6737d"} Mar 18 18:05:03.550902 master-0 kubenswrapper[30278]: I0318 18:05:03.550848 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:05:03.552811 master-0 kubenswrapper[30278]: I0318 18:05:03.552767 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:05:03.587807 master-0 kubenswrapper[30278]: I0318 18:05:03.587747 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-pm4sf" Mar 18 18:05:03.596444 master-0 kubenswrapper[30278]: I0318 18:05:03.596373 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:05:04.637100 master-0 kubenswrapper[30278]: I0318 18:05:04.637003 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6855c56fbd-8t49z" event={"ID":"4d0ccfde-5384-4e7a-bd9c-61ef79c4e44a","Type":"ContainerStarted","Data":"77543e6cc40460ee83976bdb5b4440d9b1150b9fdd6778142d54e5ce6fb1d11e"} Mar 18 18:05:04.638462 master-0 kubenswrapper[30278]: I0318 18:05:04.637576 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6855c56fbd-8t49z" Mar 18 18:05:04.646418 master-0 kubenswrapper[30278]: I0318 18:05:04.645969 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6855c56fbd-8t49z" Mar 18 18:05:04.666776 master-0 kubenswrapper[30278]: I0318 18:05:04.666658 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6855c56fbd-8t49z" podStartSLOduration=1.616346606 podStartE2EDuration="3.666632267s" podCreationTimestamp="2026-03-18 18:05:01 +0000 UTC" firstStartedPulling="2026-03-18 18:05:02.260788493 +0000 UTC m=+271.427973098" lastFinishedPulling="2026-03-18 18:05:04.311074164 +0000 UTC m=+273.478258759" observedRunningTime="2026-03-18 18:05:04.656217128 +0000 UTC m=+273.823401733" watchObservedRunningTime="2026-03-18 18:05:04.666632267 +0000 UTC m=+273.833816862" Mar 18 18:05:04.726412 master-0 kubenswrapper[30278]: W0318 18:05:04.726009 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c6aeb7b_9c05_470e_b31f_f4154aadf170.slice/crio-eab143820739697b21d7c3673655eccadcd4d1f56b4c551303a748b8c3bd62a6 WatchSource:0}: Error finding container eab143820739697b21d7c3673655eccadcd4d1f56b4c551303a748b8c3bd62a6: Status 404 returned error can't find the container with id 
eab143820739697b21d7c3673655eccadcd4d1f56b4c551303a748b8c3bd62a6 Mar 18 18:05:04.729691 master-0 kubenswrapper[30278]: I0318 18:05:04.729630 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 18:05:05.645059 master-0 kubenswrapper[30278]: I0318 18:05:05.644999 30278 generic.go:334] "Generic (PLEG): container finished" podID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerID="c8221e27a9c966e7f7abb1d734a50b4f7eadfeeed99bb31aef81d0cd99c3e523" exitCode=0 Mar 18 18:05:05.645632 master-0 kubenswrapper[30278]: I0318 18:05:05.645099 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c6aeb7b-9c05-470e-b31f-f4154aadf170","Type":"ContainerDied","Data":"c8221e27a9c966e7f7abb1d734a50b4f7eadfeeed99bb31aef81d0cd99c3e523"} Mar 18 18:05:05.646008 master-0 kubenswrapper[30278]: I0318 18:05:05.645905 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c6aeb7b-9c05-470e-b31f-f4154aadf170","Type":"ContainerStarted","Data":"eab143820739697b21d7c3673655eccadcd4d1f56b4c551303a748b8c3bd62a6"} Mar 18 18:05:06.660530 master-0 kubenswrapper[30278]: I0318 18:05:06.660445 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89b1dfdf-4633-45af-8abd-931a76eca960","Type":"ContainerStarted","Data":"0b551827974ca1934d8f9a62505f47cc16f56f528ceb391855cad37846d46b67"} Mar 18 18:05:06.661396 master-0 kubenswrapper[30278]: I0318 18:05:06.660539 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89b1dfdf-4633-45af-8abd-931a76eca960","Type":"ContainerStarted","Data":"3695e8b7d907a4ecd98dae9c6375016787fd4cf2e9dee7c3967cd4f43aeacc9c"} Mar 18 18:05:06.661396 master-0 kubenswrapper[30278]: I0318 18:05:06.660561 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89b1dfdf-4633-45af-8abd-931a76eca960","Type":"ContainerStarted","Data":"1151cf0b337961a5368835bec6f85275df0f5f5ad3f456f4b8617a0988d68ab0"} Mar 18 18:05:06.661396 master-0 kubenswrapper[30278]: I0318 18:05:06.660581 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89b1dfdf-4633-45af-8abd-931a76eca960","Type":"ContainerStarted","Data":"54d5ec6af3880a2ea24f8fc641b0fdabd67a3d38b658ef8b46030a7fbdcb7542"} Mar 18 18:05:07.392321 master-0 kubenswrapper[30278]: I0318 18:05:07.392241 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6b7657f69f-w666c"] Mar 18 18:05:07.393798 master-0 kubenswrapper[30278]: I0318 18:05:07.393773 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.403412 master-0 kubenswrapper[30278]: I0318 18:05:07.400109 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 18 18:05:07.403412 master-0 kubenswrapper[30278]: I0318 18:05:07.400620 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 18 18:05:07.403412 master-0 kubenswrapper[30278]: I0318 18:05:07.400880 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 18 18:05:07.404584 master-0 kubenswrapper[30278]: I0318 18:05:07.404520 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 18 18:05:07.404678 master-0 kubenswrapper[30278]: I0318 18:05:07.404591 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 18 18:05:07.406380 master-0 kubenswrapper[30278]: I0318 18:05:07.406332 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-6b7657f69f-w666c"] Mar 18 18:05:07.523915 master-0 kubenswrapper[30278]: I0318 18:05:07.523828 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc445b25-803f-4668-9a96-d539108d2527-console-serving-cert\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.524250 master-0 kubenswrapper[30278]: I0318 18:05:07.523961 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc445b25-803f-4668-9a96-d539108d2527-console-oauth-config\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.524250 master-0 kubenswrapper[30278]: I0318 18:05:07.524053 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-service-ca\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.524250 master-0 kubenswrapper[30278]: I0318 18:05:07.524106 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-oauth-serving-cert\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.524250 master-0 kubenswrapper[30278]: I0318 18:05:07.524133 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbqtz\" (UniqueName: 
\"kubernetes.io/projected/bc445b25-803f-4668-9a96-d539108d2527-kube-api-access-bbqtz\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.524495 master-0 kubenswrapper[30278]: I0318 18:05:07.524407 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-console-config\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.626177 master-0 kubenswrapper[30278]: I0318 18:05:07.626103 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc445b25-803f-4668-9a96-d539108d2527-console-oauth-config\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.626846 master-0 kubenswrapper[30278]: I0318 18:05:07.626804 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-service-ca\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.626890 master-0 kubenswrapper[30278]: I0318 18:05:07.626866 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-oauth-serving-cert\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.626924 master-0 kubenswrapper[30278]: I0318 18:05:07.626891 30278 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bbqtz\" (UniqueName: \"kubernetes.io/projected/bc445b25-803f-4668-9a96-d539108d2527-kube-api-access-bbqtz\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.626968 master-0 kubenswrapper[30278]: I0318 18:05:07.626947 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-console-config\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.627011 master-0 kubenswrapper[30278]: I0318 18:05:07.626983 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc445b25-803f-4668-9a96-d539108d2527-console-serving-cert\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.627780 master-0 kubenswrapper[30278]: I0318 18:05:07.627755 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-console-config\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.628087 master-0 kubenswrapper[30278]: I0318 18:05:07.628058 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-oauth-serving-cert\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.629182 master-0 kubenswrapper[30278]: I0318 18:05:07.628246 30278 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-service-ca\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.630606 master-0 kubenswrapper[30278]: I0318 18:05:07.630571 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc445b25-803f-4668-9a96-d539108d2527-console-serving-cert\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.640316 master-0 kubenswrapper[30278]: I0318 18:05:07.633660 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc445b25-803f-4668-9a96-d539108d2527-console-oauth-config\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.647362 master-0 kubenswrapper[30278]: I0318 18:05:07.646568 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbqtz\" (UniqueName: \"kubernetes.io/projected/bc445b25-803f-4668-9a96-d539108d2527-kube-api-access-bbqtz\") pod \"console-6b7657f69f-w666c\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:07.671446 master-0 kubenswrapper[30278]: I0318 18:05:07.671407 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89b1dfdf-4633-45af-8abd-931a76eca960","Type":"ContainerStarted","Data":"6c03f43b3340afa5c24d3ab2e54d55fa56552e844242ec0e6bb87ed344e23aed"} Mar 18 18:05:07.671996 master-0 kubenswrapper[30278]: I0318 18:05:07.671979 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89b1dfdf-4633-45af-8abd-931a76eca960","Type":"ContainerStarted","Data":"0b35e279dda5a722efe795c0143026c8b448b1734eaac9f3c72eac823353df90"} Mar 18 18:05:07.713793 master-0 kubenswrapper[30278]: I0318 18:05:07.713694 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=253.006213217 podStartE2EDuration="4m18.713675034s" podCreationTimestamp="2026-03-18 18:00:49 +0000 UTC" firstStartedPulling="2026-03-18 18:05:00.116706841 +0000 UTC m=+269.283891436" lastFinishedPulling="2026-03-18 18:05:05.824168658 +0000 UTC m=+274.991353253" observedRunningTime="2026-03-18 18:05:07.709041889 +0000 UTC m=+276.876226504" watchObservedRunningTime="2026-03-18 18:05:07.713675034 +0000 UTC m=+276.880859629" Mar 18 18:05:07.729450 master-0 kubenswrapper[30278]: I0318 18:05:07.729386 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:08.216994 master-0 kubenswrapper[30278]: I0318 18:05:08.216546 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6b7657f69f-w666c"] Mar 18 18:05:10.702446 master-0 kubenswrapper[30278]: I0318 18:05:10.702411 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c6aeb7b-9c05-470e-b31f-f4154aadf170","Type":"ContainerStarted","Data":"9f353111ed2ff9d55f736938f996eecff5f8bf842f96ab8decc0cca74464a5d6"} Mar 18 18:05:10.702906 master-0 kubenswrapper[30278]: I0318 18:05:10.702456 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c6aeb7b-9c05-470e-b31f-f4154aadf170","Type":"ContainerStarted","Data":"fbf00d88b8f5f234c5616d73c233119048706129aff08fe14ae0fd745e851f31"} Mar 18 18:05:10.702906 master-0 kubenswrapper[30278]: I0318 18:05:10.702467 30278 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c6aeb7b-9c05-470e-b31f-f4154aadf170","Type":"ContainerStarted","Data":"3e7e5ed2596bcbac2ab91756313741ea4d24ac563598d8ab914b212f1f0abaec"} Mar 18 18:05:10.702906 master-0 kubenswrapper[30278]: I0318 18:05:10.702479 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c6aeb7b-9c05-470e-b31f-f4154aadf170","Type":"ContainerStarted","Data":"68e21c4284a08f52d019f354cb231dfaadd6758a8d35cc21c74f3c5191f9ed50"} Mar 18 18:05:10.704091 master-0 kubenswrapper[30278]: I0318 18:05:10.704068 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b7657f69f-w666c" event={"ID":"bc445b25-803f-4668-9a96-d539108d2527","Type":"ContainerStarted","Data":"a7fbd13c897d2bdfe694281f979b8537a87069cb5f00fe4155043737949583e5"} Mar 18 18:05:11.714070 master-0 kubenswrapper[30278]: I0318 18:05:11.714010 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c6aeb7b-9c05-470e-b31f-f4154aadf170","Type":"ContainerStarted","Data":"e05f4784ce7ed803e04b81bf5155c163626dc7bb5a2b519d1e6ad4d4be64ffcb"} Mar 18 18:05:11.714070 master-0 kubenswrapper[30278]: I0318 18:05:11.714078 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c6aeb7b-9c05-470e-b31f-f4154aadf170","Type":"ContainerStarted","Data":"fb690aadcc1a5dfcc8a6cf73791cc8218f074fece27ca8193b4a729b5036736e"} Mar 18 18:05:11.764517 master-0 kubenswrapper[30278]: I0318 18:05:11.764367 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=254.551926572 podStartE2EDuration="4m18.76434531s" podCreationTimestamp="2026-03-18 18:00:53 +0000 UTC" firstStartedPulling="2026-03-18 18:05:05.649153921 +0000 UTC m=+274.816338516" lastFinishedPulling="2026-03-18 18:05:09.861572659 +0000 UTC 
m=+279.028757254" observedRunningTime="2026-03-18 18:05:11.756761357 +0000 UTC m=+280.923945972" watchObservedRunningTime="2026-03-18 18:05:11.76434531 +0000 UTC m=+280.931529915" Mar 18 18:05:13.597097 master-0 kubenswrapper[30278]: I0318 18:05:13.596968 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:05:14.760720 master-0 kubenswrapper[30278]: I0318 18:05:14.760635 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b7657f69f-w666c" event={"ID":"bc445b25-803f-4668-9a96-d539108d2527","Type":"ContainerStarted","Data":"3f98d79ffc4daa3f60484057dea724bfba2f41b2060e80bf35923e2ec901080d"} Mar 18 18:05:14.798753 master-0 kubenswrapper[30278]: I0318 18:05:14.798596 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6b7657f69f-w666c" podStartSLOduration=3.8682769219999997 podStartE2EDuration="7.798561674s" podCreationTimestamp="2026-03-18 18:05:07 +0000 UTC" firstStartedPulling="2026-03-18 18:05:09.807628314 +0000 UTC m=+278.974812909" lastFinishedPulling="2026-03-18 18:05:13.737913066 +0000 UTC m=+282.905097661" observedRunningTime="2026-03-18 18:05:14.795625735 +0000 UTC m=+283.962810340" watchObservedRunningTime="2026-03-18 18:05:14.798561674 +0000 UTC m=+283.965746269" Mar 18 18:05:16.364887 master-0 kubenswrapper[30278]: I0318 18:05:16.364809 30278 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 18:05:16.365733 master-0 kubenswrapper[30278]: I0318 18:05:16.365141 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://b6f2e9aac67fef6d9cd60fe1d8d223b7762a7baf5bd08f250b7e213146055132" gracePeriod=30 Mar 18 
18:05:16.365733 master-0 kubenswrapper[30278]: I0318 18:05:16.365218 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" containerID="cri-o://3c51974ba55ce77de4db6060fda42dd205fc3b6d69ff15656f21b3a7b488ddc3" gracePeriod=30 Mar 18 18:05:16.365733 master-0 kubenswrapper[30278]: I0318 18:05:16.365402 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://5e9de81daca56e7a14e9bb6ed5c647f47dd366c571087c15f6fae5baeebccd1e" gracePeriod=30 Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 18:05:16.366118 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager" containerID="cri-o://af3223d37de441a43e2bb9840f2c7d68ed9137889a1d1026233d1692393573ca" gracePeriod=30 Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 18:05:16.367459 30278 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: E0318 18:05:16.367926 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 18:05:16.367942 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: E0318 18:05:16.367969 30278 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager-cert-syncer" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 18:05:16.367976 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager-cert-syncer" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: E0318 18:05:16.367991 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager-recovery-controller" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 18:05:16.367997 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager-recovery-controller" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: E0318 18:05:16.368017 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager-cert-syncer" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 18:05:16.368023 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager-cert-syncer" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: E0318 18:05:16.368041 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 18:05:16.368047 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: E0318 18:05:16.368085 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 
18:05:16.368092 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: E0318 18:05:16.368118 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 18:05:16.368123 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 18:05:16.368307 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 18:05:16.368321 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager-cert-syncer" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 18:05:16.368342 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 18:05:16.368390 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 18:05:16.368405 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 18:05:16.368419 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager-recovery-controller" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: 
I0318 18:05:16.368439 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager-cert-syncer" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: E0318 18:05:16.368585 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager" Mar 18 18:05:16.368558 master-0 kubenswrapper[30278]: I0318 18:05:16.368594 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="kube-controller-manager" Mar 18 18:05:16.369378 master-0 kubenswrapper[30278]: I0318 18:05:16.368728 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b3363934623637fdc1d37ff8b16880a" containerName="cluster-policy-controller" Mar 18 18:05:16.516491 master-0 kubenswrapper[30278]: I0318 18:05:16.516407 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/efc76217af9e7119e39d2455d00c223f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"efc76217af9e7119e39d2455d00c223f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:05:16.516685 master-0 kubenswrapper[30278]: I0318 18:05:16.516501 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/efc76217af9e7119e39d2455d00c223f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"efc76217af9e7119e39d2455d00c223f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:05:16.548037 master-0 kubenswrapper[30278]: I0318 18:05:16.547991 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager-cert-syncer/1.log" Mar 
18 18:05:16.548732 master-0 kubenswrapper[30278]: I0318 18:05:16.548697 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/3.log" Mar 18 18:05:16.549460 master-0 kubenswrapper[30278]: I0318 18:05:16.549414 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/1.log" Mar 18 18:05:16.550634 master-0 kubenswrapper[30278]: I0318 18:05:16.550603 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager-cert-syncer/0.log" Mar 18 18:05:16.550758 master-0 kubenswrapper[30278]: I0318 18:05:16.550727 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:05:16.555381 master-0 kubenswrapper[30278]: I0318 18:05:16.554754 30278 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="3b3363934623637fdc1d37ff8b16880a" podUID="efc76217af9e7119e39d2455d00c223f" Mar 18 18:05:16.618551 master-0 kubenswrapper[30278]: I0318 18:05:16.618292 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-resource-dir\") pod \"3b3363934623637fdc1d37ff8b16880a\" (UID: \"3b3363934623637fdc1d37ff8b16880a\") " Mar 18 18:05:16.618840 master-0 kubenswrapper[30278]: I0318 18:05:16.618565 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-cert-dir\") pod 
\"3b3363934623637fdc1d37ff8b16880a\" (UID: \"3b3363934623637fdc1d37ff8b16880a\") " Mar 18 18:05:16.618840 master-0 kubenswrapper[30278]: I0318 18:05:16.618667 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3b3363934623637fdc1d37ff8b16880a" (UID: "3b3363934623637fdc1d37ff8b16880a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:05:16.618840 master-0 kubenswrapper[30278]: I0318 18:05:16.618744 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3b3363934623637fdc1d37ff8b16880a" (UID: "3b3363934623637fdc1d37ff8b16880a"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:05:16.619066 master-0 kubenswrapper[30278]: I0318 18:05:16.619031 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/efc76217af9e7119e39d2455d00c223f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"efc76217af9e7119e39d2455d00c223f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:05:16.619111 master-0 kubenswrapper[30278]: I0318 18:05:16.619078 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/efc76217af9e7119e39d2455d00c223f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"efc76217af9e7119e39d2455d00c223f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:05:16.619211 master-0 kubenswrapper[30278]: I0318 18:05:16.619184 30278 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:05:16.619251 master-0 kubenswrapper[30278]: I0318 18:05:16.619215 30278 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3b3363934623637fdc1d37ff8b16880a-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:05:16.619251 master-0 kubenswrapper[30278]: I0318 18:05:16.619182 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/efc76217af9e7119e39d2455d00c223f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"efc76217af9e7119e39d2455d00c223f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:05:16.619338 master-0 kubenswrapper[30278]: I0318 18:05:16.619306 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/efc76217af9e7119e39d2455d00c223f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"efc76217af9e7119e39d2455d00c223f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:05:16.791718 master-0 kubenswrapper[30278]: I0318 18:05:16.791661 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager-cert-syncer/1.log" Mar 18 18:05:16.792303 master-0 kubenswrapper[30278]: I0318 18:05:16.792262 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/cluster-policy-controller/3.log" Mar 18 18:05:16.792925 master-0 kubenswrapper[30278]: I0318 18:05:16.792904 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager/1.log" Mar 18 18:05:16.794642 master-0 kubenswrapper[30278]: I0318 18:05:16.794621 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager-cert-syncer/0.log" Mar 18 18:05:16.794730 master-0 kubenswrapper[30278]: I0318 18:05:16.794702 30278 generic.go:334] "Generic (PLEG): container finished" podID="3b3363934623637fdc1d37ff8b16880a" containerID="af3223d37de441a43e2bb9840f2c7d68ed9137889a1d1026233d1692393573ca" exitCode=0 Mar 18 18:05:16.794799 master-0 kubenswrapper[30278]: I0318 18:05:16.794729 30278 generic.go:334] "Generic (PLEG): container finished" podID="3b3363934623637fdc1d37ff8b16880a" containerID="3c51974ba55ce77de4db6060fda42dd205fc3b6d69ff15656f21b3a7b488ddc3" exitCode=0 Mar 18 18:05:16.794799 master-0 kubenswrapper[30278]: I0318 18:05:16.794739 30278 generic.go:334] "Generic (PLEG): container finished" podID="3b3363934623637fdc1d37ff8b16880a" containerID="5e9de81daca56e7a14e9bb6ed5c647f47dd366c571087c15f6fae5baeebccd1e" exitCode=2 Mar 18 18:05:16.794799 master-0 kubenswrapper[30278]: I0318 18:05:16.794751 30278 generic.go:334] "Generic (PLEG): container finished" podID="3b3363934623637fdc1d37ff8b16880a" containerID="b6f2e9aac67fef6d9cd60fe1d8d223b7762a7baf5bd08f250b7e213146055132" exitCode=0 Mar 18 18:05:16.794911 master-0 kubenswrapper[30278]: I0318 18:05:16.794809 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:05:16.794990 master-0 kubenswrapper[30278]: I0318 18:05:16.794917 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="989b2fdc7ef152f9acfe916ef0d60955c7235426f6fe1dd2fc891166e785e105" Mar 18 18:05:16.794990 master-0 kubenswrapper[30278]: I0318 18:05:16.794966 30278 scope.go:117] "RemoveContainer" containerID="522b734ad03d049a879cfa7a8145e3b81a8d9061164b95712992e2f7f7b61d1d" Mar 18 18:05:16.797030 master-0 kubenswrapper[30278]: I0318 18:05:16.797003 30278 generic.go:334] "Generic (PLEG): container finished" podID="bcb7afbe-78dc-4d07-aa56-123aeceabcd6" containerID="d5960f392b00010ed91f8e6d7501b2845219dda833cadabbb7a7e6771bd6f9af" exitCode=0 Mar 18 18:05:16.797122 master-0 kubenswrapper[30278]: I0318 18:05:16.797072 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"bcb7afbe-78dc-4d07-aa56-123aeceabcd6","Type":"ContainerDied","Data":"d5960f392b00010ed91f8e6d7501b2845219dda833cadabbb7a7e6771bd6f9af"} Mar 18 18:05:16.799932 master-0 kubenswrapper[30278]: I0318 18:05:16.799878 30278 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="3b3363934623637fdc1d37ff8b16880a" podUID="efc76217af9e7119e39d2455d00c223f" Mar 18 18:05:16.813055 master-0 kubenswrapper[30278]: I0318 18:05:16.812998 30278 scope.go:117] "RemoveContainer" containerID="230861fc75c4cf91f00521990347ee4c6eaab66ee62a9284086ae7fb81bebad6" Mar 18 18:05:16.840978 master-0 kubenswrapper[30278]: I0318 18:05:16.840906 30278 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="3b3363934623637fdc1d37ff8b16880a" podUID="efc76217af9e7119e39d2455d00c223f" Mar 18 
18:05:16.851186 master-0 kubenswrapper[30278]: I0318 18:05:16.851118 30278 scope.go:117] "RemoveContainer" containerID="243a7398c383ba8c402d23dcf0f7c5b93b0d9dae2f29d0c0170f8b972de06495" Mar 18 18:05:17.063660 master-0 kubenswrapper[30278]: I0318 18:05:17.063600 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b3363934623637fdc1d37ff8b16880a" path="/var/lib/kubelet/pods/3b3363934623637fdc1d37ff8b16880a/volumes" Mar 18 18:05:17.731324 master-0 kubenswrapper[30278]: I0318 18:05:17.730445 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:17.731324 master-0 kubenswrapper[30278]: I0318 18:05:17.730497 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:17.736898 master-0 kubenswrapper[30278]: I0318 18:05:17.736713 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:17.812094 master-0 kubenswrapper[30278]: I0318 18:05:17.812036 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_3b3363934623637fdc1d37ff8b16880a/kube-controller-manager-cert-syncer/1.log" Mar 18 18:05:17.817484 master-0 kubenswrapper[30278]: I0318 18:05:17.817440 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:05:18.152091 master-0 kubenswrapper[30278]: I0318 18:05:18.152041 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 18 18:05:18.253218 master-0 kubenswrapper[30278]: I0318 18:05:18.253176 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-var-lock\") pod \"bcb7afbe-78dc-4d07-aa56-123aeceabcd6\" (UID: \"bcb7afbe-78dc-4d07-aa56-123aeceabcd6\") "
Mar 18 18:05:18.253482 master-0 kubenswrapper[30278]: I0318 18:05:18.253293 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-var-lock" (OuterVolumeSpecName: "var-lock") pod "bcb7afbe-78dc-4d07-aa56-123aeceabcd6" (UID: "bcb7afbe-78dc-4d07-aa56-123aeceabcd6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:05:18.253482 master-0 kubenswrapper[30278]: I0318 18:05:18.253348 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-kube-api-access\") pod \"bcb7afbe-78dc-4d07-aa56-123aeceabcd6\" (UID: \"bcb7afbe-78dc-4d07-aa56-123aeceabcd6\") "
Mar 18 18:05:18.253482 master-0 kubenswrapper[30278]: I0318 18:05:18.253419 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-kubelet-dir\") pod \"bcb7afbe-78dc-4d07-aa56-123aeceabcd6\" (UID: \"bcb7afbe-78dc-4d07-aa56-123aeceabcd6\") "
Mar 18 18:05:18.253656 master-0 kubenswrapper[30278]: I0318 18:05:18.253507 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bcb7afbe-78dc-4d07-aa56-123aeceabcd6" (UID: "bcb7afbe-78dc-4d07-aa56-123aeceabcd6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:05:18.253756 master-0 kubenswrapper[30278]: I0318 18:05:18.253735 30278 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 18:05:18.253756 master-0 kubenswrapper[30278]: I0318 18:05:18.253755 30278 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 18:05:18.256108 master-0 kubenswrapper[30278]: I0318 18:05:18.256061 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bcb7afbe-78dc-4d07-aa56-123aeceabcd6" (UID: "bcb7afbe-78dc-4d07-aa56-123aeceabcd6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:05:18.354928 master-0 kubenswrapper[30278]: I0318 18:05:18.354790 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bcb7afbe-78dc-4d07-aa56-123aeceabcd6-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 18:05:18.823448 master-0 kubenswrapper[30278]: I0318 18:05:18.823383 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 18 18:05:18.825562 master-0 kubenswrapper[30278]: I0318 18:05:18.825487 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"bcb7afbe-78dc-4d07-aa56-123aeceabcd6","Type":"ContainerDied","Data":"7ffee97df1c282edf05c027302408e950cb4b6cb2488fedc14cdf015b9f40549"}
Mar 18 18:05:18.825656 master-0 kubenswrapper[30278]: I0318 18:05:18.825560 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ffee97df1c282edf05c027302408e950cb4b6cb2488fedc14cdf015b9f40549"
Mar 18 18:05:29.054146 master-0 kubenswrapper[30278]: I0318 18:05:29.054074 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 18:05:29.078227 master-0 kubenswrapper[30278]: I0318 18:05:29.078175 30278 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="6986e259-61e4-4d46-86cb-e7562cc63679"
Mar 18 18:05:29.078227 master-0 kubenswrapper[30278]: I0318 18:05:29.078221 30278 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="6986e259-61e4-4d46-86cb-e7562cc63679"
Mar 18 18:05:29.689968 master-0 kubenswrapper[30278]: I0318 18:05:29.689895 30278 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 18:05:29.697451 master-0 kubenswrapper[30278]: I0318 18:05:29.695809 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 18:05:29.703027 master-0 kubenswrapper[30278]: I0318 18:05:29.702976 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 18:05:29.703220 master-0 kubenswrapper[30278]: I0318 18:05:29.702985 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 18:05:29.709755 master-0 kubenswrapper[30278]: I0318 18:05:29.709704 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 18:05:31.313685 master-0 kubenswrapper[30278]: I0318 18:05:31.312587 30278 scope.go:117] "RemoveContainer" containerID="3c51974ba55ce77de4db6060fda42dd205fc3b6d69ff15656f21b3a7b488ddc3"
Mar 18 18:05:36.098496 master-0 kubenswrapper[30278]: I0318 18:05:36.097691 30278 scope.go:117] "RemoveContainer" containerID="b6f2e9aac67fef6d9cd60fe1d8d223b7762a7baf5bd08f250b7e213146055132"
Mar 18 18:05:36.149351 master-0 kubenswrapper[30278]: I0318 18:05:36.149310 30278 scope.go:117] "RemoveContainer" containerID="5e9de81daca56e7a14e9bb6ed5c647f47dd366c571087c15f6fae5baeebccd1e"
Mar 18 18:05:36.187034 master-0 kubenswrapper[30278]: W0318 18:05:36.186986 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefc76217af9e7119e39d2455d00c223f.slice/crio-bf25ed1be4c3abef2ee86d44fadd6095dc54deb721dd3c3546ed28b136e56926 WatchSource:0}: Error finding container bf25ed1be4c3abef2ee86d44fadd6095dc54deb721dd3c3546ed28b136e56926: Status 404 returned error can't find the container with id bf25ed1be4c3abef2ee86d44fadd6095dc54deb721dd3c3546ed28b136e56926
Mar 18 18:05:36.251618 master-0 kubenswrapper[30278]: I0318 18:05:36.251585 30278 kubelet.go:1505] "Image garbage collection succeeded"
Mar 18 18:05:36.969913 master-0 kubenswrapper[30278]: I0318 18:05:36.969760 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"efc76217af9e7119e39d2455d00c223f","Type":"ContainerStarted","Data":"346470c7e231870f2c02c668d780fdbc24cd909efb0248742f57a63237119f4a"}
Mar 18 18:05:36.969913 master-0 kubenswrapper[30278]: I0318 18:05:36.969827 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"efc76217af9e7119e39d2455d00c223f","Type":"ContainerStarted","Data":"498a5c57b90053a76dc039b2bff8526c3d09fbb3c0193932a4070bb49e9eec20"}
Mar 18 18:05:36.969913 master-0 kubenswrapper[30278]: I0318 18:05:36.969843 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"efc76217af9e7119e39d2455d00c223f","Type":"ContainerStarted","Data":"bf25ed1be4c3abef2ee86d44fadd6095dc54deb721dd3c3546ed28b136e56926"}
Mar 18 18:05:36.990889 master-0 kubenswrapper[30278]: I0318 18:05:36.975422 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-5ftpz" event={"ID":"1c86ad24-b858-4dfa-802b-f4799093ffc0","Type":"ContainerStarted","Data":"6f5d2123d281242ec02a5b0e1be63d71e3cde1e198c35228543683eac99c6b94"}
Mar 18 18:05:36.990889 master-0 kubenswrapper[30278]: I0318 18:05:36.976576 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-66b8ffb895-5ftpz"
Mar 18 18:05:36.990889 master-0 kubenswrapper[30278]: I0318 18:05:36.979407 30278 patch_prober.go:28] interesting pod/downloads-66b8ffb895-5ftpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.100:8080/\": dial tcp 10.128.0.100:8080: connect: connection refused" start-of-body=
Mar 18 18:05:36.990889 master-0 kubenswrapper[30278]: I0318 18:05:36.979474 30278 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-5ftpz" podUID="1c86ad24-b858-4dfa-802b-f4799093ffc0" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.100:8080/\": dial tcp 10.128.0.100:8080: connect: connection refused"
Mar 18 18:05:37.223835 master-0 kubenswrapper[30278]: I0318 18:05:37.222511 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-66b8ffb895-5ftpz" podStartSLOduration=1.8102898870000002 podStartE2EDuration="37.222491528s" podCreationTimestamp="2026-03-18 18:05:00 +0000 UTC" firstStartedPulling="2026-03-18 18:05:00.915500663 +0000 UTC m=+270.082685258" lastFinishedPulling="2026-03-18 18:05:36.327702294 +0000 UTC m=+305.494886899" observedRunningTime="2026-03-18 18:05:37.222154969 +0000 UTC m=+306.389339604" watchObservedRunningTime="2026-03-18 18:05:37.222491528 +0000 UTC m=+306.389676123"
Mar 18 18:05:37.288346 master-0 kubenswrapper[30278]: I0318 18:05:37.288249 30278 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 18:05:37.289351 master-0 kubenswrapper[30278]: E0318 18:05:37.288824 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcb7afbe-78dc-4d07-aa56-123aeceabcd6" containerName="installer"
Mar 18 18:05:37.289351 master-0 kubenswrapper[30278]: I0318 18:05:37.288854 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcb7afbe-78dc-4d07-aa56-123aeceabcd6" containerName="installer"
Mar 18 18:05:37.289351 master-0 kubenswrapper[30278]: I0318 18:05:37.289114 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcb7afbe-78dc-4d07-aa56-123aeceabcd6" containerName="installer"
Mar 18 18:05:37.289839 master-0 kubenswrapper[30278]: I0318 18:05:37.289797 30278 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 18:05:37.290037 master-0 kubenswrapper[30278]: I0318 18:05:37.289981 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.290827 master-0 kubenswrapper[30278]: I0318 18:05:37.290418 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver" containerID="cri-o://7b427131e95fd979592c63eb34266da7d198f926abfcde0b8ee234a58853d777" gracePeriod=15
Mar 18 18:05:37.290827 master-0 kubenswrapper[30278]: I0318 18:05:37.290476 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-check-endpoints" containerID="cri-o://68fb85f8feb2a71df37393892fcf105fc63c56dbee37f985ec890269695bb794" gracePeriod=15
Mar 18 18:05:37.290827 master-0 kubenswrapper[30278]: I0318 18:05:37.290555 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5b6a79a53410ef83a62b132c09123cd7abc8eea60c8f418d1e3cd0c240ba8daf" gracePeriod=15
Mar 18 18:05:37.290827 master-0 kubenswrapper[30278]: I0318 18:05:37.290625 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://dce39b5eebaddceba8d0657c39677709811457bcc91100f5c8c39f9349dd78e3" gracePeriod=15
Mar 18 18:05:37.290827 master-0 kubenswrapper[30278]: I0318 18:05:37.290653 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-syncer" containerID="cri-o://140e2775d62d699fb155d42bd8fb1998bd76b938f609b8d9d008ddd344c42a4f" gracePeriod=15
Mar 18 18:05:37.291492 master-0 kubenswrapper[30278]: I0318 18:05:37.291184 30278 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 18:05:37.292590 master-0 kubenswrapper[30278]: E0318 18:05:37.292546 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver"
Mar 18 18:05:37.292590 master-0 kubenswrapper[30278]: I0318 18:05:37.292581 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver"
Mar 18 18:05:37.292868 master-0 kubenswrapper[30278]: E0318 18:05:37.292617 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-check-endpoints"
Mar 18 18:05:37.292868 master-0 kubenswrapper[30278]: I0318 18:05:37.292628 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-check-endpoints"
Mar 18 18:05:37.292868 master-0 kubenswrapper[30278]: E0318 18:05:37.292640 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="setup"
Mar 18 18:05:37.292868 master-0 kubenswrapper[30278]: I0318 18:05:37.292647 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="setup"
Mar 18 18:05:37.292868 master-0 kubenswrapper[30278]: E0318 18:05:37.292659 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-insecure-readyz"
Mar 18 18:05:37.292868 master-0 kubenswrapper[30278]: I0318 18:05:37.292665 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-insecure-readyz"
Mar 18 18:05:37.292868 master-0 kubenswrapper[30278]: E0318 18:05:37.292691 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-syncer"
Mar 18 18:05:37.292868 master-0 kubenswrapper[30278]: I0318 18:05:37.292697 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-syncer"
Mar 18 18:05:37.292868 master-0 kubenswrapper[30278]: E0318 18:05:37.292711 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-regeneration-controller"
Mar 18 18:05:37.292868 master-0 kubenswrapper[30278]: I0318 18:05:37.292718 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-regeneration-controller"
Mar 18 18:05:37.292868 master-0 kubenswrapper[30278]: I0318 18:05:37.292837 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-syncer"
Mar 18 18:05:37.292868 master-0 kubenswrapper[30278]: I0318 18:05:37.292858 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-insecure-readyz"
Mar 18 18:05:37.292868 master-0 kubenswrapper[30278]: I0318 18:05:37.292882 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-check-endpoints"
Mar 18 18:05:37.292868 master-0 kubenswrapper[30278]: I0318 18:05:37.292900 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver"
Mar 18 18:05:37.294135 master-0 kubenswrapper[30278]: I0318 18:05:37.292918 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f502b117c7c8479f7f20848a50fec0" containerName="kube-apiserver-cert-regeneration-controller"
Mar 18 18:05:37.413624 master-0 kubenswrapper[30278]: I0318 18:05:37.413513 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.413880 master-0 kubenswrapper[30278]: I0318 18:05:37.413640 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.413880 master-0 kubenswrapper[30278]: I0318 18:05:37.413716 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.413880 master-0 kubenswrapper[30278]: I0318 18:05:37.413762 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.413880 master-0 kubenswrapper[30278]: I0318 18:05:37.413825 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 18:05:37.413880 master-0 kubenswrapper[30278]: I0318 18:05:37.413872 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 18:05:37.414105 master-0 kubenswrapper[30278]: I0318 18:05:37.413932 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.414105 master-0 kubenswrapper[30278]: I0318 18:05:37.413991 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 18:05:37.515144 master-0 kubenswrapper[30278]: I0318 18:05:37.515087 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.515144 master-0 kubenswrapper[30278]: I0318 18:05:37.515145 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.515407 master-0 kubenswrapper[30278]: I0318 18:05:37.515256 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.515407 master-0 kubenswrapper[30278]: I0318 18:05:37.515335 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.515407 master-0 kubenswrapper[30278]: I0318 18:05:37.515384 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.515494 master-0 kubenswrapper[30278]: I0318 18:05:37.515414 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.515494 master-0 kubenswrapper[30278]: I0318 18:05:37.515352 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.515494 master-0 kubenswrapper[30278]: I0318 18:05:37.515467 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 18:05:37.515494 master-0 kubenswrapper[30278]: I0318 18:05:37.515479 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.515629 master-0 kubenswrapper[30278]: I0318 18:05:37.515527 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 18:05:37.515629 master-0 kubenswrapper[30278]: I0318 18:05:37.515597 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 18:05:37.515629 master-0 kubenswrapper[30278]: I0318 18:05:37.515599 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.515722 master-0 kubenswrapper[30278]: I0318 18:05:37.515625 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.515722 master-0 kubenswrapper[30278]: I0318 18:05:37.515658 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 18:05:37.515722 master-0 kubenswrapper[30278]: I0318 18:05:37.515691 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 18:05:37.515860 master-0 kubenswrapper[30278]: I0318 18:05:37.515832 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 18:05:37.933526 master-0 kubenswrapper[30278]: I0318 18:05:37.933453 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 18:05:37.940050 master-0 kubenswrapper[30278]: I0318 18:05:37.939761 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 18:05:37.999866 master-0 kubenswrapper[30278]: I0318 18:05:37.999825 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver-cert-syncer/0.log"
Mar 18 18:05:38.000679 master-0 kubenswrapper[30278]: I0318 18:05:38.000612 30278 generic.go:334] "Generic (PLEG): container finished" podID="d5f502b117c7c8479f7f20848a50fec0" containerID="68fb85f8feb2a71df37393892fcf105fc63c56dbee37f985ec890269695bb794" exitCode=0
Mar 18 18:05:38.000679 master-0 kubenswrapper[30278]: I0318 18:05:38.000662 30278 generic.go:334] "Generic (PLEG): container finished" podID="d5f502b117c7c8479f7f20848a50fec0" containerID="dce39b5eebaddceba8d0657c39677709811457bcc91100f5c8c39f9349dd78e3" exitCode=0
Mar 18 18:05:38.000679 master-0 kubenswrapper[30278]: I0318 18:05:38.000674 30278 generic.go:334] "Generic (PLEG): container finished" podID="d5f502b117c7c8479f7f20848a50fec0" containerID="5b6a79a53410ef83a62b132c09123cd7abc8eea60c8f418d1e3cd0c240ba8daf" exitCode=0
Mar 18 18:05:38.000679 master-0 kubenswrapper[30278]: I0318 18:05:38.000683 30278 generic.go:334] "Generic (PLEG): container finished" podID="d5f502b117c7c8479f7f20848a50fec0" containerID="140e2775d62d699fb155d42bd8fb1998bd76b938f609b8d9d008ddd344c42a4f" exitCode=2
Mar 18 18:05:38.002593 master-0 kubenswrapper[30278]: I0318 18:05:38.002570 30278 generic.go:334] "Generic (PLEG): container finished" podID="257339d9-4efe-4659-ae45-5c1fee5ebba7" containerID="901657905b63db2a204c2de049eb9b41c990c2f199f3af7731ee16f58b659483" exitCode=0
Mar 18 18:05:38.002665 master-0 kubenswrapper[30278]: I0318 18:05:38.002629 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"257339d9-4efe-4659-ae45-5c1fee5ebba7","Type":"ContainerDied","Data":"901657905b63db2a204c2de049eb9b41c990c2f199f3af7731ee16f58b659483"}
Mar 18 18:05:38.005883 master-0 kubenswrapper[30278]: I0318 18:05:38.005848 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"efc76217af9e7119e39d2455d00c223f","Type":"ContainerStarted","Data":"31da287ae2ee280ceb25c6d586c08cddceb6988bdd57a314f7a80a3ffba9a2ae"}
Mar 18 18:05:38.005984 master-0 kubenswrapper[30278]: I0318 18:05:38.005969 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"efc76217af9e7119e39d2455d00c223f","Type":"ContainerStarted","Data":"b58573729d641d7e86f1ec2365e091375bd8cf625b0a9697be4ea6b82ebe135b"}
Mar 18 18:05:38.006481 master-0 kubenswrapper[30278]: I0318 18:05:38.006445 30278 patch_prober.go:28] interesting pod/downloads-66b8ffb895-5ftpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.100:8080/\": dial tcp 10.128.0.100:8080: connect: connection refused" start-of-body=
Mar 18 18:05:38.006553 master-0 kubenswrapper[30278]: I0318 18:05:38.006514 30278 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-5ftpz" podUID="1c86ad24-b858-4dfa-802b-f4799093ffc0" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.100:8080/\": dial tcp 10.128.0.100:8080: connect: connection refused"
Mar 18 18:05:39.026228 master-0 kubenswrapper[30278]: I0318 18:05:39.026041 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebbfbf2b56df0323ba118d68bfdad8b9","Type":"ContainerStarted","Data":"f9cbfe10dad1ec0cd2a3c9c1167829f36cbd4f5345a6cf76d79054d13a9a468a"}
Mar 18 18:05:39.026228 master-0 kubenswrapper[30278]: I0318 18:05:39.026121 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebbfbf2b56df0323ba118d68bfdad8b9","Type":"ContainerStarted","Data":"992c31277387c88fb6e97dfcddaae05e380cdb52fb08c987c3ea8aabbcfece15"}
Mar 18 18:05:39.028697 master-0 kubenswrapper[30278]: I0318 18:05:39.028652 30278 patch_prober.go:28] interesting pod/downloads-66b8ffb895-5ftpz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.100:8080/\": dial tcp 10.128.0.100:8080: connect: connection refused" start-of-body=
Mar 18 18:05:39.030169 master-0 kubenswrapper[30278]: I0318 18:05:39.028892 30278 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-5ftpz" podUID="1c86ad24-b858-4dfa-802b-f4799093ffc0" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.100:8080/\": dial tcp 10.128.0.100:8080: connect: connection refused"
Mar 18 18:05:39.472350 master-0 kubenswrapper[30278]: I0318 18:05:39.472254 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 18:05:39.486354 master-0 kubenswrapper[30278]: I0318 18:05:39.486282 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/257339d9-4efe-4659-ae45-5c1fee5ebba7-kube-api-access\") pod \"257339d9-4efe-4659-ae45-5c1fee5ebba7\" (UID: \"257339d9-4efe-4659-ae45-5c1fee5ebba7\") "
Mar 18 18:05:39.486511 master-0 kubenswrapper[30278]: I0318 18:05:39.486407 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/257339d9-4efe-4659-ae45-5c1fee5ebba7-kubelet-dir\") pod \"257339d9-4efe-4659-ae45-5c1fee5ebba7\" (UID: \"257339d9-4efe-4659-ae45-5c1fee5ebba7\") "
Mar 18 18:05:39.486511 master-0 kubenswrapper[30278]: I0318 18:05:39.486484 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/257339d9-4efe-4659-ae45-5c1fee5ebba7-var-lock\") pod \"257339d9-4efe-4659-ae45-5c1fee5ebba7\" (UID: \"257339d9-4efe-4659-ae45-5c1fee5ebba7\") "
Mar 18 18:05:39.486882 master-0 kubenswrapper[30278]: I0318 18:05:39.486762 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/257339d9-4efe-4659-ae45-5c1fee5ebba7-var-lock" (OuterVolumeSpecName: "var-lock") pod "257339d9-4efe-4659-ae45-5c1fee5ebba7" (UID: "257339d9-4efe-4659-ae45-5c1fee5ebba7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:05:39.487251 master-0 kubenswrapper[30278]: I0318 18:05:39.487211 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/257339d9-4efe-4659-ae45-5c1fee5ebba7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "257339d9-4efe-4659-ae45-5c1fee5ebba7" (UID: "257339d9-4efe-4659-ae45-5c1fee5ebba7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:05:39.492395 master-0 kubenswrapper[30278]: I0318 18:05:39.492348 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257339d9-4efe-4659-ae45-5c1fee5ebba7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "257339d9-4efe-4659-ae45-5c1fee5ebba7" (UID: "257339d9-4efe-4659-ae45-5c1fee5ebba7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:05:39.594960 master-0 kubenswrapper[30278]: I0318 18:05:39.587883 30278 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/257339d9-4efe-4659-ae45-5c1fee5ebba7-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 18:05:39.594960 master-0 kubenswrapper[30278]: I0318 18:05:39.587930 30278 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/257339d9-4efe-4659-ae45-5c1fee5ebba7-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 18:05:39.594960 master-0 kubenswrapper[30278]: I0318 18:05:39.587942 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/257339d9-4efe-4659-ae45-5c1fee5ebba7-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 18:05:39.704377 master-0 kubenswrapper[30278]: I0318 18:05:39.704060 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 18:05:39.704377 master-0 kubenswrapper[30278]: I0318 18:05:39.704142 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 18:05:39.704377 master-0 kubenswrapper[30278]: I0318 18:05:39.704164 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 18:05:39.704377 master-0 kubenswrapper[30278]: I0318 18:05:39.704234 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 18:05:39.705082 master-0 kubenswrapper[30278]: I0318 18:05:39.704505 30278 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 18 18:05:39.705082 master-0 kubenswrapper[30278]: I0318 18:05:39.704588 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 18 18:05:39.710432 master-0 kubenswrapper[30278]: I0318 18:05:39.710364 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 18:05:40.040266 master-0 kubenswrapper[30278]: I0318 18:05:40.040144 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 18:05:40.043240 master-0 kubenswrapper[30278]: I0318 18:05:40.043207 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"257339d9-4efe-4659-ae45-5c1fee5ebba7","Type":"ContainerDied","Data":"293213794c423395ab23e04b7ae8f93572e160c7c2f0f22ae50f7fafacdb250b"}
Mar 18 18:05:40.043323 master-0 kubenswrapper[30278]: I0318 18:05:40.043248 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="293213794c423395ab23e04b7ae8f93572e160c7c2f0f22ae50f7fafacdb250b"
Mar 18 18:05:40.474420 master-0 kubenswrapper[30278]: I0318 18:05:40.474335 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-66b8ffb895-5ftpz"
Mar 18 18:05:41.125721 master-0 kubenswrapper[30278]: E0318 18:05:41.125436 30278 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:41.127111 master-0 kubenswrapper[30278]: E0318 18:05:41.127027 30278 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:41.128155 master-0 kubenswrapper[30278]: E0318 18:05:41.128105 30278 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:41.128745 master-0 kubenswrapper[30278]: E0318 18:05:41.128692 30278 controller.go:195] "Failed to update lease" err="Put
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:41.129227 master-0 kubenswrapper[30278]: E0318 18:05:41.129185 30278 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:41.129315 master-0 kubenswrapper[30278]: I0318 18:05:41.129226 30278 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 18:05:41.130220 master-0 kubenswrapper[30278]: E0318 18:05:41.130168 30278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 18 18:05:41.333131 master-0 kubenswrapper[30278]: E0318 18:05:41.333012 30278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 18 18:05:41.677122 master-0 kubenswrapper[30278]: I0318 18:05:41.677061 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver-cert-syncer/0.log" Mar 18 18:05:41.678534 master-0 kubenswrapper[30278]: I0318 18:05:41.678487 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:05:41.736976 master-0 kubenswrapper[30278]: E0318 18:05:41.736889 30278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 18 18:05:41.823991 master-0 kubenswrapper[30278]: I0318 18:05:41.823925 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") pod \"d5f502b117c7c8479f7f20848a50fec0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " Mar 18 18:05:41.824227 master-0 kubenswrapper[30278]: I0318 18:05:41.824057 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d5f502b117c7c8479f7f20848a50fec0" (UID: "d5f502b117c7c8479f7f20848a50fec0"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:05:41.824227 master-0 kubenswrapper[30278]: I0318 18:05:41.824126 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") pod \"d5f502b117c7c8479f7f20848a50fec0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " Mar 18 18:05:41.824227 master-0 kubenswrapper[30278]: I0318 18:05:41.824168 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") pod \"d5f502b117c7c8479f7f20848a50fec0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " Mar 18 18:05:41.824227 master-0 kubenswrapper[30278]: I0318 18:05:41.824210 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "d5f502b117c7c8479f7f20848a50fec0" (UID: "d5f502b117c7c8479f7f20848a50fec0"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:05:41.824399 master-0 kubenswrapper[30278]: I0318 18:05:41.824292 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "d5f502b117c7c8479f7f20848a50fec0" (UID: "d5f502b117c7c8479f7f20848a50fec0"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:05:41.824523 master-0 kubenswrapper[30278]: I0318 18:05:41.824493 30278 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:05:41.824523 master-0 kubenswrapper[30278]: I0318 18:05:41.824515 30278 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:05:41.824674 master-0 kubenswrapper[30278]: I0318 18:05:41.824526 30278 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:05:42.089114 master-0 kubenswrapper[30278]: I0318 18:05:42.089032 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver-cert-syncer/0.log" Mar 18 18:05:42.090449 master-0 kubenswrapper[30278]: I0318 18:05:42.090364 30278 generic.go:334] "Generic (PLEG): container finished" podID="d5f502b117c7c8479f7f20848a50fec0" containerID="7b427131e95fd979592c63eb34266da7d198f926abfcde0b8ee234a58853d777" exitCode=0 Mar 18 18:05:42.090632 master-0 kubenswrapper[30278]: I0318 18:05:42.090518 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:05:42.090805 master-0 kubenswrapper[30278]: I0318 18:05:42.090527 30278 scope.go:117] "RemoveContainer" containerID="68fb85f8feb2a71df37393892fcf105fc63c56dbee37f985ec890269695bb794" Mar 18 18:05:42.115539 master-0 kubenswrapper[30278]: I0318 18:05:42.115498 30278 scope.go:117] "RemoveContainer" containerID="dce39b5eebaddceba8d0657c39677709811457bcc91100f5c8c39f9349dd78e3" Mar 18 18:05:42.141151 master-0 kubenswrapper[30278]: I0318 18:05:42.141096 30278 scope.go:117] "RemoveContainer" containerID="5b6a79a53410ef83a62b132c09123cd7abc8eea60c8f418d1e3cd0c240ba8daf" Mar 18 18:05:42.163255 master-0 kubenswrapper[30278]: I0318 18:05:42.163195 30278 scope.go:117] "RemoveContainer" containerID="140e2775d62d699fb155d42bd8fb1998bd76b938f609b8d9d008ddd344c42a4f" Mar 18 18:05:42.196175 master-0 kubenswrapper[30278]: I0318 18:05:42.195555 30278 scope.go:117] "RemoveContainer" containerID="7b427131e95fd979592c63eb34266da7d198f926abfcde0b8ee234a58853d777" Mar 18 18:05:42.221393 master-0 kubenswrapper[30278]: I0318 18:05:42.221363 30278 scope.go:117] "RemoveContainer" containerID="5c4eacc10202cc9c4d052ca71f707b3a4e64e4a9f45cdba64d88aad16bdfb5af" Mar 18 18:05:42.242074 master-0 kubenswrapper[30278]: I0318 18:05:42.242047 30278 scope.go:117] "RemoveContainer" containerID="68fb85f8feb2a71df37393892fcf105fc63c56dbee37f985ec890269695bb794" Mar 18 18:05:42.242587 master-0 kubenswrapper[30278]: E0318 18:05:42.242555 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68fb85f8feb2a71df37393892fcf105fc63c56dbee37f985ec890269695bb794\": container with ID starting with 68fb85f8feb2a71df37393892fcf105fc63c56dbee37f985ec890269695bb794 not found: ID does not exist" containerID="68fb85f8feb2a71df37393892fcf105fc63c56dbee37f985ec890269695bb794" Mar 18 18:05:42.242676 master-0 kubenswrapper[30278]: I0318 18:05:42.242582 30278 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68fb85f8feb2a71df37393892fcf105fc63c56dbee37f985ec890269695bb794"} err="failed to get container status \"68fb85f8feb2a71df37393892fcf105fc63c56dbee37f985ec890269695bb794\": rpc error: code = NotFound desc = could not find container \"68fb85f8feb2a71df37393892fcf105fc63c56dbee37f985ec890269695bb794\": container with ID starting with 68fb85f8feb2a71df37393892fcf105fc63c56dbee37f985ec890269695bb794 not found: ID does not exist" Mar 18 18:05:42.242676 master-0 kubenswrapper[30278]: I0318 18:05:42.242604 30278 scope.go:117] "RemoveContainer" containerID="dce39b5eebaddceba8d0657c39677709811457bcc91100f5c8c39f9349dd78e3" Mar 18 18:05:42.243062 master-0 kubenswrapper[30278]: E0318 18:05:42.243038 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dce39b5eebaddceba8d0657c39677709811457bcc91100f5c8c39f9349dd78e3\": container with ID starting with dce39b5eebaddceba8d0657c39677709811457bcc91100f5c8c39f9349dd78e3 not found: ID does not exist" containerID="dce39b5eebaddceba8d0657c39677709811457bcc91100f5c8c39f9349dd78e3" Mar 18 18:05:42.243062 master-0 kubenswrapper[30278]: I0318 18:05:42.243058 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dce39b5eebaddceba8d0657c39677709811457bcc91100f5c8c39f9349dd78e3"} err="failed to get container status \"dce39b5eebaddceba8d0657c39677709811457bcc91100f5c8c39f9349dd78e3\": rpc error: code = NotFound desc = could not find container \"dce39b5eebaddceba8d0657c39677709811457bcc91100f5c8c39f9349dd78e3\": container with ID starting with dce39b5eebaddceba8d0657c39677709811457bcc91100f5c8c39f9349dd78e3 not found: ID does not exist" Mar 18 18:05:42.243156 master-0 kubenswrapper[30278]: I0318 18:05:42.243070 30278 scope.go:117] "RemoveContainer" containerID="5b6a79a53410ef83a62b132c09123cd7abc8eea60c8f418d1e3cd0c240ba8daf" 
Mar 18 18:05:42.243403 master-0 kubenswrapper[30278]: E0318 18:05:42.243375 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b6a79a53410ef83a62b132c09123cd7abc8eea60c8f418d1e3cd0c240ba8daf\": container with ID starting with 5b6a79a53410ef83a62b132c09123cd7abc8eea60c8f418d1e3cd0c240ba8daf not found: ID does not exist" containerID="5b6a79a53410ef83a62b132c09123cd7abc8eea60c8f418d1e3cd0c240ba8daf"
Mar 18 18:05:42.243482 master-0 kubenswrapper[30278]: I0318 18:05:42.243402 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b6a79a53410ef83a62b132c09123cd7abc8eea60c8f418d1e3cd0c240ba8daf"} err="failed to get container status \"5b6a79a53410ef83a62b132c09123cd7abc8eea60c8f418d1e3cd0c240ba8daf\": rpc error: code = NotFound desc = could not find container \"5b6a79a53410ef83a62b132c09123cd7abc8eea60c8f418d1e3cd0c240ba8daf\": container with ID starting with 5b6a79a53410ef83a62b132c09123cd7abc8eea60c8f418d1e3cd0c240ba8daf not found: ID does not exist"
Mar 18 18:05:42.243482 master-0 kubenswrapper[30278]: I0318 18:05:42.243422 30278 scope.go:117] "RemoveContainer" containerID="140e2775d62d699fb155d42bd8fb1998bd76b938f609b8d9d008ddd344c42a4f"
Mar 18 18:05:42.243759 master-0 kubenswrapper[30278]: E0318 18:05:42.243732 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140e2775d62d699fb155d42bd8fb1998bd76b938f609b8d9d008ddd344c42a4f\": container with ID starting with 140e2775d62d699fb155d42bd8fb1998bd76b938f609b8d9d008ddd344c42a4f not found: ID does not exist" containerID="140e2775d62d699fb155d42bd8fb1998bd76b938f609b8d9d008ddd344c42a4f"
Mar 18 18:05:42.243819 master-0 kubenswrapper[30278]: I0318 18:05:42.243755 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140e2775d62d699fb155d42bd8fb1998bd76b938f609b8d9d008ddd344c42a4f"} err="failed to get container status \"140e2775d62d699fb155d42bd8fb1998bd76b938f609b8d9d008ddd344c42a4f\": rpc error: code = NotFound desc = could not find container \"140e2775d62d699fb155d42bd8fb1998bd76b938f609b8d9d008ddd344c42a4f\": container with ID starting with 140e2775d62d699fb155d42bd8fb1998bd76b938f609b8d9d008ddd344c42a4f not found: ID does not exist"
Mar 18 18:05:42.243819 master-0 kubenswrapper[30278]: I0318 18:05:42.243769 30278 scope.go:117] "RemoveContainer" containerID="7b427131e95fd979592c63eb34266da7d198f926abfcde0b8ee234a58853d777"
Mar 18 18:05:42.244163 master-0 kubenswrapper[30278]: E0318 18:05:42.244132 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b427131e95fd979592c63eb34266da7d198f926abfcde0b8ee234a58853d777\": container with ID starting with 7b427131e95fd979592c63eb34266da7d198f926abfcde0b8ee234a58853d777 not found: ID does not exist" containerID="7b427131e95fd979592c63eb34266da7d198f926abfcde0b8ee234a58853d777"
Mar 18 18:05:42.244231 master-0 kubenswrapper[30278]: I0318 18:05:42.244162 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b427131e95fd979592c63eb34266da7d198f926abfcde0b8ee234a58853d777"} err="failed to get container status \"7b427131e95fd979592c63eb34266da7d198f926abfcde0b8ee234a58853d777\": rpc error: code = NotFound desc = could not find container \"7b427131e95fd979592c63eb34266da7d198f926abfcde0b8ee234a58853d777\": container with ID starting with 7b427131e95fd979592c63eb34266da7d198f926abfcde0b8ee234a58853d777 not found: ID does not exist"
Mar 18 18:05:42.244231 master-0 kubenswrapper[30278]: I0318 18:05:42.244188 30278 scope.go:117] "RemoveContainer" containerID="5c4eacc10202cc9c4d052ca71f707b3a4e64e4a9f45cdba64d88aad16bdfb5af"
Mar 18 18:05:42.244770 master-0 kubenswrapper[30278]: E0318 18:05:42.244629 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c4eacc10202cc9c4d052ca71f707b3a4e64e4a9f45cdba64d88aad16bdfb5af\": container with ID starting with 5c4eacc10202cc9c4d052ca71f707b3a4e64e4a9f45cdba64d88aad16bdfb5af not found: ID does not exist" containerID="5c4eacc10202cc9c4d052ca71f707b3a4e64e4a9f45cdba64d88aad16bdfb5af"
Mar 18 18:05:42.244770 master-0 kubenswrapper[30278]: I0318 18:05:42.244679 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c4eacc10202cc9c4d052ca71f707b3a4e64e4a9f45cdba64d88aad16bdfb5af"} err="failed to get container status \"5c4eacc10202cc9c4d052ca71f707b3a4e64e4a9f45cdba64d88aad16bdfb5af\": rpc error: code = NotFound desc = could not find container \"5c4eacc10202cc9c4d052ca71f707b3a4e64e4a9f45cdba64d88aad16bdfb5af\": container with ID starting with 5c4eacc10202cc9c4d052ca71f707b3a4e64e4a9f45cdba64d88aad16bdfb5af not found: ID does not exist"
Mar 18 18:05:42.538367 master-0 kubenswrapper[30278]: E0318 18:05:42.538189 30278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 18 18:05:43.069025 master-0 kubenswrapper[30278]: I0318 18:05:43.068957 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5f502b117c7c8479f7f20848a50fec0" path="/var/lib/kubelet/pods/d5f502b117c7c8479f7f20848a50fec0/volumes"
Mar 18 18:05:43.407413 master-0 kubenswrapper[30278]: I0318 18:05:43.404412 30278 status_manager.go:851] "Failed to get status for pod" podUID="d5f502b117c7c8479f7f20848a50fec0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:43.407413 master-0 kubenswrapper[30278]: I0318 18:05:43.405687 30278 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:43.413412 master-0 kubenswrapper[30278]: I0318 18:05:43.408724 30278 status_manager.go:851] "Failed to get status for pod" podUID="efc76217af9e7119e39d2455d00c223f" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:43.413412 master-0 kubenswrapper[30278]: E0318 18:05:43.408700 30278 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.189e01ab067f0493 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:d5f502b117c7c8479f7f20848a50fec0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Killing,Message:Stopping container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 18:05:37.290577043 +0000 UTC m=+306.457761688,LastTimestamp:2026-03-18 18:05:37.290577043 +0000 UTC m=+306.457761688,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 18:05:43.413412 master-0 kubenswrapper[30278]: I0318 18:05:43.410183 30278 status_manager.go:851] "Failed to get status for pod" podUID="257339d9-4efe-4659-ae45-5c1fee5ebba7" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:43.413412 master-0 kubenswrapper[30278]: I0318 18:05:43.411378 30278 status_manager.go:851] "Failed to get status for pod" podUID="1c86ad24-b858-4dfa-802b-f4799093ffc0" pod="openshift-console/downloads-66b8ffb895-5ftpz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-66b8ffb895-5ftpz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:43.413412 master-0 kubenswrapper[30278]: I0318 18:05:43.412197 30278 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:43.422556 master-0 kubenswrapper[30278]: I0318 18:05:43.421793 30278 status_manager.go:851] "Failed to get status for pod" podUID="1c86ad24-b858-4dfa-802b-f4799093ffc0" pod="openshift-console/downloads-66b8ffb895-5ftpz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-66b8ffb895-5ftpz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:43.423654 master-0 kubenswrapper[30278]: I0318 18:05:43.422921 30278 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:43.423986 master-0 kubenswrapper[30278]: I0318 18:05:43.423935 30278 status_manager.go:851] "Failed to get status for pod" podUID="efc76217af9e7119e39d2455d00c223f" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:43.424875 master-0 kubenswrapper[30278]: I0318 18:05:43.424826 30278 status_manager.go:851] "Failed to get status for pod" podUID="257339d9-4efe-4659-ae45-5c1fee5ebba7" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:44.117992 master-0 kubenswrapper[30278]: I0318 18:05:44.117932 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_efc76217af9e7119e39d2455d00c223f/cluster-policy-controller/0.log"
Mar 18 18:05:44.118632 master-0 kubenswrapper[30278]: I0318 18:05:44.118581 30278 generic.go:334] "Generic (PLEG): container finished" podID="efc76217af9e7119e39d2455d00c223f" containerID="346470c7e231870f2c02c668d780fdbc24cd909efb0248742f57a63237119f4a" exitCode=255
Mar 18 18:05:44.118742 master-0 kubenswrapper[30278]: I0318 18:05:44.118635 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"efc76217af9e7119e39d2455d00c223f","Type":"ContainerDied","Data":"346470c7e231870f2c02c668d780fdbc24cd909efb0248742f57a63237119f4a"}
Mar 18 18:05:44.119508 master-0 kubenswrapper[30278]: I0318 18:05:44.119424 30278 scope.go:117] "RemoveContainer" containerID="346470c7e231870f2c02c668d780fdbc24cd909efb0248742f57a63237119f4a"
Mar 18 18:05:44.120642 master-0 kubenswrapper[30278]: I0318 18:05:44.120079 30278 status_manager.go:851] "Failed to get status for pod" podUID="1c86ad24-b858-4dfa-802b-f4799093ffc0" pod="openshift-console/downloads-66b8ffb895-5ftpz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-66b8ffb895-5ftpz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:44.121853 master-0 kubenswrapper[30278]: I0318 18:05:44.121615 30278 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:44.122737 master-0 kubenswrapper[30278]: I0318 18:05:44.122537 30278 status_manager.go:851] "Failed to get status for pod" podUID="efc76217af9e7119e39d2455d00c223f" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:44.124530 master-0 kubenswrapper[30278]: I0318 18:05:44.124458 30278 status_manager.go:851] "Failed to get status for pod" podUID="257339d9-4efe-4659-ae45-5c1fee5ebba7" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:44.140127 master-0 kubenswrapper[30278]: E0318 18:05:44.140061 30278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 18 18:05:44.524966 master-0 kubenswrapper[30278]: E0318 18:05:44.524486 30278 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.189e01ab067f0493 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:d5f502b117c7c8479f7f20848a50fec0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Killing,Message:Stopping container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 18:05:37.290577043 +0000 UTC m=+306.457761688,LastTimestamp:2026-03-18 18:05:37.290577043 +0000 UTC m=+306.457761688,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 18:05:45.141245 master-0 kubenswrapper[30278]: I0318 18:05:45.140694 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_efc76217af9e7119e39d2455d00c223f/cluster-policy-controller/0.log"
Mar 18 18:05:45.142068 master-0 kubenswrapper[30278]: I0318 18:05:45.141993 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"efc76217af9e7119e39d2455d00c223f","Type":"ContainerStarted","Data":"dec20dd282b8a1026853916cbbdbad7fcda801cf86223b20c47a3250f052fed3"}
Mar 18 18:05:45.143735 master-0 kubenswrapper[30278]: I0318 18:05:45.143668 30278 status_manager.go:851] "Failed to get status for pod" podUID="1c86ad24-b858-4dfa-802b-f4799093ffc0" pod="openshift-console/downloads-66b8ffb895-5ftpz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-66b8ffb895-5ftpz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:45.144616 master-0 kubenswrapper[30278]: I0318 18:05:45.144556 30278 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:45.145444 master-0 kubenswrapper[30278]: I0318 18:05:45.145324 30278 status_manager.go:851] "Failed to get status for pod" podUID="efc76217af9e7119e39d2455d00c223f" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:45.145610 master-0 kubenswrapper[30278]: I0318 18:05:45.145534 30278 generic.go:334] "Generic (PLEG): container finished" podID="d4c75bee-d0d2-4261-8f89-8c3375dbd868" containerID="9890e276619ebeef2fdfb1c8e386ea0f74ad0cc5d40e53b9f1ccd6d8646d8339" exitCode=0
Mar 18 18:05:45.145699 master-0 kubenswrapper[30278]: I0318 18:05:45.145600 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" event={"ID":"d4c75bee-d0d2-4261-8f89-8c3375dbd868","Type":"ContainerDied","Data":"9890e276619ebeef2fdfb1c8e386ea0f74ad0cc5d40e53b9f1ccd6d8646d8339"}
Mar 18 18:05:45.145699 master-0 kubenswrapper[30278]: I0318 18:05:45.145662 30278 scope.go:117] "RemoveContainer" containerID="350645ba3bc2c5d9132063ea0cd6e79ddd087baff486b5e73a7bad9c73b8c8c7"
Mar 18 18:05:45.146242 master-0 kubenswrapper[30278]: I0318 18:05:45.146182 30278 scope.go:117] "RemoveContainer" containerID="9890e276619ebeef2fdfb1c8e386ea0f74ad0cc5d40e53b9f1ccd6d8646d8339"
Mar 18 18:05:45.146390 master-0 kubenswrapper[30278]: I0318 18:05:45.146258 30278 status_manager.go:851] "Failed to get status for pod" podUID="257339d9-4efe-4659-ae45-5c1fee5ebba7" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:45.147355 master-0 kubenswrapper[30278]: E0318 18:05:45.146562 30278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=insights-operator pod=insights-operator-68bf6ff9d6-hm777_openshift-insights(d4c75bee-d0d2-4261-8f89-8c3375dbd868)\"" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" podUID="d4c75bee-d0d2-4261-8f89-8c3375dbd868"
Mar 18 18:05:45.155600 master-0 kubenswrapper[30278]: I0318 18:05:45.155460 30278 status_manager.go:851] "Failed to get status for pod" podUID="d4c75bee-d0d2-4261-8f89-8c3375dbd868" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-68bf6ff9d6-hm777\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:45.157953 master-0 kubenswrapper[30278]: I0318 18:05:45.157860 30278 status_manager.go:851] "Failed to get status for pod" podUID="efc76217af9e7119e39d2455d00c223f" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:45.158950 master-0 kubenswrapper[30278]: I0318 18:05:45.158884 30278 status_manager.go:851] "Failed to get status for pod" podUID="257339d9-4efe-4659-ae45-5c1fee5ebba7" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:45.159933 master-0 kubenswrapper[30278]: I0318 18:05:45.159868 30278 status_manager.go:851] "Failed to get status for pod" podUID="1c86ad24-b858-4dfa-802b-f4799093ffc0" pod="openshift-console/downloads-66b8ffb895-5ftpz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-66b8ffb895-5ftpz\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:45.161116 master-0 kubenswrapper[30278]: I0318 18:05:45.161049 30278 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 18:05:47.344245 master-0 kubenswrapper[30278]: E0318 18:05:47.344153 30278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Mar 18 18:05:49.704417 master-0 kubenswrapper[30278]: I0318 18:05:49.704326 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 18:05:49.704417 master-0 kubenswrapper[30278]: I0318 18:05:49.704417 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 18:05:49.705567 master-0 kubenswrapper[30278]: I0318 18:05:49.705505 30278 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 18 18:05:49.705671 master-0 kubenswrapper[30278]: I0318 18:05:49.705577 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 18 18:05:51.054499 master-0 kubenswrapper[30278]: I0318 18:05:51.054427 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:05:51.062743 master-0 kubenswrapper[30278]: I0318 18:05:51.062312 30278 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:51.064173 master-0 kubenswrapper[30278]: I0318 18:05:51.064089 30278 status_manager.go:851] "Failed to get status for pod" podUID="d4c75bee-d0d2-4261-8f89-8c3375dbd868" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-68bf6ff9d6-hm777\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:51.066361 master-0 kubenswrapper[30278]: I0318 18:05:51.066198 30278 status_manager.go:851] "Failed to get status for pod" podUID="efc76217af9e7119e39d2455d00c223f" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:51.067870 master-0 kubenswrapper[30278]: I0318 18:05:51.067481 30278 status_manager.go:851] "Failed to get status for pod" podUID="257339d9-4efe-4659-ae45-5c1fee5ebba7" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:51.068726 master-0 kubenswrapper[30278]: I0318 18:05:51.068632 30278 status_manager.go:851] "Failed to get status for pod" 
podUID="1c86ad24-b858-4dfa-802b-f4799093ffc0" pod="openshift-console/downloads-66b8ffb895-5ftpz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-66b8ffb895-5ftpz\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:51.070043 master-0 kubenswrapper[30278]: I0318 18:05:51.069959 30278 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:51.071168 master-0 kubenswrapper[30278]: I0318 18:05:51.070810 30278 status_manager.go:851] "Failed to get status for pod" podUID="d4c75bee-d0d2-4261-8f89-8c3375dbd868" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-68bf6ff9d6-hm777\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:51.071749 master-0 kubenswrapper[30278]: I0318 18:05:51.071659 30278 status_manager.go:851] "Failed to get status for pod" podUID="efc76217af9e7119e39d2455d00c223f" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:51.072673 master-0 kubenswrapper[30278]: I0318 18:05:51.072608 30278 status_manager.go:851] "Failed to get status for pod" podUID="257339d9-4efe-4659-ae45-5c1fee5ebba7" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: 
connection refused" Mar 18 18:05:51.073617 master-0 kubenswrapper[30278]: I0318 18:05:51.073539 30278 status_manager.go:851] "Failed to get status for pod" podUID="1c86ad24-b858-4dfa-802b-f4799093ffc0" pod="openshift-console/downloads-66b8ffb895-5ftpz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-66b8ffb895-5ftpz\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:51.097341 master-0 kubenswrapper[30278]: I0318 18:05:51.097229 30278 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e" Mar 18 18:05:51.097341 master-0 kubenswrapper[30278]: I0318 18:05:51.097307 30278 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e" Mar 18 18:05:51.098413 master-0 kubenswrapper[30278]: E0318 18:05:51.098357 30278 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:05:51.099255 master-0 kubenswrapper[30278]: I0318 18:05:51.099218 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:05:51.129628 master-0 kubenswrapper[30278]: W0318 18:05:51.129483 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod274c4bebf95a655851b2cf276fe43ef7.slice/crio-20377ab67f8fd1b1d9525e1a0bbfae68f8798211394be3cb23972f0b91d7d6a4 WatchSource:0}: Error finding container 20377ab67f8fd1b1d9525e1a0bbfae68f8798211394be3cb23972f0b91d7d6a4: Status 404 returned error can't find the container with id 20377ab67f8fd1b1d9525e1a0bbfae68f8798211394be3cb23972f0b91d7d6a4 Mar 18 18:05:51.207840 master-0 kubenswrapper[30278]: I0318 18:05:51.207792 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"20377ab67f8fd1b1d9525e1a0bbfae68f8798211394be3cb23972f0b91d7d6a4"} Mar 18 18:05:52.215568 master-0 kubenswrapper[30278]: I0318 18:05:52.215499 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"f7168c9528cbd052b0d4970efd08ecd9dd0d777c3b54d608b38e86db84dee396"} Mar 18 18:05:52.704608 master-0 kubenswrapper[30278]: I0318 18:05:52.704525 30278 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 18 18:05:52.705014 master-0 kubenswrapper[30278]: I0318 18:05:52.704967 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="efc76217af9e7119e39d2455d00c223f" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 18:05:53.232384 master-0 kubenswrapper[30278]: I0318 18:05:53.232303 30278 generic.go:334] "Generic (PLEG): container finished" podID="274c4bebf95a655851b2cf276fe43ef7" containerID="f7168c9528cbd052b0d4970efd08ecd9dd0d777c3b54d608b38e86db84dee396" exitCode=0 Mar 18 18:05:53.233088 master-0 kubenswrapper[30278]: I0318 18:05:53.232387 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerDied","Data":"f7168c9528cbd052b0d4970efd08ecd9dd0d777c3b54d608b38e86db84dee396"} Mar 18 18:05:53.233088 master-0 kubenswrapper[30278]: I0318 18:05:53.232792 30278 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e" Mar 18 18:05:53.233088 master-0 kubenswrapper[30278]: I0318 18:05:53.232841 30278 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e" Mar 18 18:05:53.234179 master-0 kubenswrapper[30278]: E0318 18:05:53.234115 30278 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:05:53.234179 master-0 kubenswrapper[30278]: I0318 18:05:53.234142 30278 status_manager.go:851] "Failed to get status for pod" podUID="1c86ad24-b858-4dfa-802b-f4799093ffc0" pod="openshift-console/downloads-66b8ffb895-5ftpz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-66b8ffb895-5ftpz\": 
dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:53.235450 master-0 kubenswrapper[30278]: I0318 18:05:53.235380 30278 status_manager.go:851] "Failed to get status for pod" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:53.236311 master-0 kubenswrapper[30278]: I0318 18:05:53.236233 30278 status_manager.go:851] "Failed to get status for pod" podUID="d4c75bee-d0d2-4261-8f89-8c3375dbd868" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-68bf6ff9d6-hm777\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:53.237063 master-0 kubenswrapper[30278]: I0318 18:05:53.236988 30278 status_manager.go:851] "Failed to get status for pod" podUID="efc76217af9e7119e39d2455d00c223f" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:53.238533 master-0 kubenswrapper[30278]: I0318 18:05:53.238443 30278 status_manager.go:851] "Failed to get status for pod" podUID="257339d9-4efe-4659-ae45-5c1fee5ebba7" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 18:05:53.746411 master-0 kubenswrapper[30278]: E0318 18:05:53.746331 30278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s" Mar 18 18:05:54.244111 master-0 kubenswrapper[30278]: I0318 18:05:54.244049 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"f0bfe096575d58c9f1a005dc07dd0e7bdb858db9eec327fdb140bdba903d7d44"} Mar 18 18:05:55.263639 master-0 kubenswrapper[30278]: I0318 18:05:55.263430 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"d7d0eb933c245d0ad32a7bb1ce76bad3fcdda48fac8a2474ffd404b26d85945a"} Mar 18 18:05:55.263639 master-0 kubenswrapper[30278]: I0318 18:05:55.263534 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"be5fb5374f8df87010919e4e0806a2e8d546a3b819d07519d86eff435e170780"} Mar 18 18:05:55.263639 master-0 kubenswrapper[30278]: I0318 18:05:55.263569 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"72bec28b10d2270ea5320cf051af50296414fc5b95897a2ea42232e83b1a1178"} Mar 18 18:05:55.263639 master-0 kubenswrapper[30278]: I0318 18:05:55.263600 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"034b8b61879dea004f17891f29239df93bf9a440b92d5e70a400056f719daf9f"} Mar 18 18:05:55.264790 master-0 kubenswrapper[30278]: I0318 18:05:55.263764 30278 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:05:55.264790 master-0 kubenswrapper[30278]: I0318 18:05:55.264030 30278 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e" Mar 18 18:05:55.264790 master-0 kubenswrapper[30278]: I0318 18:05:55.264078 30278 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e" Mar 18 18:05:56.099675 master-0 kubenswrapper[30278]: I0318 18:05:56.099580 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:05:56.099675 master-0 kubenswrapper[30278]: I0318 18:05:56.099632 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: I0318 18:05:56.109067 30278 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]log ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]etcd ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/openshift.io-api-request-count-filter ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: 
[+]poststarthook/openshift.io-startkubeinformers ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/priority-and-fairness-config-consumer ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/priority-and-fairness-filter ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/start-apiextensions-informers ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/start-apiextensions-controllers ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/crd-informer-synced ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/start-system-namespaces-controller ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/start-cluster-authentication-info-controller ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/start-legacy-token-tracking-controller ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/start-service-ip-repair-controllers ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/priority-and-fairness-config-producer ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/bootstrap-controller ok Mar 18 18:05:56.109169 master-0 
kubenswrapper[30278]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/start-kube-aggregator-informers ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/apiservice-status-local-available-controller ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/apiservice-status-remote-available-controller ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/apiservice-registration-controller ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/apiservice-wait-for-first-sync ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/apiservice-discovery-controller ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/kube-apiserver-autoregistration ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]autoregister-completion ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/apiservice-openapi-controller ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: [+]poststarthook/apiservice-openapiv3-controller ok Mar 18 18:05:56.109169 master-0 kubenswrapper[30278]: livez check failed Mar 18 18:05:56.111883 master-0 kubenswrapper[30278]: I0318 18:05:56.109226 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="274c4bebf95a655851b2cf276fe43ef7" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 18:05:58.054539 master-0 kubenswrapper[30278]: I0318 18:05:58.054468 30278 scope.go:117] "RemoveContainer" containerID="9890e276619ebeef2fdfb1c8e386ea0f74ad0cc5d40e53b9f1ccd6d8646d8339" Mar 18 18:05:58.305878 master-0 kubenswrapper[30278]: I0318 18:05:58.305708 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" 
event={"ID":"d4c75bee-d0d2-4261-8f89-8c3375dbd868","Type":"ContainerStarted","Data":"0efa75ff5edd5af0aa740f106c84e8ebbfd54436e05457409b15c27a17d6aaed"} Mar 18 18:05:59.326407 master-0 kubenswrapper[30278]: I0318 18:05:59.326339 30278 generic.go:334] "Generic (PLEG): container finished" podID="d4c75bee-d0d2-4261-8f89-8c3375dbd868" containerID="0efa75ff5edd5af0aa740f106c84e8ebbfd54436e05457409b15c27a17d6aaed" exitCode=0 Mar 18 18:05:59.326407 master-0 kubenswrapper[30278]: I0318 18:05:59.326408 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" event={"ID":"d4c75bee-d0d2-4261-8f89-8c3375dbd868","Type":"ContainerDied","Data":"0efa75ff5edd5af0aa740f106c84e8ebbfd54436e05457409b15c27a17d6aaed"} Mar 18 18:05:59.327388 master-0 kubenswrapper[30278]: I0318 18:05:59.326480 30278 scope.go:117] "RemoveContainer" containerID="9890e276619ebeef2fdfb1c8e386ea0f74ad0cc5d40e53b9f1ccd6d8646d8339" Mar 18 18:05:59.327853 master-0 kubenswrapper[30278]: I0318 18:05:59.327821 30278 scope.go:117] "RemoveContainer" containerID="0efa75ff5edd5af0aa740f106c84e8ebbfd54436e05457409b15c27a17d6aaed" Mar 18 18:05:59.328256 master-0 kubenswrapper[30278]: E0318 18:05:59.328218 30278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=insights-operator pod=insights-operator-68bf6ff9d6-hm777_openshift-insights(d4c75bee-d0d2-4261-8f89-8c3375dbd868)\"" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" podUID="d4c75bee-d0d2-4261-8f89-8c3375dbd868" Mar 18 18:05:59.704584 master-0 kubenswrapper[30278]: I0318 18:05:59.704304 30278 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection 
refused" start-of-body= Mar 18 18:05:59.704584 master-0 kubenswrapper[30278]: I0318 18:05:59.704410 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 18:05:59.704584 master-0 kubenswrapper[30278]: I0318 18:05:59.704506 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:05:59.705755 master-0 kubenswrapper[30278]: I0318 18:05:59.705681 30278 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"498a5c57b90053a76dc039b2bff8526c3d09fbb3c0193932a4070bb49e9eec20"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 18 18:05:59.706015 master-0 kubenswrapper[30278]: I0318 18:05:59.705953 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager" containerID="cri-o://498a5c57b90053a76dc039b2bff8526c3d09fbb3c0193932a4070bb49e9eec20" gracePeriod=30 Mar 18 18:06:00.486138 master-0 kubenswrapper[30278]: I0318 18:06:00.486073 30278 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:06:00.535899 master-0 kubenswrapper[30278]: I0318 18:06:00.535835 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:06:00.630167 master-0 
kubenswrapper[30278]: I0318 18:06:00.630091 30278 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="274c4bebf95a655851b2cf276fe43ef7" podUID="fac69f37-7426-4a72-8cac-6f2960ccc48a" Mar 18 18:06:01.346366 master-0 kubenswrapper[30278]: I0318 18:06:01.346325 30278 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e" Mar 18 18:06:01.346366 master-0 kubenswrapper[30278]: I0318 18:06:01.346359 30278 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e" Mar 18 18:06:03.597050 master-0 kubenswrapper[30278]: I0318 18:06:03.596951 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:06:03.641222 master-0 kubenswrapper[30278]: I0318 18:06:03.641118 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:06:04.407800 master-0 kubenswrapper[30278]: I0318 18:06:04.407724 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:06:06.109735 master-0 kubenswrapper[30278]: I0318 18:06:06.109678 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 18:06:06.111458 master-0 kubenswrapper[30278]: I0318 18:06:06.111413 30278 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e" Mar 18 18:06:06.111674 master-0 kubenswrapper[30278]: I0318 18:06:06.111637 30278 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e" Mar 18 18:06:06.116244 
master-0 kubenswrapper[30278]: I0318 18:06:06.116161 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 18:06:06.120249 master-0 kubenswrapper[30278]: I0318 18:06:06.120179 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 18:06:06.120460 master-0 kubenswrapper[30278]: I0318 18:06:06.120245 30278 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="274c4bebf95a655851b2cf276fe43ef7" podUID="fac69f37-7426-4a72-8cac-6f2960ccc48a"
Mar 18 18:06:06.386045 master-0 kubenswrapper[30278]: I0318 18:06:06.385880 30278 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e"
Mar 18 18:06:06.386045 master-0 kubenswrapper[30278]: I0318 18:06:06.385923 30278 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e"
Mar 18 18:06:07.398168 master-0 kubenswrapper[30278]: I0318 18:06:07.398088 30278 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e"
Mar 18 18:06:07.398168 master-0 kubenswrapper[30278]: I0318 18:06:07.398155 30278 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="3c7f1fb9-16fe-455d-888f-59d671c6285e"
Mar 18 18:06:09.712399 master-0 kubenswrapper[30278]: I0318 18:06:09.712318 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 18:06:10.615021 master-0 kubenswrapper[30278]: I0318 18:06:10.614934 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 18 18:06:10.878963 master-0 kubenswrapper[30278]: I0318 18:06:10.878794 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 18 18:06:10.927836 master-0 kubenswrapper[30278]: I0318 18:06:10.927764 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 18 18:06:11.088977 master-0 kubenswrapper[30278]: I0318 18:06:11.088907 30278 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="274c4bebf95a655851b2cf276fe43ef7" podUID="fac69f37-7426-4a72-8cac-6f2960ccc48a"
Mar 18 18:06:11.145320 master-0 kubenswrapper[30278]: I0318 18:06:11.145109 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-4sbm2"
Mar 18 18:06:11.161692 master-0 kubenswrapper[30278]: I0318 18:06:11.161634 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 18 18:06:11.228382 master-0 kubenswrapper[30278]: I0318 18:06:11.228266 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 18 18:06:11.286869 master-0 kubenswrapper[30278]: I0318 18:06:11.286796 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 18 18:06:11.329757 master-0 kubenswrapper[30278]: I0318 18:06:11.329691 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 18 18:06:11.562889 master-0 kubenswrapper[30278]: I0318 18:06:11.558255 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 18:06:11.579141 master-0 kubenswrapper[30278]: I0318 18:06:11.579050 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 18 18:06:11.827666 master-0 kubenswrapper[30278]: I0318 18:06:11.827484 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 18 18:06:11.861856 master-0 kubenswrapper[30278]: I0318 18:06:11.860222 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 18 18:06:11.946541 master-0 kubenswrapper[30278]: I0318 18:06:11.946477 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 18 18:06:11.950159 master-0 kubenswrapper[30278]: I0318 18:06:11.950109 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 18 18:06:11.954078 master-0 kubenswrapper[30278]: I0318 18:06:11.954029 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 18 18:06:12.049560 master-0 kubenswrapper[30278]: I0318 18:06:12.049479 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 18 18:06:12.081512 master-0 kubenswrapper[30278]: I0318 18:06:12.081375 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 18 18:06:12.137362 master-0 kubenswrapper[30278]: I0318 18:06:12.137307 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 18 18:06:12.223156 master-0 kubenswrapper[30278]: I0318 18:06:12.223084 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 18 18:06:12.236437 master-0 kubenswrapper[30278]: I0318 18:06:12.236338 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 18 18:06:12.274987 master-0 kubenswrapper[30278]: I0318 18:06:12.274882 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 18 18:06:12.380602 master-0 kubenswrapper[30278]: I0318 18:06:12.380396 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 18 18:06:12.495467 master-0 kubenswrapper[30278]: I0318 18:06:12.495406 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 18 18:06:12.512767 master-0 kubenswrapper[30278]: I0318 18:06:12.512640 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Mar 18 18:06:12.639401 master-0 kubenswrapper[30278]: I0318 18:06:12.639165 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 18 18:06:12.798579 master-0 kubenswrapper[30278]: I0318 18:06:12.797482 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 18:06:12.798579 master-0 kubenswrapper[30278]: I0318 18:06:12.798062 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config"
Mar 18 18:06:12.873819 master-0 kubenswrapper[30278]: I0318 18:06:12.873757 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 18 18:06:12.904116 master-0 kubenswrapper[30278]: I0318 18:06:12.903894 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 18 18:06:12.953334 master-0 kubenswrapper[30278]: I0318 18:06:12.953216 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 18 18:06:13.067037 master-0 kubenswrapper[30278]: I0318 18:06:13.066975 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 18 18:06:13.068584 master-0 kubenswrapper[30278]: I0318 18:06:13.068548 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-kcjlz"
Mar 18 18:06:13.079741 master-0 kubenswrapper[30278]: I0318 18:06:13.079685 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 18 18:06:13.295960 master-0 kubenswrapper[30278]: I0318 18:06:13.295821 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 18 18:06:13.379584 master-0 kubenswrapper[30278]: I0318 18:06:13.379525 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 18 18:06:13.394316 master-0 kubenswrapper[30278]: I0318 18:06:13.394255 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 18 18:06:13.403135 master-0 kubenswrapper[30278]: I0318 18:06:13.403080 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 18 18:06:13.404895 master-0 kubenswrapper[30278]: I0318 18:06:13.404690 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 18 18:06:13.428882 master-0 kubenswrapper[30278]: I0318 18:06:13.428810 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 18 18:06:13.545536 master-0 kubenswrapper[30278]: I0318 18:06:13.545334 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 18 18:06:13.688092 master-0 kubenswrapper[30278]: I0318 18:06:13.687986 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 18 18:06:13.711253 master-0 kubenswrapper[30278]: I0318 18:06:13.711166 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 18 18:06:13.714956 master-0 kubenswrapper[30278]: I0318 18:06:13.714856 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-zxhl4"
Mar 18 18:06:13.725757 master-0 kubenswrapper[30278]: I0318 18:06:13.725682 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 18 18:06:13.762765 master-0 kubenswrapper[30278]: I0318 18:06:13.762686 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 18 18:06:13.808557 master-0 kubenswrapper[30278]: I0318 18:06:13.808418 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Mar 18 18:06:13.814881 master-0 kubenswrapper[30278]: I0318 18:06:13.814662 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 18 18:06:13.845954 master-0 kubenswrapper[30278]: I0318 18:06:13.845896 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 18 18:06:13.861929 master-0 kubenswrapper[30278]: I0318 18:06:13.861868 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Mar 18 18:06:13.906624 master-0 kubenswrapper[30278]: I0318 18:06:13.906556 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-npx6j"
Mar 18 18:06:13.929669 master-0 kubenswrapper[30278]: I0318 18:06:13.929611 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 18 18:06:13.964015 master-0 kubenswrapper[30278]: I0318 18:06:13.963882 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Mar 18 18:06:14.002714 master-0 kubenswrapper[30278]: I0318 18:06:14.002672 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 18 18:06:14.045181 master-0 kubenswrapper[30278]: I0318 18:06:14.045120 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-6clkh"
Mar 18 18:06:14.054152 master-0 kubenswrapper[30278]: I0318 18:06:14.054119 30278 scope.go:117] "RemoveContainer" containerID="0efa75ff5edd5af0aa740f106c84e8ebbfd54436e05457409b15c27a17d6aaed"
Mar 18 18:06:14.054554 master-0 kubenswrapper[30278]: E0318 18:06:14.054529 30278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=insights-operator pod=insights-operator-68bf6ff9d6-hm777_openshift-insights(d4c75bee-d0d2-4261-8f89-8c3375dbd868)\"" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" podUID="d4c75bee-d0d2-4261-8f89-8c3375dbd868"
Mar 18 18:06:14.116062 master-0 kubenswrapper[30278]: I0318 18:06:14.116008 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 18 18:06:14.329974 master-0 kubenswrapper[30278]: I0318 18:06:14.329881 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 18 18:06:14.338754 master-0 kubenswrapper[30278]: I0318 18:06:14.338706 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 18:06:14.368239 master-0 kubenswrapper[30278]: I0318 18:06:14.368169 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 18 18:06:14.380968 master-0 kubenswrapper[30278]: I0318 18:06:14.380854 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 18 18:06:14.398732 master-0 kubenswrapper[30278]: I0318 18:06:14.398655 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 18 18:06:14.426309 master-0 kubenswrapper[30278]: I0318 18:06:14.426217 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 18 18:06:14.520825 master-0 kubenswrapper[30278]: I0318 18:06:14.520786 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 18 18:06:14.545421 master-0 kubenswrapper[30278]: I0318 18:06:14.545344 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 18 18:06:14.549575 master-0 kubenswrapper[30278]: I0318 18:06:14.549470 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 18 18:06:14.562108 master-0 kubenswrapper[30278]: I0318 18:06:14.562040 30278 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 18 18:06:14.596755 master-0 kubenswrapper[30278]: I0318 18:06:14.596523 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Mar 18 18:06:14.640128 master-0 kubenswrapper[30278]: I0318 18:06:14.640054 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 18 18:06:14.695933 master-0 kubenswrapper[30278]: I0318 18:06:14.695859 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Mar 18 18:06:14.802320 master-0 kubenswrapper[30278]: I0318 18:06:14.801581 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 18 18:06:14.810329 master-0 kubenswrapper[30278]: I0318 18:06:14.808466 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 18 18:06:14.836646 master-0 kubenswrapper[30278]: I0318 18:06:14.836572 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 18 18:06:14.902264 master-0 kubenswrapper[30278]: I0318 18:06:14.902147 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 18 18:06:14.919430 master-0 kubenswrapper[30278]: I0318 18:06:14.919388 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 18 18:06:14.942866 master-0 kubenswrapper[30278]: I0318 18:06:14.942806 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 18 18:06:14.995831 master-0 kubenswrapper[30278]: I0318 18:06:14.995736 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 18 18:06:15.106267 master-0 kubenswrapper[30278]: I0318 18:06:15.106213 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 18 18:06:15.110634 master-0 kubenswrapper[30278]: I0318 18:06:15.110577 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 18 18:06:15.188609 master-0 kubenswrapper[30278]: I0318 18:06:15.188449 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 18 18:06:15.192559 master-0 kubenswrapper[30278]: I0318 18:06:15.192530 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 18 18:06:15.265453 master-0 kubenswrapper[30278]: I0318 18:06:15.265372 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 18 18:06:15.278193 master-0 kubenswrapper[30278]: I0318 18:06:15.278094 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 18 18:06:15.279344 master-0 kubenswrapper[30278]: I0318 18:06:15.279314 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-rgwwd"
Mar 18 18:06:15.306867 master-0 kubenswrapper[30278]: I0318 18:06:15.306772 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Mar 18 18:06:15.339505 master-0 kubenswrapper[30278]: I0318 18:06:15.339417 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 18 18:06:15.350516 master-0 kubenswrapper[30278]: I0318 18:06:15.350431 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Mar 18 18:06:15.362077 master-0 kubenswrapper[30278]: I0318 18:06:15.362003 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 18 18:06:15.369701 master-0 kubenswrapper[30278]: I0318 18:06:15.369627 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 18 18:06:15.379838 master-0 kubenswrapper[30278]: I0318 18:06:15.379757 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-4fc8r"
Mar 18 18:06:15.430329 master-0 kubenswrapper[30278]: I0318 18:06:15.430201 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 18:06:15.455249 master-0 kubenswrapper[30278]: I0318 18:06:15.455047 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 18 18:06:15.507475 master-0 kubenswrapper[30278]: I0318 18:06:15.507420 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 18 18:06:15.616456 master-0 kubenswrapper[30278]: I0318 18:06:15.616391 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 18 18:06:15.731542 master-0 kubenswrapper[30278]: I0318 18:06:15.729819 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 18 18:06:15.746496 master-0 kubenswrapper[30278]: I0318 18:06:15.746421 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 18 18:06:15.765669 master-0 kubenswrapper[30278]: I0318 18:06:15.765599 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 18 18:06:15.799192 master-0 kubenswrapper[30278]: I0318 18:06:15.799096 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 18 18:06:15.817769 master-0 kubenswrapper[30278]: I0318 18:06:15.817700 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 18 18:06:15.869377 master-0 kubenswrapper[30278]: I0318 18:06:15.869265 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 18 18:06:15.873563 master-0 kubenswrapper[30278]: I0318 18:06:15.873511 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 18:06:15.918927 master-0 kubenswrapper[30278]: I0318 18:06:15.918855 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 18 18:06:16.028565 master-0 kubenswrapper[30278]: I0318 18:06:16.028387 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 18:06:16.029960 master-0 kubenswrapper[30278]: I0318 18:06:16.029799 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 18 18:06:16.037561 master-0 kubenswrapper[30278]: I0318 18:06:16.037503 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Mar 18 18:06:16.039227 master-0 kubenswrapper[30278]: I0318 18:06:16.039174 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 18 18:06:16.048201 master-0 kubenswrapper[30278]: I0318 18:06:16.048126 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 18 18:06:16.074909 master-0 kubenswrapper[30278]: I0318 18:06:16.074351 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 18 18:06:16.119941 master-0 kubenswrapper[30278]: I0318 18:06:16.119834 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 18:06:16.250611 master-0 kubenswrapper[30278]: I0318 18:06:16.250433 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 18:06:16.314208 master-0 kubenswrapper[30278]: I0318 18:06:16.314124 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 18:06:16.333743 master-0 kubenswrapper[30278]: I0318 18:06:16.333672 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 18 18:06:16.368952 master-0 kubenswrapper[30278]: I0318 18:06:16.368879 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 18 18:06:16.451866 master-0 kubenswrapper[30278]: I0318 18:06:16.451800 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 18 18:06:16.484114 master-0 kubenswrapper[30278]: I0318 18:06:16.484013 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 18 18:06:16.582347 master-0 kubenswrapper[30278]: I0318 18:06:16.582157 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 18 18:06:16.590864 master-0 kubenswrapper[30278]: I0318 18:06:16.590792 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 18 18:06:16.624151 master-0 kubenswrapper[30278]: I0318 18:06:16.624097 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 18 18:06:16.663671 master-0 kubenswrapper[30278]: I0318 18:06:16.663606 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 18:06:16.691677 master-0 kubenswrapper[30278]: I0318 18:06:16.691600 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 18 18:06:16.698208 master-0 kubenswrapper[30278]: I0318 18:06:16.698160 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-6fg48"
Mar 18 18:06:16.806855 master-0 kubenswrapper[30278]: I0318 18:06:16.806775 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Mar 18 18:06:16.829315 master-0 kubenswrapper[30278]: I0318 18:06:16.829216 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 18 18:06:16.860164 master-0 kubenswrapper[30278]: I0318 18:06:16.860051 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 18 18:06:16.860164 master-0 kubenswrapper[30278]: I0318 18:06:16.860143 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Mar 18 18:06:16.980007 master-0 kubenswrapper[30278]: I0318 18:06:16.979932 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 18 18:06:17.085730 master-0 kubenswrapper[30278]: I0318 18:06:17.085655 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 18 18:06:17.093529 master-0 kubenswrapper[30278]: I0318 18:06:17.093479 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 18 18:06:17.104873 master-0 kubenswrapper[30278]: I0318 18:06:17.104823 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Mar 18 18:06:17.241532 master-0 kubenswrapper[30278]: I0318 18:06:17.241359 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 18 18:06:17.379389 master-0 kubenswrapper[30278]: I0318 18:06:17.379300 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 18 18:06:17.408461 master-0 kubenswrapper[30278]: I0318 18:06:17.408379 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 18 18:06:17.498877 master-0 kubenswrapper[30278]: I0318 18:06:17.498659 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 18 18:06:17.533105 master-0 kubenswrapper[30278]: I0318 18:06:17.533036 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-ksrlj"
Mar 18 18:06:17.550184 master-0 kubenswrapper[30278]: I0318 18:06:17.550119 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 18 18:06:17.553152 master-0 kubenswrapper[30278]: I0318 18:06:17.553091 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-h8kg7"
Mar 18 18:06:17.578376 master-0 kubenswrapper[30278]: I0318 18:06:17.578241 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 18 18:06:17.580658 master-0 kubenswrapper[30278]: I0318 18:06:17.580586 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-2pg6x"
Mar 18 18:06:17.795267 master-0 kubenswrapper[30278]: I0318 18:06:17.795108 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 18 18:06:17.802149 master-0 kubenswrapper[30278]: I0318 18:06:17.802107 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-wh6dt"
Mar 18 18:06:17.830586 master-0 kubenswrapper[30278]: I0318 18:06:17.830467 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Mar 18 18:06:17.860155 master-0 kubenswrapper[30278]: I0318 18:06:17.860044 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 18 18:06:17.957822 master-0 kubenswrapper[30278]: I0318 18:06:17.957716 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 18 18:06:17.964344 master-0 kubenswrapper[30278]: I0318 18:06:17.964252 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 18:06:17.965480 master-0 kubenswrapper[30278]: I0318 18:06:17.965401 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-pm4sf"
Mar 18 18:06:18.241084 master-0 kubenswrapper[30278]: I0318 18:06:18.240990 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 18 18:06:18.285626 master-0 kubenswrapper[30278]: I0318 18:06:18.285537 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 18 18:06:18.319807 master-0 kubenswrapper[30278]: I0318 18:06:18.319681 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 18 18:06:18.324997 master-0 kubenswrapper[30278]: I0318 18:06:18.324928 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 18 18:06:18.336794 master-0 kubenswrapper[30278]: I0318 18:06:18.336700 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 18 18:06:18.367937 master-0 kubenswrapper[30278]: I0318 18:06:18.367857 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Mar 18 18:06:18.395900 master-0 kubenswrapper[30278]: I0318 18:06:18.395828 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 18 18:06:18.455323 master-0 kubenswrapper[30278]: I0318 18:06:18.455224 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 18 18:06:18.505557 master-0 kubenswrapper[30278]: I0318 18:06:18.505392 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 18 18:06:18.641584 master-0 kubenswrapper[30278]: I0318 18:06:18.640570 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Mar 18 18:06:18.663370 master-0 kubenswrapper[30278]: I0318 18:06:18.663297 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 18 18:06:18.684260 master-0 kubenswrapper[30278]: I0318 18:06:18.684170 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 18 18:06:18.705590 master-0 kubenswrapper[30278]: I0318 18:06:18.705507 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 18 18:06:18.753850 master-0 kubenswrapper[30278]: I0318 18:06:18.753776 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 18 18:06:18.779006 master-0 kubenswrapper[30278]: I0318 18:06:18.778840 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 18 18:06:18.982058 master-0 kubenswrapper[30278]: I0318 18:06:18.981993 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-wftwz"
Mar 18 18:06:18.987082 master-0 kubenswrapper[30278]: I0318 18:06:18.986919 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 18 18:06:19.003700 master-0 kubenswrapper[30278]: I0318 18:06:19.003394 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 18 18:06:19.010630 master-0 kubenswrapper[30278]: I0318 18:06:19.009591 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-82cs2"
Mar 18 18:06:19.098939 master-0 kubenswrapper[30278]: I0318 18:06:19.098879 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 18 18:06:19.114016 master-0 kubenswrapper[30278]: I0318 18:06:19.113973 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 18 18:06:19.159163 master-0 kubenswrapper[30278]: I0318 18:06:19.159058 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 18 18:06:19.273376 master-0 kubenswrapper[30278]: I0318 18:06:19.273310 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 18 18:06:19.320995 master-0 kubenswrapper[30278]: I0318 18:06:19.320930 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 18 18:06:19.354096 master-0 kubenswrapper[30278]: I0318 18:06:19.354006 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 18 18:06:19.375889 master-0 kubenswrapper[30278]: I0318 18:06:19.375823 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 18 18:06:19.409451 master-0 kubenswrapper[30278]: I0318 18:06:19.409355 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-kzdnw"
Mar 18 18:06:19.459376 master-0 kubenswrapper[30278]: I0318 18:06:19.458594 30278 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 18 18:06:19.489925 master-0 kubenswrapper[30278]: I0318 18:06:19.489855 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 18 18:06:19.565817 master-0 kubenswrapper[30278]: I0318 18:06:19.565766 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 18:06:19.610401 master-0 kubenswrapper[30278]: I0318 18:06:19.610207 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 18 18:06:19.651538 master-0 kubenswrapper[30278]: I0318 18:06:19.651479 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-66rqjfmn9qiqc"
Mar 18 18:06:19.662079 master-0 kubenswrapper[30278]: I0318 18:06:19.662037 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-2oo4hd4u5lrf1"
Mar 18 18:06:19.779033 master-0 kubenswrapper[30278]: I0318 18:06:19.778941 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 18 18:06:19.783200 master-0 kubenswrapper[30278]: I0318 18:06:19.783141 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 18 18:06:19.805140 master-0 kubenswrapper[30278]: I0318 18:06:19.805074 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 18 18:06:19.889796 master-0 kubenswrapper[30278]: I0318 18:06:19.889655 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 18 18:06:19.931599 master-0 kubenswrapper[30278]: I0318 18:06:19.931523 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 18 18:06:19.936950 master-0 kubenswrapper[30278]: I0318 18:06:19.936913 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 18 18:06:19.953540 master-0 kubenswrapper[30278]: I0318 18:06:19.953479 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 18 18:06:19.974488
master-0 kubenswrapper[30278]: I0318 18:06:19.974442 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 18:06:20.019521 master-0 kubenswrapper[30278]: I0318 18:06:20.019429 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-gxxlp" Mar 18 18:06:20.025329 master-0 kubenswrapper[30278]: I0318 18:06:20.025240 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-cqcns" Mar 18 18:06:20.042705 master-0 kubenswrapper[30278]: I0318 18:06:20.042605 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 18:06:20.122786 master-0 kubenswrapper[30278]: I0318 18:06:20.122693 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 18 18:06:20.175714 master-0 kubenswrapper[30278]: I0318 18:06:20.175558 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 18:06:20.186829 master-0 kubenswrapper[30278]: I0318 18:06:20.186747 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 18 18:06:20.197866 master-0 kubenswrapper[30278]: I0318 18:06:20.197810 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 18:06:20.313688 master-0 kubenswrapper[30278]: I0318 18:06:20.313590 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 18:06:20.393890 master-0 kubenswrapper[30278]: I0318 18:06:20.393789 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 
18:06:20.414103 master-0 kubenswrapper[30278]: I0318 18:06:20.414027 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-ncdpm" Mar 18 18:06:20.458100 master-0 kubenswrapper[30278]: I0318 18:06:20.457614 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 18 18:06:20.533364 master-0 kubenswrapper[30278]: I0318 18:06:20.533258 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 18 18:06:20.640643 master-0 kubenswrapper[30278]: I0318 18:06:20.640589 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 18:06:20.641146 master-0 kubenswrapper[30278]: I0318 18:06:20.640824 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 18:06:20.655820 master-0 kubenswrapper[30278]: I0318 18:06:20.655070 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 18:06:20.672307 master-0 kubenswrapper[30278]: I0318 18:06:20.669168 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 18:06:20.689134 master-0 kubenswrapper[30278]: I0318 18:06:20.689080 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 18 18:06:20.759533 master-0 kubenswrapper[30278]: I0318 18:06:20.759372 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 18:06:20.784634 master-0 kubenswrapper[30278]: I0318 18:06:20.784554 30278 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 18 18:06:20.791721 master-0 kubenswrapper[30278]: I0318 18:06:20.791671 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 18:06:21.037835 master-0 kubenswrapper[30278]: I0318 18:06:21.037652 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 18:06:21.044068 master-0 kubenswrapper[30278]: I0318 18:06:21.044016 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 18:06:21.080834 master-0 kubenswrapper[30278]: I0318 18:06:21.080770 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-bnhc4" Mar 18 18:06:21.191482 master-0 kubenswrapper[30278]: I0318 18:06:21.191408 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 18 18:06:21.196537 master-0 kubenswrapper[30278]: I0318 18:06:21.196455 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 18 18:06:21.200606 master-0 kubenswrapper[30278]: I0318 18:06:21.200576 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 18:06:21.246688 master-0 kubenswrapper[30278]: I0318 18:06:21.246608 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 18:06:21.322096 master-0 kubenswrapper[30278]: I0318 18:06:21.322031 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 18 18:06:21.323586 master-0 kubenswrapper[30278]: I0318 18:06:21.322379 30278 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-insights"/"trusted-ca-bundle" Mar 18 18:06:21.344439 master-0 kubenswrapper[30278]: I0318 18:06:21.344391 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 18:06:21.384138 master-0 kubenswrapper[30278]: I0318 18:06:21.384069 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 18 18:06:21.405761 master-0 kubenswrapper[30278]: I0318 18:06:21.405702 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 18:06:21.417022 master-0 kubenswrapper[30278]: I0318 18:06:21.416963 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 18:06:21.472053 master-0 kubenswrapper[30278]: I0318 18:06:21.471977 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 18:06:21.483725 master-0 kubenswrapper[30278]: I0318 18:06:21.483620 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-tns2v" Mar 18 18:06:21.574044 master-0 kubenswrapper[30278]: I0318 18:06:21.573147 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-clcfd" Mar 18 18:06:21.583628 master-0 kubenswrapper[30278]: I0318 18:06:21.581260 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 18:06:21.650867 master-0 kubenswrapper[30278]: I0318 18:06:21.650777 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 18:06:21.668046 master-0 kubenswrapper[30278]: I0318 18:06:21.667975 30278 reflector.go:368] Caches populated for *v1.Pod from 
pkg/kubelet/config/apiserver.go:66 Mar 18 18:06:21.668505 master-0 kubenswrapper[30278]: I0318 18:06:21.668409 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=44.668384947 podStartE2EDuration="44.668384947s" podCreationTimestamp="2026-03-18 18:05:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:06:00.533715602 +0000 UTC m=+329.700900197" watchObservedRunningTime="2026-03-18 18:06:21.668384947 +0000 UTC m=+350.835569552" Mar 18 18:06:21.672039 master-0 kubenswrapper[30278]: I0318 18:06:21.671971 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=52.671955972 podStartE2EDuration="52.671955972s" podCreationTimestamp="2026-03-18 18:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:06:00.594803448 +0000 UTC m=+329.761988053" watchObservedRunningTime="2026-03-18 18:06:21.671955972 +0000 UTC m=+350.839140567" Mar 18 18:06:21.675115 master-0 kubenswrapper[30278]: I0318 18:06:21.675044 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 18:06:21.675115 master-0 kubenswrapper[30278]: I0318 18:06:21.675118 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 18:06:21.697844 master-0 kubenswrapper[30278]: I0318 18:06:21.697750 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=21.697733322 podStartE2EDuration="21.697733322s" podCreationTimestamp="2026-03-18 18:06:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:06:21.696195072 +0000 UTC m=+350.863379677" watchObservedRunningTime="2026-03-18 18:06:21.697733322 +0000 UTC m=+350.864917917" Mar 18 18:06:21.758720 master-0 kubenswrapper[30278]: I0318 18:06:21.758636 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-kdvf8" Mar 18 18:06:21.769812 master-0 kubenswrapper[30278]: I0318 18:06:21.769745 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 18:06:21.817627 master-0 kubenswrapper[30278]: I0318 18:06:21.817548 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-5g5z8" Mar 18 18:06:21.883485 master-0 kubenswrapper[30278]: I0318 18:06:21.883339 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 18:06:21.885985 master-0 kubenswrapper[30278]: I0318 18:06:21.885950 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 18:06:21.898215 master-0 kubenswrapper[30278]: I0318 18:06:21.898181 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 18:06:21.921414 master-0 kubenswrapper[30278]: I0318 18:06:21.921381 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 18:06:21.927340 master-0 kubenswrapper[30278]: I0318 18:06:21.927246 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 18 18:06:21.930153 master-0 kubenswrapper[30278]: I0318 18:06:21.930107 30278 kubelet.go:2431] "SyncLoop REMOVE" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 18:06:21.930821 master-0 kubenswrapper[30278]: I0318 18:06:21.930737 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" containerID="cri-o://f9cbfe10dad1ec0cd2a3c9c1167829f36cbd4f5345a6cf76d79054d13a9a468a" gracePeriod=5 Mar 18 18:06:22.026860 master-0 kubenswrapper[30278]: I0318 18:06:22.026784 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 18:06:22.056593 master-0 kubenswrapper[30278]: I0318 18:06:22.056552 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 18:06:22.259785 master-0 kubenswrapper[30278]: I0318 18:06:22.259671 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-btlbk" Mar 18 18:06:22.274372 master-0 kubenswrapper[30278]: I0318 18:06:22.274325 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 18 18:06:22.322363 master-0 kubenswrapper[30278]: I0318 18:06:22.322267 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 18 18:06:22.338077 master-0 kubenswrapper[30278]: I0318 18:06:22.338048 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-rqcfx" Mar 18 18:06:22.394007 master-0 kubenswrapper[30278]: I0318 18:06:22.393954 30278 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 18:06:22.416635 master-0 kubenswrapper[30278]: I0318 18:06:22.416589 30278 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 18:06:22.422500 master-0 kubenswrapper[30278]: I0318 18:06:22.422464 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 18:06:22.504837 master-0 kubenswrapper[30278]: I0318 18:06:22.504778 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 18:06:22.524486 master-0 kubenswrapper[30278]: I0318 18:06:22.524395 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 18:06:22.653787 master-0 kubenswrapper[30278]: I0318 18:06:22.653719 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 18 18:06:22.699472 master-0 kubenswrapper[30278]: I0318 18:06:22.699418 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-rl6dv" Mar 18 18:06:22.819347 master-0 kubenswrapper[30278]: I0318 18:06:22.819253 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 18:06:22.935698 master-0 kubenswrapper[30278]: I0318 18:06:22.935624 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 18:06:23.059568 master-0 kubenswrapper[30278]: I0318 18:06:23.059498 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 18:06:23.060615 master-0 kubenswrapper[30278]: I0318 18:06:23.060563 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 18:06:23.117757 master-0 kubenswrapper[30278]: I0318 18:06:23.117569 30278 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 18:06:23.143770 master-0 kubenswrapper[30278]: I0318 18:06:23.143698 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 18:06:23.153415 master-0 kubenswrapper[30278]: I0318 18:06:23.153361 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 18:06:23.223643 master-0 kubenswrapper[30278]: I0318 18:06:23.223569 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 18:06:23.280012 master-0 kubenswrapper[30278]: I0318 18:06:23.279922 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 18 18:06:23.356227 master-0 kubenswrapper[30278]: I0318 18:06:23.356161 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 18:06:23.368465 master-0 kubenswrapper[30278]: I0318 18:06:23.368379 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 18 18:06:23.392350 master-0 kubenswrapper[30278]: I0318 18:06:23.392286 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 18:06:23.403690 master-0 kubenswrapper[30278]: I0318 18:06:23.403614 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 18:06:23.410130 master-0 kubenswrapper[30278]: I0318 18:06:23.410064 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 18:06:23.448539 master-0 kubenswrapper[30278]: I0318 18:06:23.448455 30278 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-2mk4r" Mar 18 18:06:23.499250 master-0 kubenswrapper[30278]: I0318 18:06:23.499172 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-pwxkh" Mar 18 18:06:23.700434 master-0 kubenswrapper[30278]: I0318 18:06:23.700263 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 18:06:23.762878 master-0 kubenswrapper[30278]: I0318 18:06:23.762807 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2dddk" Mar 18 18:06:23.863990 master-0 kubenswrapper[30278]: I0318 18:06:23.863904 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 18:06:23.868023 master-0 kubenswrapper[30278]: I0318 18:06:23.867967 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 18:06:23.969920 master-0 kubenswrapper[30278]: I0318 18:06:23.969751 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-4fdq4" Mar 18 18:06:24.112600 master-0 kubenswrapper[30278]: I0318 18:06:24.112509 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 18:06:24.160861 master-0 kubenswrapper[30278]: I0318 18:06:24.160803 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 18:06:24.189049 master-0 kubenswrapper[30278]: I0318 18:06:24.188969 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 18:06:24.213209 master-0 
kubenswrapper[30278]: I0318 18:06:24.213112 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 18:06:24.254492 master-0 kubenswrapper[30278]: I0318 18:06:24.254349 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 18 18:06:24.271401 master-0 kubenswrapper[30278]: I0318 18:06:24.271337 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 18:06:24.356662 master-0 kubenswrapper[30278]: I0318 18:06:24.356589 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 18 18:06:24.363600 master-0 kubenswrapper[30278]: I0318 18:06:24.363570 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 18:06:24.387179 master-0 kubenswrapper[30278]: I0318 18:06:24.387113 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-22mk8" Mar 18 18:06:24.401324 master-0 kubenswrapper[30278]: I0318 18:06:24.401247 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 18:06:24.405494 master-0 kubenswrapper[30278]: I0318 18:06:24.405450 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 18:06:24.621483 master-0 kubenswrapper[30278]: I0318 18:06:24.621418 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 18:06:24.621841 master-0 kubenswrapper[30278]: I0318 18:06:24.621804 30278 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 18:06:24.767828 master-0 kubenswrapper[30278]: I0318 18:06:24.767752 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 18:06:24.811174 master-0 kubenswrapper[30278]: I0318 18:06:24.811102 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 18:06:24.955056 master-0 kubenswrapper[30278]: I0318 18:06:24.954930 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-9df654797-6rk29"] Mar 18 18:06:24.955390 master-0 kubenswrapper[30278]: E0318 18:06:24.955371 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" Mar 18 18:06:24.955390 master-0 kubenswrapper[30278]: I0318 18:06:24.955391 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" Mar 18 18:06:24.955486 master-0 kubenswrapper[30278]: E0318 18:06:24.955412 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257339d9-4efe-4659-ae45-5c1fee5ebba7" containerName="installer" Mar 18 18:06:24.955486 master-0 kubenswrapper[30278]: I0318 18:06:24.955419 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="257339d9-4efe-4659-ae45-5c1fee5ebba7" containerName="installer" Mar 18 18:06:24.955652 master-0 kubenswrapper[30278]: I0318 18:06:24.955569 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" Mar 18 18:06:24.955652 master-0 kubenswrapper[30278]: I0318 18:06:24.955600 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="257339d9-4efe-4659-ae45-5c1fee5ebba7" containerName="installer" Mar 18 18:06:24.956210 master-0 kubenswrapper[30278]: I0318 18:06:24.956189 30278 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:24.975557 master-0 kubenswrapper[30278]: I0318 18:06:24.975502 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 18 18:06:24.977120 master-0 kubenswrapper[30278]: I0318 18:06:24.977074 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-9df654797-6rk29"] Mar 18 18:06:25.043810 master-0 kubenswrapper[30278]: I0318 18:06:25.043746 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-service-ca\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.043810 master-0 kubenswrapper[30278]: I0318 18:06:25.043806 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-console-config\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.044084 master-0 kubenswrapper[30278]: I0318 18:06:25.043838 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/722cfd9d-3251-4136-8680-742b888588e2-console-oauth-config\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.044084 master-0 kubenswrapper[30278]: I0318 18:06:25.043916 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-trusted-ca-bundle\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.044084 master-0 kubenswrapper[30278]: I0318 18:06:25.043940 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-oauth-serving-cert\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.044084 master-0 kubenswrapper[30278]: I0318 18:06:25.043977 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d55q9\" (UniqueName: \"kubernetes.io/projected/722cfd9d-3251-4136-8680-742b888588e2-kube-api-access-d55q9\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.044084 master-0 kubenswrapper[30278]: I0318 18:06:25.044002 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/722cfd9d-3251-4136-8680-742b888588e2-console-serving-cert\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.055047 master-0 kubenswrapper[30278]: I0318 18:06:25.055002 30278 scope.go:117] "RemoveContainer" containerID="0efa75ff5edd5af0aa740f106c84e8ebbfd54436e05457409b15c27a17d6aaed" Mar 18 18:06:25.086283 master-0 kubenswrapper[30278]: I0318 18:06:25.086205 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 18:06:25.147024 master-0 kubenswrapper[30278]: I0318 18:06:25.146314 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-d55q9\" (UniqueName: \"kubernetes.io/projected/722cfd9d-3251-4136-8680-742b888588e2-kube-api-access-d55q9\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.147024 master-0 kubenswrapper[30278]: I0318 18:06:25.146432 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/722cfd9d-3251-4136-8680-742b888588e2-console-serving-cert\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.147024 master-0 kubenswrapper[30278]: I0318 18:06:25.146530 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-service-ca\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.147024 master-0 kubenswrapper[30278]: I0318 18:06:25.146852 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-console-config\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.148121 master-0 kubenswrapper[30278]: I0318 18:06:25.148081 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-service-ca\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.149241 master-0 kubenswrapper[30278]: I0318 18:06:25.149183 30278 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-console-config\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.149549 master-0 kubenswrapper[30278]: I0318 18:06:25.149498 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/722cfd9d-3251-4136-8680-742b888588e2-console-oauth-config\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.149872 master-0 kubenswrapper[30278]: I0318 18:06:25.149837 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-trusted-ca-bundle\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.149955 master-0 kubenswrapper[30278]: I0318 18:06:25.149931 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-oauth-serving-cert\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.151054 master-0 kubenswrapper[30278]: I0318 18:06:25.151012 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-oauth-serving-cert\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.151459 master-0 kubenswrapper[30278]: I0318 
18:06:25.151408 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/722cfd9d-3251-4136-8680-742b888588e2-console-serving-cert\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.155068 master-0 kubenswrapper[30278]: I0318 18:06:25.154792 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/722cfd9d-3251-4136-8680-742b888588e2-console-oauth-config\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.155178 master-0 kubenswrapper[30278]: I0318 18:06:25.155147 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-trusted-ca-bundle\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.171371 master-0 kubenswrapper[30278]: I0318 18:06:25.171324 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d55q9\" (UniqueName: \"kubernetes.io/projected/722cfd9d-3251-4136-8680-742b888588e2-kube-api-access-d55q9\") pod \"console-9df654797-6rk29\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") " pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.185770 master-0 kubenswrapper[30278]: I0318 18:06:25.185716 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 18 18:06:25.294691 master-0 kubenswrapper[30278]: I0318 18:06:25.294620 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:25.335888 master-0 kubenswrapper[30278]: I0318 18:06:25.335824 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 18:06:25.350992 master-0 kubenswrapper[30278]: I0318 18:06:25.350942 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 18:06:25.411507 master-0 kubenswrapper[30278]: I0318 18:06:25.411447 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 18:06:25.414694 master-0 kubenswrapper[30278]: I0318 18:06:25.414660 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 18 18:06:25.425492 master-0 kubenswrapper[30278]: I0318 18:06:25.425422 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 18:06:25.477241 master-0 kubenswrapper[30278]: I0318 18:06:25.477119 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-bwq44" Mar 18 18:06:25.570130 master-0 kubenswrapper[30278]: I0318 18:06:25.570068 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-r9bww" Mar 18 18:06:25.581878 master-0 kubenswrapper[30278]: I0318 18:06:25.581822 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-hm777" event={"ID":"d4c75bee-d0d2-4261-8f89-8c3375dbd868","Type":"ContainerStarted","Data":"cbb5d1315eeb0cc1ec4a26f707ba4c3192f78dacadc91634f0fc0bebd2103cb2"} Mar 18 18:06:25.596088 master-0 kubenswrapper[30278]: I0318 18:06:25.596039 30278 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 18 18:06:25.606187 master-0 kubenswrapper[30278]: I0318 18:06:25.606132 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 18:06:25.722340 master-0 kubenswrapper[30278]: I0318 18:06:25.720859 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 18:06:25.722340 master-0 kubenswrapper[30278]: I0318 18:06:25.721216 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 18:06:25.726639 master-0 kubenswrapper[30278]: I0318 18:06:25.726568 30278 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 18:06:25.758892 master-0 kubenswrapper[30278]: I0318 18:06:25.758680 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 18:06:25.789300 master-0 kubenswrapper[30278]: I0318 18:06:25.786893 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 18:06:25.807349 master-0 kubenswrapper[30278]: I0318 18:06:25.806374 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 18:06:25.863721 master-0 kubenswrapper[30278]: I0318 18:06:25.862623 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 18:06:26.085702 master-0 kubenswrapper[30278]: I0318 18:06:26.085653 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 18 18:06:26.135119 master-0 kubenswrapper[30278]: I0318 18:06:26.135023 30278 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 18:06:26.265169 master-0 kubenswrapper[30278]: I0318 18:06:26.265114 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 18:06:26.358689 master-0 kubenswrapper[30278]: I0318 18:06:26.358561 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 18:06:26.427428 master-0 kubenswrapper[30278]: I0318 18:06:26.427366 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 18:06:26.455670 master-0 kubenswrapper[30278]: I0318 18:06:26.455636 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 18:06:26.562822 master-0 kubenswrapper[30278]: I0318 18:06:26.562769 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 18:06:26.600944 master-0 kubenswrapper[30278]: I0318 18:06:26.600888 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 18:06:27.009873 master-0 kubenswrapper[30278]: I0318 18:06:27.009741 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 18 18:06:27.170235 master-0 kubenswrapper[30278]: I0318 18:06:27.170150 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-9df654797-6rk29"] Mar 18 18:06:27.170471 master-0 kubenswrapper[30278]: W0318 18:06:27.170367 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod722cfd9d_3251_4136_8680_742b888588e2.slice/crio-bc57e684fd02c35b74d2e8afdc2abf0538ab3fcef694bd06212503d110dd2ff0 WatchSource:0}: Error finding container 
bc57e684fd02c35b74d2e8afdc2abf0538ab3fcef694bd06212503d110dd2ff0: Status 404 returned error can't find the container with id bc57e684fd02c35b74d2e8afdc2abf0538ab3fcef694bd06212503d110dd2ff0 Mar 18 18:06:27.179747 master-0 kubenswrapper[30278]: I0318 18:06:27.179571 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-ticnjnaemlaa" Mar 18 18:06:27.515819 master-0 kubenswrapper[30278]: I0318 18:06:27.515775 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebbfbf2b56df0323ba118d68bfdad8b9/startup-monitor/0.log" Mar 18 18:06:27.516457 master-0 kubenswrapper[30278]: I0318 18:06:27.515879 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:06:27.606418 master-0 kubenswrapper[30278]: I0318 18:06:27.606345 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9df654797-6rk29" event={"ID":"722cfd9d-3251-4136-8680-742b888588e2","Type":"ContainerStarted","Data":"370b7083f5c39256c7ff65a327d53f1ffe1c4db50255322e4c361b2c01073a28"} Mar 18 18:06:27.606418 master-0 kubenswrapper[30278]: I0318 18:06:27.606401 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9df654797-6rk29" event={"ID":"722cfd9d-3251-4136-8680-742b888588e2","Type":"ContainerStarted","Data":"bc57e684fd02c35b74d2e8afdc2abf0538ab3fcef694bd06212503d110dd2ff0"} Mar 18 18:06:27.608626 master-0 kubenswrapper[30278]: I0318 18:06:27.608573 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebbfbf2b56df0323ba118d68bfdad8b9/startup-monitor/0.log" Mar 18 18:06:27.608724 master-0 kubenswrapper[30278]: I0318 18:06:27.608669 30278 generic.go:334] "Generic (PLEG): container finished" podID="ebbfbf2b56df0323ba118d68bfdad8b9" 
containerID="f9cbfe10dad1ec0cd2a3c9c1167829f36cbd4f5345a6cf76d79054d13a9a468a" exitCode=137 Mar 18 18:06:27.608724 master-0 kubenswrapper[30278]: I0318 18:06:27.608705 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 18:06:27.608821 master-0 kubenswrapper[30278]: I0318 18:06:27.608743 30278 scope.go:117] "RemoveContainer" containerID="f9cbfe10dad1ec0cd2a3c9c1167829f36cbd4f5345a6cf76d79054d13a9a468a" Mar 18 18:06:27.642241 master-0 kubenswrapper[30278]: I0318 18:06:27.642195 30278 scope.go:117] "RemoveContainer" containerID="f9cbfe10dad1ec0cd2a3c9c1167829f36cbd4f5345a6cf76d79054d13a9a468a" Mar 18 18:06:27.643409 master-0 kubenswrapper[30278]: E0318 18:06:27.643357 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9cbfe10dad1ec0cd2a3c9c1167829f36cbd4f5345a6cf76d79054d13a9a468a\": container with ID starting with f9cbfe10dad1ec0cd2a3c9c1167829f36cbd4f5345a6cf76d79054d13a9a468a not found: ID does not exist" containerID="f9cbfe10dad1ec0cd2a3c9c1167829f36cbd4f5345a6cf76d79054d13a9a468a" Mar 18 18:06:27.643483 master-0 kubenswrapper[30278]: I0318 18:06:27.643433 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9cbfe10dad1ec0cd2a3c9c1167829f36cbd4f5345a6cf76d79054d13a9a468a"} err="failed to get container status \"f9cbfe10dad1ec0cd2a3c9c1167829f36cbd4f5345a6cf76d79054d13a9a468a\": rpc error: code = NotFound desc = could not find container \"f9cbfe10dad1ec0cd2a3c9c1167829f36cbd4f5345a6cf76d79054d13a9a468a\": container with ID starting with f9cbfe10dad1ec0cd2a3c9c1167829f36cbd4f5345a6cf76d79054d13a9a468a not found: ID does not exist" Mar 18 18:06:27.711408 master-0 kubenswrapper[30278]: I0318 18:06:27.711351 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 18 18:06:27.711408 master-0 kubenswrapper[30278]: I0318 18:06:27.711411 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 18 18:06:27.711626 master-0 kubenswrapper[30278]: I0318 18:06:27.711450 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 18 18:06:27.711626 master-0 kubenswrapper[30278]: I0318 18:06:27.711477 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 18 18:06:27.711626 master-0 kubenswrapper[30278]: I0318 18:06:27.711476 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:06:27.711626 master-0 kubenswrapper[30278]: I0318 18:06:27.711572 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 18 18:06:27.711626 master-0 kubenswrapper[30278]: I0318 18:06:27.711583 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests" (OuterVolumeSpecName: "manifests") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:06:27.711626 master-0 kubenswrapper[30278]: I0318 18:06:27.711617 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log" (OuterVolumeSpecName: "var-log") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:06:27.711884 master-0 kubenswrapper[30278]: I0318 18:06:27.711726 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:06:27.711884 master-0 kubenswrapper[30278]: I0318 18:06:27.711848 30278 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 18:06:27.711884 master-0 kubenswrapper[30278]: I0318 18:06:27.711865 30278 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:06:27.711884 master-0 kubenswrapper[30278]: I0318 18:06:27.711882 30278 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") on node \"master-0\" DevicePath \"\"" Mar 18 18:06:27.712041 master-0 kubenswrapper[30278]: I0318 18:06:27.711896 30278 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") on node \"master-0\" DevicePath \"\"" Mar 18 18:06:27.717332 master-0 kubenswrapper[30278]: I0318 18:06:27.717310 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:06:27.812841 master-0 kubenswrapper[30278]: I0318 18:06:27.812778 30278 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:06:28.052834 master-0 kubenswrapper[30278]: I0318 18:06:28.052784 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 18:06:29.064021 master-0 kubenswrapper[30278]: I0318 18:06:29.063953 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" path="/var/lib/kubelet/pods/ebbfbf2b56df0323ba118d68bfdad8b9/volumes" Mar 18 18:06:29.064637 master-0 kubenswrapper[30278]: I0318 18:06:29.064200 30278 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 18 18:06:29.080673 master-0 kubenswrapper[30278]: I0318 18:06:29.079962 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-9df654797-6rk29" podStartSLOduration=5.079943666 podStartE2EDuration="5.079943666s" podCreationTimestamp="2026-03-18 18:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:06:27.628706058 +0000 UTC m=+356.795890683" watchObservedRunningTime="2026-03-18 18:06:29.079943666 +0000 UTC m=+358.247128271" Mar 18 18:06:29.081846 master-0 kubenswrapper[30278]: I0318 18:06:29.081801 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 18:06:29.081846 master-0 kubenswrapper[30278]: I0318 18:06:29.081836 30278 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
mirrorPodUID="9c1e1105-a9e4-44cf-abf6-2368da275788" Mar 18 18:06:29.089493 master-0 kubenswrapper[30278]: I0318 18:06:29.089449 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 18:06:29.089493 master-0 kubenswrapper[30278]: I0318 18:06:29.089488 30278 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="9c1e1105-a9e4-44cf-abf6-2368da275788" Mar 18 18:06:30.642379 master-0 kubenswrapper[30278]: I0318 18:06:30.642312 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_efc76217af9e7119e39d2455d00c223f/cluster-policy-controller/0.log" Mar 18 18:06:30.643401 master-0 kubenswrapper[30278]: I0318 18:06:30.643370 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_efc76217af9e7119e39d2455d00c223f/kube-controller-manager/0.log" Mar 18 18:06:30.643496 master-0 kubenswrapper[30278]: I0318 18:06:30.643465 30278 generic.go:334] "Generic (PLEG): container finished" podID="efc76217af9e7119e39d2455d00c223f" containerID="498a5c57b90053a76dc039b2bff8526c3d09fbb3c0193932a4070bb49e9eec20" exitCode=137 Mar 18 18:06:30.643556 master-0 kubenswrapper[30278]: I0318 18:06:30.643509 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"efc76217af9e7119e39d2455d00c223f","Type":"ContainerDied","Data":"498a5c57b90053a76dc039b2bff8526c3d09fbb3c0193932a4070bb49e9eec20"} Mar 18 18:06:30.643556 master-0 kubenswrapper[30278]: I0318 18:06:30.643548 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"efc76217af9e7119e39d2455d00c223f","Type":"ContainerStarted","Data":"c974ce9bca98caf206cacb3590d85f8cb970581a77ff4f55db1e8e82efb4ff2c"} Mar 18 18:06:35.295135 master-0 kubenswrapper[30278]: I0318 18:06:35.295051 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:35.295135 master-0 kubenswrapper[30278]: I0318 18:06:35.295116 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:35.302449 master-0 kubenswrapper[30278]: I0318 18:06:35.302374 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:35.696508 master-0 kubenswrapper[30278]: I0318 18:06:35.696431 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-9df654797-6rk29" Mar 18 18:06:39.704751 master-0 kubenswrapper[30278]: I0318 18:06:39.704408 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:06:39.704751 master-0 kubenswrapper[30278]: I0318 18:06:39.704530 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:06:39.709637 master-0 kubenswrapper[30278]: I0318 18:06:39.709579 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:06:39.735501 master-0 kubenswrapper[30278]: I0318 18:06:39.735408 30278 generic.go:334] "Generic (PLEG): container finished" podID="ce5831a6-5a8d-4cda-9299-5d86437bcab2" containerID="b73c8977b21f30cbbb9e502e36e5bebff03e78b4e5aff7d86803b34ab2c6326f" exitCode=0 Mar 18 18:06:39.736566 master-0 kubenswrapper[30278]: I0318 18:06:39.736512 30278 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" event={"ID":"ce5831a6-5a8d-4cda-9299-5d86437bcab2","Type":"ContainerDied","Data":"b73c8977b21f30cbbb9e502e36e5bebff03e78b4e5aff7d86803b34ab2c6326f"} Mar 18 18:06:39.736655 master-0 kubenswrapper[30278]: I0318 18:06:39.736573 30278 scope.go:117] "RemoveContainer" containerID="fe07019623ba4afabfbf6551b7028ec6e274c77f8b3075096e77bb2fa5ab0961" Mar 18 18:06:39.736979 master-0 kubenswrapper[30278]: I0318 18:06:39.736944 30278 scope.go:117] "RemoveContainer" containerID="b73c8977b21f30cbbb9e502e36e5bebff03e78b4e5aff7d86803b34ab2c6326f" Mar 18 18:06:40.749828 master-0 kubenswrapper[30278]: I0318 18:06:40.749763 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" event={"ID":"ce5831a6-5a8d-4cda-9299-5d86437bcab2","Type":"ContainerStarted","Data":"99c72dba5438a7773be6cd18fd9f444fc601602cd6650c4638cc8d16b97cb1dc"} Mar 18 18:06:40.753898 master-0 kubenswrapper[30278]: I0318 18:06:40.750439 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 18:06:40.753898 master-0 kubenswrapper[30278]: I0318 18:06:40.752823 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-l5gm7" Mar 18 18:06:44.475110 master-0 kubenswrapper[30278]: I0318 18:06:44.474992 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 18 18:06:48.460355 master-0 kubenswrapper[30278]: I0318 18:06:48.460144 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 18:06:49.474113 master-0 kubenswrapper[30278]: I0318 18:06:49.474029 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 
18:06:49.709184 master-0 kubenswrapper[30278]: I0318 18:06:49.709108 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:06:52.421296 master-0 kubenswrapper[30278]: I0318 18:06:52.417665 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-cf85db6cf-b9mbd"] Mar 18 18:06:52.421296 master-0 kubenswrapper[30278]: I0318 18:06:52.419359 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.431292 master-0 kubenswrapper[30278]: I0318 18:06:52.430566 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 18 18:06:52.431292 master-0 kubenswrapper[30278]: I0318 18:06:52.430810 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 18 18:06:52.431292 master-0 kubenswrapper[30278]: I0318 18:06:52.430923 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 18 18:06:52.431292 master-0 kubenswrapper[30278]: I0318 18:06:52.431023 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 18 18:06:52.431292 master-0 kubenswrapper[30278]: I0318 18:06:52.431136 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 18 18:06:52.431292 master-0 kubenswrapper[30278]: I0318 18:06:52.431256 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-zmg72" Mar 18 18:06:52.438693 master-0 kubenswrapper[30278]: I0318 18:06:52.436713 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"] Mar 18 18:06:52.441388 master-0 kubenswrapper[30278]: I0318 18:06:52.440813 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 18 18:06:52.457583 master-0 kubenswrapper[30278]: I0318 18:06:52.457530 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkbwt\" (UniqueName: \"kubernetes.io/projected/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-kube-api-access-qkbwt\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.457583 master-0 kubenswrapper[30278]: I0318 18:06:52.457582 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-serving-certs-ca-bundle\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.457812 master-0 kubenswrapper[30278]: I0318 18:06:52.457613 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.457812 master-0 kubenswrapper[30278]: I0318 18:06:52.457656 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-metrics-client-ca\") pod 
\"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.457812 master-0 kubenswrapper[30278]: I0318 18:06:52.457698 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-federate-client-tls\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.459329 master-0 kubenswrapper[30278]: I0318 18:06:52.458728 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-cf85db6cf-b9mbd"] Mar 18 18:06:52.471434 master-0 kubenswrapper[30278]: I0318 18:06:52.457720 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.471682 master-0 kubenswrapper[30278]: I0318 18:06:52.471482 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-secret-telemeter-client\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.471682 master-0 kubenswrapper[30278]: I0318 18:06:52.471510 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.477604 master-0 kubenswrapper[30278]: I0318 18:06:52.477553 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b7657f69f-w666c"] Mar 18 18:06:52.573507 master-0 kubenswrapper[30278]: I0318 18:06:52.573428 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-metrics-client-ca\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.573760 master-0 kubenswrapper[30278]: I0318 18:06:52.573533 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-federate-client-tls\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.573760 master-0 kubenswrapper[30278]: I0318 18:06:52.573572 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.573760 master-0 kubenswrapper[30278]: I0318 18:06:52.573602 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: 
\"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-secret-telemeter-client\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.573760 master-0 kubenswrapper[30278]: I0318 18:06:52.573627 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.573760 master-0 kubenswrapper[30278]: I0318 18:06:52.573676 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkbwt\" (UniqueName: \"kubernetes.io/projected/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-kube-api-access-qkbwt\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.573760 master-0 kubenswrapper[30278]: I0318 18:06:52.573709 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-serving-certs-ca-bundle\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.573760 master-0 kubenswrapper[30278]: I0318 18:06:52.573755 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " 
pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.574466 master-0 kubenswrapper[30278]: I0318 18:06:52.574427 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-metrics-client-ca\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.574807 master-0 kubenswrapper[30278]: E0318 18:06:52.574766 30278 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 18 18:06:52.574877 master-0 kubenswrapper[30278]: E0318 18:06:52.574830 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls podName:49ae0fd5-b0ec-4b37-b441-4943f3b160d4 nodeName:}" failed. No retries permitted until 2026-03-18 18:06:53.074812894 +0000 UTC m=+382.241997489 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls") pod "telemeter-client-cf85db6cf-b9mbd" (UID: "49ae0fd5-b0ec-4b37-b441-4943f3b160d4") : secret "telemeter-client-tls" not found Mar 18 18:06:52.574937 master-0 kubenswrapper[30278]: I0318 18:06:52.574918 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-serving-certs-ca-bundle\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.575296 master-0 kubenswrapper[30278]: I0318 18:06:52.575232 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-trusted-ca-bundle\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.578612 master-0 kubenswrapper[30278]: I0318 18:06:52.577642 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-federate-client-tls\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.580097 master-0 kubenswrapper[30278]: I0318 18:06:52.580055 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " 
pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.580587 master-0 kubenswrapper[30278]: I0318 18:06:52.580551 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-secret-telemeter-client\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:52.598965 master-0 kubenswrapper[30278]: I0318 18:06:52.598750 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkbwt\" (UniqueName: \"kubernetes.io/projected/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-kube-api-access-qkbwt\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:53.086102 master-0 kubenswrapper[30278]: I0318 18:06:53.086051 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:53.086599 master-0 kubenswrapper[30278]: E0318 18:06:53.086393 30278 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 18 18:06:53.086599 master-0 kubenswrapper[30278]: E0318 18:06:53.086473 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls podName:49ae0fd5-b0ec-4b37-b441-4943f3b160d4 nodeName:}" failed. No retries permitted until 2026-03-18 18:06:54.086449106 +0000 UTC m=+383.253633781 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls") pod "telemeter-client-cf85db6cf-b9mbd" (UID: "49ae0fd5-b0ec-4b37-b441-4943f3b160d4") : secret "telemeter-client-tls" not found Mar 18 18:06:54.105354 master-0 kubenswrapper[30278]: I0318 18:06:54.105232 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:54.106100 master-0 kubenswrapper[30278]: E0318 18:06:54.105443 30278 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 18 18:06:54.106100 master-0 kubenswrapper[30278]: E0318 18:06:54.105527 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls podName:49ae0fd5-b0ec-4b37-b441-4943f3b160d4 nodeName:}" failed. No retries permitted until 2026-03-18 18:06:56.105504009 +0000 UTC m=+385.272688614 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls") pod "telemeter-client-cf85db6cf-b9mbd" (UID: "49ae0fd5-b0ec-4b37-b441-4943f3b160d4") : secret "telemeter-client-tls" not found Mar 18 18:06:56.141951 master-0 kubenswrapper[30278]: I0318 18:06:56.141864 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:06:56.142892 master-0 kubenswrapper[30278]: E0318 18:06:56.142191 30278 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 18 18:06:56.142892 master-0 kubenswrapper[30278]: E0318 18:06:56.142365 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls podName:49ae0fd5-b0ec-4b37-b441-4943f3b160d4 nodeName:}" failed. No retries permitted until 2026-03-18 18:07:00.142333739 +0000 UTC m=+389.309518374 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls") pod "telemeter-client-cf85db6cf-b9mbd" (UID: "49ae0fd5-b0ec-4b37-b441-4943f3b160d4") : secret "telemeter-client-tls" not found Mar 18 18:07:00.221344 master-0 kubenswrapper[30278]: I0318 18:07:00.221292 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:07:00.222007 master-0 kubenswrapper[30278]: E0318 18:07:00.221542 30278 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 18 18:07:00.222157 master-0 kubenswrapper[30278]: E0318 18:07:00.222143 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls podName:49ae0fd5-b0ec-4b37-b441-4943f3b160d4 nodeName:}" failed. No retries permitted until 2026-03-18 18:07:08.222118102 +0000 UTC m=+397.389302697 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls") pod "telemeter-client-cf85db6cf-b9mbd" (UID: "49ae0fd5-b0ec-4b37-b441-4943f3b160d4") : secret "telemeter-client-tls" not found Mar 18 18:07:02.659313 master-0 kubenswrapper[30278]: I0318 18:07:02.659199 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 18:07:02.660106 master-0 kubenswrapper[30278]: I0318 18:07:02.659979 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="prometheus" containerID="cri-o://68e21c4284a08f52d019f354cb231dfaadd6758a8d35cc21c74f3c5191f9ed50" gracePeriod=600 Mar 18 18:07:02.660448 master-0 kubenswrapper[30278]: I0318 18:07:02.660357 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="kube-rbac-proxy" containerID="cri-o://e05f4784ce7ed803e04b81bf5155c163626dc7bb5a2b519d1e6ad4d4be64ffcb" gracePeriod=600 Mar 18 18:07:02.661010 master-0 kubenswrapper[30278]: I0318 18:07:02.660635 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="kube-rbac-proxy-thanos" containerID="cri-o://fb690aadcc1a5dfcc8a6cf73791cc8218f074fece27ca8193b4a729b5036736e" gracePeriod=600 Mar 18 18:07:02.661010 master-0 kubenswrapper[30278]: I0318 18:07:02.660736 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="kube-rbac-proxy-web" containerID="cri-o://9f353111ed2ff9d55f736938f996eecff5f8bf842f96ab8decc0cca74464a5d6" gracePeriod=600 Mar 18 
18:07:02.661010 master-0 kubenswrapper[30278]: I0318 18:07:02.660809 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="config-reloader" containerID="cri-o://3e7e5ed2596bcbac2ab91756313741ea4d24ac563598d8ab914b212f1f0abaec" gracePeriod=600 Mar 18 18:07:02.661010 master-0 kubenswrapper[30278]: I0318 18:07:02.660787 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="thanos-sidecar" containerID="cri-o://fbf00d88b8f5f234c5616d73c233119048706129aff08fe14ae0fd745e851f31" gracePeriod=600 Mar 18 18:07:02.972405 master-0 kubenswrapper[30278]: I0318 18:07:02.972228 30278 generic.go:334] "Generic (PLEG): container finished" podID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerID="fb690aadcc1a5dfcc8a6cf73791cc8218f074fece27ca8193b4a729b5036736e" exitCode=0 Mar 18 18:07:02.972724 master-0 kubenswrapper[30278]: I0318 18:07:02.972708 30278 generic.go:334] "Generic (PLEG): container finished" podID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerID="e05f4784ce7ed803e04b81bf5155c163626dc7bb5a2b519d1e6ad4d4be64ffcb" exitCode=0 Mar 18 18:07:02.972814 master-0 kubenswrapper[30278]: I0318 18:07:02.972802 30278 generic.go:334] "Generic (PLEG): container finished" podID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerID="9f353111ed2ff9d55f736938f996eecff5f8bf842f96ab8decc0cca74464a5d6" exitCode=0 Mar 18 18:07:02.972906 master-0 kubenswrapper[30278]: I0318 18:07:02.972894 30278 generic.go:334] "Generic (PLEG): container finished" podID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerID="fbf00d88b8f5f234c5616d73c233119048706129aff08fe14ae0fd745e851f31" exitCode=0 Mar 18 18:07:02.972998 master-0 kubenswrapper[30278]: I0318 18:07:02.972986 30278 generic.go:334] "Generic (PLEG): container finished" 
podID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerID="3e7e5ed2596bcbac2ab91756313741ea4d24ac563598d8ab914b212f1f0abaec" exitCode=0 Mar 18 18:07:02.973085 master-0 kubenswrapper[30278]: I0318 18:07:02.973073 30278 generic.go:334] "Generic (PLEG): container finished" podID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerID="68e21c4284a08f52d019f354cb231dfaadd6758a8d35cc21c74f3c5191f9ed50" exitCode=0 Mar 18 18:07:02.973172 master-0 kubenswrapper[30278]: I0318 18:07:02.972373 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c6aeb7b-9c05-470e-b31f-f4154aadf170","Type":"ContainerDied","Data":"fb690aadcc1a5dfcc8a6cf73791cc8218f074fece27ca8193b4a729b5036736e"} Mar 18 18:07:02.973299 master-0 kubenswrapper[30278]: I0318 18:07:02.973263 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c6aeb7b-9c05-470e-b31f-f4154aadf170","Type":"ContainerDied","Data":"e05f4784ce7ed803e04b81bf5155c163626dc7bb5a2b519d1e6ad4d4be64ffcb"} Mar 18 18:07:02.973394 master-0 kubenswrapper[30278]: I0318 18:07:02.973382 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c6aeb7b-9c05-470e-b31f-f4154aadf170","Type":"ContainerDied","Data":"9f353111ed2ff9d55f736938f996eecff5f8bf842f96ab8decc0cca74464a5d6"} Mar 18 18:07:02.973487 master-0 kubenswrapper[30278]: I0318 18:07:02.973471 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c6aeb7b-9c05-470e-b31f-f4154aadf170","Type":"ContainerDied","Data":"fbf00d88b8f5f234c5616d73c233119048706129aff08fe14ae0fd745e851f31"} Mar 18 18:07:02.973597 master-0 kubenswrapper[30278]: I0318 18:07:02.973580 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"5c6aeb7b-9c05-470e-b31f-f4154aadf170","Type":"ContainerDied","Data":"3e7e5ed2596bcbac2ab91756313741ea4d24ac563598d8ab914b212f1f0abaec"} Mar 18 18:07:02.973790 master-0 kubenswrapper[30278]: I0318 18:07:02.973777 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c6aeb7b-9c05-470e-b31f-f4154aadf170","Type":"ContainerDied","Data":"68e21c4284a08f52d019f354cb231dfaadd6758a8d35cc21c74f3c5191f9ed50"} Mar 18 18:07:03.186870 master-0 kubenswrapper[30278]: I0318 18:07:03.186812 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:03.281744 master-0 kubenswrapper[30278]: I0318 18:07:03.281564 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-metrics-client-ca\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.281744 master-0 kubenswrapper[30278]: I0318 18:07:03.281658 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.281744 master-0 kubenswrapper[30278]: I0318 18:07:03.281697 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-metrics-client-certs\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.282123 master-0 kubenswrapper[30278]: I0318 18:07:03.281921 30278 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-kube-rbac-proxy\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.282123 master-0 kubenswrapper[30278]: I0318 18:07:03.281964 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-grpc-tls\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.282655 master-0 kubenswrapper[30278]: I0318 18:07:03.282588 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg62n\" (UniqueName: \"kubernetes.io/projected/5c6aeb7b-9c05-470e-b31f-f4154aadf170-kube-api-access-wg62n\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.282766 master-0 kubenswrapper[30278]: I0318 18:07:03.282734 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.282827 master-0 kubenswrapper[30278]: I0318 18:07:03.282577 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "configmap-metrics-client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:07:03.282827 master-0 kubenswrapper[30278]: I0318 18:07:03.282803 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-web-config\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.282933 master-0 kubenswrapper[30278]: I0318 18:07:03.282884 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-k8s-rulefiles-0\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.282933 master-0 kubenswrapper[30278]: I0318 18:07:03.282912 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c6aeb7b-9c05-470e-b31f-f4154aadf170-tls-assets\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.283023 master-0 kubenswrapper[30278]: I0318 18:07:03.282949 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-tls\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.283023 master-0 kubenswrapper[30278]: I0318 18:07:03.282975 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-serving-certs-ca-bundle\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.283466 master-0 
kubenswrapper[30278]: I0318 18:07:03.283424 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.283548 master-0 kubenswrapper[30278]: I0318 18:07:03.283476 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-config\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.283548 master-0 kubenswrapper[30278]: I0318 18:07:03.283534 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-thanos-prometheus-http-client-file\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.283692 master-0 kubenswrapper[30278]: I0318 18:07:03.283669 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c6aeb7b-9c05-470e-b31f-f4154aadf170-config-out\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.283772 master-0 kubenswrapper[30278]: I0318 18:07:03.283748 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-kubelet-serving-ca-bundle\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.283823 master-0 kubenswrapper[30278]: I0318 18:07:03.283808 30278 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-k8s-db\") pod \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\" (UID: \"5c6aeb7b-9c05-470e-b31f-f4154aadf170\") " Mar 18 18:07:03.284589 master-0 kubenswrapper[30278]: I0318 18:07:03.284559 30278 reconciler_common.go:293] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.285156 master-0 kubenswrapper[30278]: I0318 18:07:03.285087 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:07:03.285232 master-0 kubenswrapper[30278]: I0318 18:07:03.285101 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:07:03.287157 master-0 kubenswrapper[30278]: I0318 18:07:03.287113 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:03.287363 master-0 kubenswrapper[30278]: I0318 18:07:03.287318 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:03.287484 master-0 kubenswrapper[30278]: I0318 18:07:03.287336 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:07:03.287800 master-0 kubenswrapper[30278]: I0318 18:07:03.287564 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c6aeb7b-9c05-470e-b31f-f4154aadf170-kube-api-access-wg62n" (OuterVolumeSpecName: "kube-api-access-wg62n") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "kube-api-access-wg62n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:07:03.287900 master-0 kubenswrapper[30278]: I0318 18:07:03.287827 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:07:03.288230 master-0 kubenswrapper[30278]: I0318 18:07:03.288177 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:03.288347 master-0 kubenswrapper[30278]: I0318 18:07:03.288302 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:03.289130 master-0 kubenswrapper[30278]: I0318 18:07:03.288996 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:07:03.290005 master-0 kubenswrapper[30278]: I0318 18:07:03.289928 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c6aeb7b-9c05-470e-b31f-f4154aadf170-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:07:03.290361 master-0 kubenswrapper[30278]: I0318 18:07:03.290332 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c6aeb7b-9c05-470e-b31f-f4154aadf170-config-out" (OuterVolumeSpecName: "config-out") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:07:03.290621 master-0 kubenswrapper[30278]: I0318 18:07:03.290587 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:03.290834 master-0 kubenswrapper[30278]: I0318 18:07:03.290808 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-config" (OuterVolumeSpecName: "config") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:03.290921 master-0 kubenswrapper[30278]: I0318 18:07:03.290888 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:03.292879 master-0 kubenswrapper[30278]: I0318 18:07:03.292808 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:03.326147 master-0 kubenswrapper[30278]: I0318 18:07:03.326068 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-web-config" (OuterVolumeSpecName: "web-config") pod "5c6aeb7b-9c05-470e-b31f-f4154aadf170" (UID: "5c6aeb7b-9c05-470e-b31f-f4154aadf170"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:03.386256 master-0 kubenswrapper[30278]: I0318 18:07:03.386132 30278 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.386256 master-0 kubenswrapper[30278]: I0318 18:07:03.386225 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.386256 master-0 kubenswrapper[30278]: I0318 18:07:03.386244 30278 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-thanos-prometheus-http-client-file\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.386256 master-0 kubenswrapper[30278]: I0318 18:07:03.386262 30278 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5c6aeb7b-9c05-470e-b31f-f4154aadf170-config-out\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.386256 master-0 kubenswrapper[30278]: I0318 18:07:03.386298 30278 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.386813 master-0 kubenswrapper[30278]: I0318 18:07:03.386316 30278 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-k8s-db\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.386813 master-0 kubenswrapper[30278]: I0318 18:07:03.386331 30278 reconciler_common.go:293] "Volume 
detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.386813 master-0 kubenswrapper[30278]: I0318 18:07:03.386348 30278 reconciler_common.go:293] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.386813 master-0 kubenswrapper[30278]: I0318 18:07:03.386361 30278 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.386813 master-0 kubenswrapper[30278]: I0318 18:07:03.386375 30278 reconciler_common.go:293] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-grpc-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.386813 master-0 kubenswrapper[30278]: I0318 18:07:03.386388 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg62n\" (UniqueName: \"kubernetes.io/projected/5c6aeb7b-9c05-470e-b31f-f4154aadf170-kube-api-access-wg62n\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.386813 master-0 kubenswrapper[30278]: I0318 18:07:03.386402 30278 reconciler_common.go:293] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.386813 master-0 kubenswrapper[30278]: I0318 18:07:03.386414 30278 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-web-config\") on node \"master-0\" 
DevicePath \"\"" Mar 18 18:07:03.386813 master-0 kubenswrapper[30278]: I0318 18:07:03.386429 30278 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-prometheus-k8s-rulefiles-0\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.386813 master-0 kubenswrapper[30278]: I0318 18:07:03.386442 30278 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5c6aeb7b-9c05-470e-b31f-f4154aadf170-tls-assets\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.386813 master-0 kubenswrapper[30278]: I0318 18:07:03.386455 30278 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/5c6aeb7b-9c05-470e-b31f-f4154aadf170-secret-prometheus-k8s-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.386813 master-0 kubenswrapper[30278]: I0318 18:07:03.386469 30278 reconciler_common.go:293] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6aeb7b-9c05-470e-b31f-f4154aadf170-configmap-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:03.986235 master-0 kubenswrapper[30278]: I0318 18:07:03.986040 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"5c6aeb7b-9c05-470e-b31f-f4154aadf170","Type":"ContainerDied","Data":"eab143820739697b21d7c3673655eccadcd4d1f56b4c551303a748b8c3bd62a6"} Mar 18 18:07:03.986235 master-0 kubenswrapper[30278]: I0318 18:07:03.986127 30278 scope.go:117] "RemoveContainer" containerID="fb690aadcc1a5dfcc8a6cf73791cc8218f074fece27ca8193b4a729b5036736e" Mar 18 18:07:03.986235 master-0 kubenswrapper[30278]: I0318 18:07:03.986183 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.013308 master-0 kubenswrapper[30278]: I0318 18:07:04.013249 30278 scope.go:117] "RemoveContainer" containerID="e05f4784ce7ed803e04b81bf5155c163626dc7bb5a2b519d1e6ad4d4be64ffcb" Mar 18 18:07:04.039783 master-0 kubenswrapper[30278]: I0318 18:07:04.039742 30278 scope.go:117] "RemoveContainer" containerID="9f353111ed2ff9d55f736938f996eecff5f8bf842f96ab8decc0cca74464a5d6" Mar 18 18:07:04.040783 master-0 kubenswrapper[30278]: I0318 18:07:04.040760 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 18:07:04.049225 master-0 kubenswrapper[30278]: I0318 18:07:04.048112 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 18:07:04.071438 master-0 kubenswrapper[30278]: I0318 18:07:04.070736 30278 scope.go:117] "RemoveContainer" containerID="fbf00d88b8f5f234c5616d73c233119048706129aff08fe14ae0fd745e851f31" Mar 18 18:07:04.071438 master-0 kubenswrapper[30278]: I0318 18:07:04.070912 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 18:07:04.071438 master-0 kubenswrapper[30278]: E0318 18:07:04.071303 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="kube-rbac-proxy" Mar 18 18:07:04.071438 master-0 kubenswrapper[30278]: I0318 18:07:04.071321 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="kube-rbac-proxy" Mar 18 18:07:04.071438 master-0 kubenswrapper[30278]: E0318 18:07:04.071343 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="config-reloader" Mar 18 18:07:04.071438 master-0 kubenswrapper[30278]: I0318 18:07:04.071355 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" 
containerName="config-reloader" Mar 18 18:07:04.071438 master-0 kubenswrapper[30278]: E0318 18:07:04.071380 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="kube-rbac-proxy-web" Mar 18 18:07:04.071793 master-0 kubenswrapper[30278]: I0318 18:07:04.071476 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="kube-rbac-proxy-web" Mar 18 18:07:04.071793 master-0 kubenswrapper[30278]: E0318 18:07:04.071499 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="init-config-reloader" Mar 18 18:07:04.071793 master-0 kubenswrapper[30278]: I0318 18:07:04.071509 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="init-config-reloader" Mar 18 18:07:04.071793 master-0 kubenswrapper[30278]: E0318 18:07:04.071522 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="thanos-sidecar" Mar 18 18:07:04.071793 master-0 kubenswrapper[30278]: I0318 18:07:04.071543 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="thanos-sidecar" Mar 18 18:07:04.071793 master-0 kubenswrapper[30278]: E0318 18:07:04.071554 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="kube-rbac-proxy-thanos" Mar 18 18:07:04.071793 master-0 kubenswrapper[30278]: I0318 18:07:04.071562 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="kube-rbac-proxy-thanos" Mar 18 18:07:04.071793 master-0 kubenswrapper[30278]: E0318 18:07:04.071640 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="prometheus" Mar 18 18:07:04.071793 
master-0 kubenswrapper[30278]: I0318 18:07:04.071652 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="prometheus" Mar 18 18:07:04.072144 master-0 kubenswrapper[30278]: I0318 18:07:04.071851 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="kube-rbac-proxy-thanos" Mar 18 18:07:04.072144 master-0 kubenswrapper[30278]: I0318 18:07:04.071875 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="prometheus" Mar 18 18:07:04.072144 master-0 kubenswrapper[30278]: I0318 18:07:04.071911 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="kube-rbac-proxy" Mar 18 18:07:04.072144 master-0 kubenswrapper[30278]: I0318 18:07:04.071927 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="thanos-sidecar" Mar 18 18:07:04.072144 master-0 kubenswrapper[30278]: I0318 18:07:04.071939 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="config-reloader" Mar 18 18:07:04.072144 master-0 kubenswrapper[30278]: I0318 18:07:04.071950 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" containerName="kube-rbac-proxy-web" Mar 18 18:07:04.074523 master-0 kubenswrapper[30278]: I0318 18:07:04.074470 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.078457 master-0 kubenswrapper[30278]: I0318 18:07:04.077392 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 18 18:07:04.078846 master-0 kubenswrapper[30278]: I0318 18:07:04.078674 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 18 18:07:04.079069 master-0 kubenswrapper[30278]: I0318 18:07:04.078994 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-pm4sf" Mar 18 18:07:04.079145 master-0 kubenswrapper[30278]: I0318 18:07:04.079095 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 18 18:07:04.079145 master-0 kubenswrapper[30278]: I0318 18:07:04.079140 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 18 18:07:04.079235 master-0 kubenswrapper[30278]: I0318 18:07:04.079225 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 18 18:07:04.079327 master-0 kubenswrapper[30278]: I0318 18:07:04.079267 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 18 18:07:04.080500 master-0 kubenswrapper[30278]: I0318 18:07:04.079379 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 18 18:07:04.080500 master-0 kubenswrapper[30278]: I0318 18:07:04.079655 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 18 18:07:04.082946 master-0 kubenswrapper[30278]: I0318 18:07:04.082828 30278 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 18 18:07:04.087299 master-0 kubenswrapper[30278]: I0318 18:07:04.085791 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 18 18:07:04.090497 master-0 kubenswrapper[30278]: I0318 18:07:04.090214 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 18 18:07:04.090603 master-0 kubenswrapper[30278]: I0318 18:07:04.090522 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-66rqjfmn9qiqc" Mar 18 18:07:04.103370 master-0 kubenswrapper[30278]: I0318 18:07:04.098367 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 18:07:04.124780 master-0 kubenswrapper[30278]: I0318 18:07:04.124722 30278 scope.go:117] "RemoveContainer" containerID="3e7e5ed2596bcbac2ab91756313741ea4d24ac563598d8ab914b212f1f0abaec" Mar 18 18:07:04.155797 master-0 kubenswrapper[30278]: I0318 18:07:04.150024 30278 scope.go:117] "RemoveContainer" containerID="68e21c4284a08f52d019f354cb231dfaadd6758a8d35cc21c74f3c5191f9ed50" Mar 18 18:07:04.203350 master-0 kubenswrapper[30278]: I0318 18:07:04.201585 30278 scope.go:117] "RemoveContainer" containerID="c8221e27a9c966e7f7abb1d734a50b4f7eadfeeed99bb31aef81d0cd99c3e523" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.208595 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-web-config\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.208675 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/794bfefe-f0c1-4241-a015-d520b5e2d44a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.208739 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.208776 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.208803 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/794bfefe-f0c1-4241-a015-d520b5e2d44a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.208831 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 
18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.208856 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.208889 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.208929 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/794bfefe-f0c1-4241-a015-d520b5e2d44a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.208952 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.208980 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/794bfefe-f0c1-4241-a015-d520b5e2d44a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.209009 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/794bfefe-f0c1-4241-a015-d520b5e2d44a-config-out\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.209051 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/794bfefe-f0c1-4241-a015-d520b5e2d44a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.209117 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrw8p\" (UniqueName: \"kubernetes.io/projected/794bfefe-f0c1-4241-a015-d520b5e2d44a-kube-api-access-xrw8p\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.209146 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.209171 30278 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/794bfefe-f0c1-4241-a015-d520b5e2d44a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.209201 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/794bfefe-f0c1-4241-a015-d520b5e2d44a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.209530 master-0 kubenswrapper[30278]: I0318 18:07:04.209232 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-config\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.310677 master-0 kubenswrapper[30278]: I0318 18:07:04.310598 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-config\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.311024 master-0 kubenswrapper[30278]: I0318 18:07:04.310691 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-web-config\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.311024 master-0 kubenswrapper[30278]: I0318 18:07:04.310731 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/794bfefe-f0c1-4241-a015-d520b5e2d44a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.311024 master-0 kubenswrapper[30278]: I0318 18:07:04.310784 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.311024 master-0 kubenswrapper[30278]: I0318 18:07:04.310812 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.311024 master-0 kubenswrapper[30278]: I0318 18:07:04.310839 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/794bfefe-f0c1-4241-a015-d520b5e2d44a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.311024 master-0 kubenswrapper[30278]: I0318 18:07:04.310866 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.311024 master-0 
kubenswrapper[30278]: I0318 18:07:04.310890 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.311024 master-0 kubenswrapper[30278]: I0318 18:07:04.310919 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.311024 master-0 kubenswrapper[30278]: I0318 18:07:04.310950 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.312362 master-0 kubenswrapper[30278]: I0318 18:07:04.311619 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/794bfefe-f0c1-4241-a015-d520b5e2d44a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.312362 master-0 kubenswrapper[30278]: I0318 18:07:04.311723 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/794bfefe-f0c1-4241-a015-d520b5e2d44a-prometheus-trusted-ca-bundle\") pod 
\"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.312362 master-0 kubenswrapper[30278]: I0318 18:07:04.311765 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/794bfefe-f0c1-4241-a015-d520b5e2d44a-config-out\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.312362 master-0 kubenswrapper[30278]: I0318 18:07:04.311798 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/794bfefe-f0c1-4241-a015-d520b5e2d44a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.312362 master-0 kubenswrapper[30278]: I0318 18:07:04.311909 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrw8p\" (UniqueName: \"kubernetes.io/projected/794bfefe-f0c1-4241-a015-d520b5e2d44a-kube-api-access-xrw8p\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.312362 master-0 kubenswrapper[30278]: I0318 18:07:04.311941 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.312362 master-0 kubenswrapper[30278]: I0318 18:07:04.311970 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/794bfefe-f0c1-4241-a015-d520b5e2d44a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" 
(UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.312362 master-0 kubenswrapper[30278]: I0318 18:07:04.312001 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/794bfefe-f0c1-4241-a015-d520b5e2d44a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.312362 master-0 kubenswrapper[30278]: I0318 18:07:04.312172 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/794bfefe-f0c1-4241-a015-d520b5e2d44a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.313005 master-0 kubenswrapper[30278]: I0318 18:07:04.312815 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/794bfefe-f0c1-4241-a015-d520b5e2d44a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.313005 master-0 kubenswrapper[30278]: I0318 18:07:04.312824 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/794bfefe-f0c1-4241-a015-d520b5e2d44a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.313399 master-0 kubenswrapper[30278]: I0318 18:07:04.313356 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: 
\"kubernetes.io/empty-dir/794bfefe-f0c1-4241-a015-d520b5e2d44a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.315019 master-0 kubenswrapper[30278]: I0318 18:07:04.314567 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/794bfefe-f0c1-4241-a015-d520b5e2d44a-config-out\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.315019 master-0 kubenswrapper[30278]: I0318 18:07:04.314572 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/794bfefe-f0c1-4241-a015-d520b5e2d44a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.315019 master-0 kubenswrapper[30278]: I0318 18:07:04.314948 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-config\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.322197 master-0 kubenswrapper[30278]: I0318 18:07:04.322119 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.324669 master-0 kubenswrapper[30278]: I0318 18:07:04.324022 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.324669 master-0 kubenswrapper[30278]: I0318 18:07:04.324079 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.324669 master-0 kubenswrapper[30278]: I0318 18:07:04.324140 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.324669 master-0 kubenswrapper[30278]: I0318 18:07:04.324415 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.324669 master-0 kubenswrapper[30278]: I0318 18:07:04.324603 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/794bfefe-f0c1-4241-a015-d520b5e2d44a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.325143 master-0 kubenswrapper[30278]: I0318 18:07:04.325100 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.331491 master-0 kubenswrapper[30278]: I0318 18:07:04.329190 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-web-config\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.331491 master-0 kubenswrapper[30278]: I0318 18:07:04.329212 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/794bfefe-f0c1-4241-a015-d520b5e2d44a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.331491 master-0 kubenswrapper[30278]: I0318 18:07:04.330874 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/794bfefe-f0c1-4241-a015-d520b5e2d44a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.331822 master-0 kubenswrapper[30278]: I0318 18:07:04.331713 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrw8p\" (UniqueName: \"kubernetes.io/projected/794bfefe-f0c1-4241-a015-d520b5e2d44a-kube-api-access-xrw8p\") pod \"prometheus-k8s-0\" (UID: \"794bfefe-f0c1-4241-a015-d520b5e2d44a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.409545 master-0 kubenswrapper[30278]: I0318 18:07:04.409267 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:04.877945 master-0 kubenswrapper[30278]: I0318 18:07:04.877638 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 18:07:04.887065 master-0 kubenswrapper[30278]: W0318 18:07:04.887011 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod794bfefe_f0c1_4241_a015_d520b5e2d44a.slice/crio-69104e6cf038ed82a2618e0f3e27b598ef1abef7feac214fc09f6c2cf1761f92 WatchSource:0}: Error finding container 69104e6cf038ed82a2618e0f3e27b598ef1abef7feac214fc09f6c2cf1761f92: Status 404 returned error can't find the container with id 69104e6cf038ed82a2618e0f3e27b598ef1abef7feac214fc09f6c2cf1761f92 Mar 18 18:07:05.001433 master-0 kubenswrapper[30278]: I0318 18:07:05.001353 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"794bfefe-f0c1-4241-a015-d520b5e2d44a","Type":"ContainerStarted","Data":"69104e6cf038ed82a2618e0f3e27b598ef1abef7feac214fc09f6c2cf1761f92"} Mar 18 18:07:05.067963 master-0 kubenswrapper[30278]: I0318 18:07:05.067911 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c6aeb7b-9c05-470e-b31f-f4154aadf170" path="/var/lib/kubelet/pods/5c6aeb7b-9c05-470e-b31f-f4154aadf170/volumes" Mar 18 18:07:06.012901 master-0 kubenswrapper[30278]: I0318 18:07:06.012791 30278 generic.go:334] "Generic (PLEG): container finished" podID="794bfefe-f0c1-4241-a015-d520b5e2d44a" containerID="2327d77a7c2d53ea040799cbfd96a24e5084dd27fb5079375451e8a1f9c7d260" exitCode=0 Mar 18 18:07:06.012901 master-0 kubenswrapper[30278]: I0318 18:07:06.012898 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"794bfefe-f0c1-4241-a015-d520b5e2d44a","Type":"ContainerDied","Data":"2327d77a7c2d53ea040799cbfd96a24e5084dd27fb5079375451e8a1f9c7d260"} Mar 18 
18:07:07.026471 master-0 kubenswrapper[30278]: I0318 18:07:07.026382 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"794bfefe-f0c1-4241-a015-d520b5e2d44a","Type":"ContainerStarted","Data":"d05365bd624179c957973e2d8022405b04fbcbad937e59a5456f370ab58f608c"} Mar 18 18:07:07.026471 master-0 kubenswrapper[30278]: I0318 18:07:07.026477 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"794bfefe-f0c1-4241-a015-d520b5e2d44a","Type":"ContainerStarted","Data":"6073860a9f339ca4355d41a4e730ac2bc54f9b3d64c28e3a4f7e5583d5583c58"} Mar 18 18:07:07.026471 master-0 kubenswrapper[30278]: I0318 18:07:07.026499 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"794bfefe-f0c1-4241-a015-d520b5e2d44a","Type":"ContainerStarted","Data":"fb0584f18c6036519f93879494496dc8ff8d4372e17929c72cfa8af360aa8be4"} Mar 18 18:07:07.027661 master-0 kubenswrapper[30278]: I0318 18:07:07.026519 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"794bfefe-f0c1-4241-a015-d520b5e2d44a","Type":"ContainerStarted","Data":"c02791a02528bbf2b94267f6c77356fcfbebc52e8984b7a025056c37dda221ac"} Mar 18 18:07:07.027661 master-0 kubenswrapper[30278]: I0318 18:07:07.026537 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"794bfefe-f0c1-4241-a015-d520b5e2d44a","Type":"ContainerStarted","Data":"f2e3cf9144a9fcee5fbe395a01fb7726d8f1b5b56b85bf6acf06eb16d5c3f208"} Mar 18 18:07:07.027661 master-0 kubenswrapper[30278]: I0318 18:07:07.026552 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"794bfefe-f0c1-4241-a015-d520b5e2d44a","Type":"ContainerStarted","Data":"449cc6b90fe315e926fa0a856f0b0d6e4d6fb1954ae3e8cf51eb697bf7fefef6"} Mar 18 18:07:07.072747 master-0 
kubenswrapper[30278]: I0318 18:07:07.072606 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.072580014 podStartE2EDuration="3.072580014s" podCreationTimestamp="2026-03-18 18:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:07:07.065023831 +0000 UTC m=+396.232208466" watchObservedRunningTime="2026-03-18 18:07:07.072580014 +0000 UTC m=+396.239764609" Mar 18 18:07:08.289303 master-0 kubenswrapper[30278]: I0318 18:07:08.289183 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:07:08.290260 master-0 kubenswrapper[30278]: E0318 18:07:08.289477 30278 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Mar 18 18:07:08.290260 master-0 kubenswrapper[30278]: E0318 18:07:08.289614 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls podName:49ae0fd5-b0ec-4b37-b441-4943f3b160d4 nodeName:}" failed. No retries permitted until 2026-03-18 18:07:24.289581847 +0000 UTC m=+413.456766432 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls") pod "telemeter-client-cf85db6cf-b9mbd" (UID: "49ae0fd5-b0ec-4b37-b441-4943f3b160d4") : secret "telemeter-client-tls" not found Mar 18 18:07:09.409818 master-0 kubenswrapper[30278]: I0318 18:07:09.409703 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:07:17.472029 master-0 kubenswrapper[30278]: I0318 18:07:17.471928 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t" podUID="e5ec16cb-0d08-44d7-8f1c-8965a5613854" containerName="oauth-openshift" containerID="cri-o://0c6ff64b886c97c98a34d7858beca9cfac907017560a50040566b3ad5fc39ca4" gracePeriod=15 Mar 18 18:07:17.552990 master-0 kubenswrapper[30278]: I0318 18:07:17.552881 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6b7657f69f-w666c" podUID="bc445b25-803f-4668-9a96-d539108d2527" containerName="console" containerID="cri-o://3f98d79ffc4daa3f60484057dea724bfba2f41b2060e80bf35923e2ec901080d" gracePeriod=15 Mar 18 18:07:17.731311 master-0 kubenswrapper[30278]: I0318 18:07:17.730814 30278 patch_prober.go:28] interesting pod/console-6b7657f69f-w666c container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" start-of-body= Mar 18 18:07:17.731311 master-0 kubenswrapper[30278]: I0318 18:07:17.730985 30278 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-6b7657f69f-w666c" podUID="bc445b25-803f-4668-9a96-d539108d2527" containerName="console" probeResult="failure" output="Get \"https://10.128.0.102:8443/health\": dial tcp 10.128.0.102:8443: connect: connection refused" Mar 18 18:07:18.025491 
master-0 kubenswrapper[30278]: I0318 18:07:18.025430 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t" Mar 18 18:07:18.066220 master-0 kubenswrapper[30278]: I0318 18:07:18.066094 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv"] Mar 18 18:07:18.066531 master-0 kubenswrapper[30278]: E0318 18:07:18.066502 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ec16cb-0d08-44d7-8f1c-8965a5613854" containerName="oauth-openshift" Mar 18 18:07:18.066531 master-0 kubenswrapper[30278]: I0318 18:07:18.066526 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ec16cb-0d08-44d7-8f1c-8965a5613854" containerName="oauth-openshift" Mar 18 18:07:18.066819 master-0 kubenswrapper[30278]: I0318 18:07:18.066787 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5ec16cb-0d08-44d7-8f1c-8965a5613854" containerName="oauth-openshift" Mar 18 18:07:18.071746 master-0 kubenswrapper[30278]: I0318 18:07:18.071704 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.074084 master-0 kubenswrapper[30278]: I0318 18:07:18.074025 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b7657f69f-w666c_bc445b25-803f-4668-9a96-d539108d2527/console/0.log" Mar 18 18:07:18.074171 master-0 kubenswrapper[30278]: I0318 18:07:18.074137 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:07:18.077344 master-0 kubenswrapper[30278]: I0318 18:07:18.077307 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv"] Mar 18 18:07:18.081163 master-0 kubenswrapper[30278]: I0318 18:07:18.081122 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-session\") pod \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " Mar 18 18:07:18.081241 master-0 kubenswrapper[30278]: I0318 18:07:18.081178 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-login\") pod \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " Mar 18 18:07:18.081241 master-0 kubenswrapper[30278]: I0318 18:07:18.081233 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-service-ca\") pod \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " Mar 18 18:07:18.081384 master-0 kubenswrapper[30278]: I0318 18:07:18.081304 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-ocp-branding-template\") pod \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " Mar 18 18:07:18.081384 master-0 kubenswrapper[30278]: I0318 18:07:18.081370 30278 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-audit-policies\") pod \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " Mar 18 18:07:18.081466 master-0 kubenswrapper[30278]: I0318 18:07:18.081450 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-serving-cert\") pod \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " Mar 18 18:07:18.081551 master-0 kubenswrapper[30278]: I0318 18:07:18.081507 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-cliconfig\") pod \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " Mar 18 18:07:18.081590 master-0 kubenswrapper[30278]: I0318 18:07:18.081569 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtxqt\" (UniqueName: \"kubernetes.io/projected/e5ec16cb-0d08-44d7-8f1c-8965a5613854-kube-api-access-mtxqt\") pod \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " Mar 18 18:07:18.081630 master-0 kubenswrapper[30278]: I0318 18:07:18.081613 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-router-certs\") pod \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " Mar 18 18:07:18.081682 master-0 kubenswrapper[30278]: I0318 18:07:18.081659 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-trusted-ca-bundle\") pod \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " Mar 18 18:07:18.081734 master-0 kubenswrapper[30278]: I0318 18:07:18.081713 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5ec16cb-0d08-44d7-8f1c-8965a5613854-audit-dir\") pod \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " Mar 18 18:07:18.081817 master-0 kubenswrapper[30278]: I0318 18:07:18.081783 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-provider-selection\") pod \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " Mar 18 18:07:18.081854 master-0 kubenswrapper[30278]: I0318 18:07:18.081825 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-error\") pod \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\" (UID: \"e5ec16cb-0d08-44d7-8f1c-8965a5613854\") " Mar 18 18:07:18.083742 master-0 kubenswrapper[30278]: I0318 18:07:18.083714 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e5ec16cb-0d08-44d7-8f1c-8965a5613854" (UID: "e5ec16cb-0d08-44d7-8f1c-8965a5613854"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:18.086240 master-0 kubenswrapper[30278]: I0318 18:07:18.086161 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e5ec16cb-0d08-44d7-8f1c-8965a5613854" (UID: "e5ec16cb-0d08-44d7-8f1c-8965a5613854"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:18.086635 master-0 kubenswrapper[30278]: I0318 18:07:18.086607 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e5ec16cb-0d08-44d7-8f1c-8965a5613854" (UID: "e5ec16cb-0d08-44d7-8f1c-8965a5613854"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:18.087115 master-0 kubenswrapper[30278]: I0318 18:07:18.087082 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e5ec16cb-0d08-44d7-8f1c-8965a5613854" (UID: "e5ec16cb-0d08-44d7-8f1c-8965a5613854"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:07:18.087210 master-0 kubenswrapper[30278]: I0318 18:07:18.087116 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5ec16cb-0d08-44d7-8f1c-8965a5613854-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e5ec16cb-0d08-44d7-8f1c-8965a5613854" (UID: "e5ec16cb-0d08-44d7-8f1c-8965a5613854"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:07:18.087945 master-0 kubenswrapper[30278]: I0318 18:07:18.087923 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e5ec16cb-0d08-44d7-8f1c-8965a5613854" (UID: "e5ec16cb-0d08-44d7-8f1c-8965a5613854"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:07:18.088684 master-0 kubenswrapper[30278]: I0318 18:07:18.088648 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e5ec16cb-0d08-44d7-8f1c-8965a5613854" (UID: "e5ec16cb-0d08-44d7-8f1c-8965a5613854"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:18.089181 master-0 kubenswrapper[30278]: I0318 18:07:18.088926 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5ec16cb-0d08-44d7-8f1c-8965a5613854-kube-api-access-mtxqt" (OuterVolumeSpecName: "kube-api-access-mtxqt") pod "e5ec16cb-0d08-44d7-8f1c-8965a5613854" (UID: "e5ec16cb-0d08-44d7-8f1c-8965a5613854"). InnerVolumeSpecName "kube-api-access-mtxqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:07:18.089552 master-0 kubenswrapper[30278]: I0318 18:07:18.089524 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e5ec16cb-0d08-44d7-8f1c-8965a5613854" (UID: "e5ec16cb-0d08-44d7-8f1c-8965a5613854"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:07:18.090008 master-0 kubenswrapper[30278]: I0318 18:07:18.089954 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e5ec16cb-0d08-44d7-8f1c-8965a5613854" (UID: "e5ec16cb-0d08-44d7-8f1c-8965a5613854"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:07:18.091975 master-0 kubenswrapper[30278]: I0318 18:07:18.091901 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e5ec16cb-0d08-44d7-8f1c-8965a5613854" (UID: "e5ec16cb-0d08-44d7-8f1c-8965a5613854"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:18.092828 master-0 kubenswrapper[30278]: I0318 18:07:18.092798 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e5ec16cb-0d08-44d7-8f1c-8965a5613854" (UID: "e5ec16cb-0d08-44d7-8f1c-8965a5613854"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:18.104857 master-0 kubenswrapper[30278]: I0318 18:07:18.097229 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e5ec16cb-0d08-44d7-8f1c-8965a5613854" (UID: "e5ec16cb-0d08-44d7-8f1c-8965a5613854"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:18.131034 master-0 kubenswrapper[30278]: I0318 18:07:18.130978 30278 generic.go:334] "Generic (PLEG): container finished" podID="e5ec16cb-0d08-44d7-8f1c-8965a5613854" containerID="0c6ff64b886c97c98a34d7858beca9cfac907017560a50040566b3ad5fc39ca4" exitCode=0 Mar 18 18:07:18.131441 master-0 kubenswrapper[30278]: I0318 18:07:18.131051 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t" event={"ID":"e5ec16cb-0d08-44d7-8f1c-8965a5613854","Type":"ContainerDied","Data":"0c6ff64b886c97c98a34d7858beca9cfac907017560a50040566b3ad5fc39ca4"} Mar 18 18:07:18.131441 master-0 kubenswrapper[30278]: I0318 18:07:18.131078 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t" event={"ID":"e5ec16cb-0d08-44d7-8f1c-8965a5613854","Type":"ContainerDied","Data":"3614cc6956911548067b704c5c0f5658ad46e793b076fb2d8a91f86f1be1a500"} Mar 18 18:07:18.131441 master-0 kubenswrapper[30278]: I0318 18:07:18.131103 30278 scope.go:117] "RemoveContainer" containerID="0c6ff64b886c97c98a34d7858beca9cfac907017560a50040566b3ad5fc39ca4" Mar 18 18:07:18.131772 master-0 kubenswrapper[30278]: I0318 18:07:18.131742 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d89d9c4d9-57l4t" Mar 18 18:07:18.138512 master-0 kubenswrapper[30278]: I0318 18:07:18.138472 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6b7657f69f-w666c_bc445b25-803f-4668-9a96-d539108d2527/console/0.log" Mar 18 18:07:18.138725 master-0 kubenswrapper[30278]: I0318 18:07:18.138520 30278 generic.go:334] "Generic (PLEG): container finished" podID="bc445b25-803f-4668-9a96-d539108d2527" containerID="3f98d79ffc4daa3f60484057dea724bfba2f41b2060e80bf35923e2ec901080d" exitCode=2 Mar 18 18:07:18.138725 master-0 kubenswrapper[30278]: I0318 18:07:18.138549 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b7657f69f-w666c" event={"ID":"bc445b25-803f-4668-9a96-d539108d2527","Type":"ContainerDied","Data":"3f98d79ffc4daa3f60484057dea724bfba2f41b2060e80bf35923e2ec901080d"} Mar 18 18:07:18.138725 master-0 kubenswrapper[30278]: I0318 18:07:18.138576 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6b7657f69f-w666c" event={"ID":"bc445b25-803f-4668-9a96-d539108d2527","Type":"ContainerDied","Data":"a7fbd13c897d2bdfe694281f979b8537a87069cb5f00fe4155043737949583e5"} Mar 18 18:07:18.138725 master-0 kubenswrapper[30278]: I0318 18:07:18.138622 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6b7657f69f-w666c" Mar 18 18:07:18.169985 master-0 kubenswrapper[30278]: I0318 18:07:18.169793 30278 scope.go:117] "RemoveContainer" containerID="0c6ff64b886c97c98a34d7858beca9cfac907017560a50040566b3ad5fc39ca4" Mar 18 18:07:18.170900 master-0 kubenswrapper[30278]: E0318 18:07:18.170569 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c6ff64b886c97c98a34d7858beca9cfac907017560a50040566b3ad5fc39ca4\": container with ID starting with 0c6ff64b886c97c98a34d7858beca9cfac907017560a50040566b3ad5fc39ca4 not found: ID does not exist" containerID="0c6ff64b886c97c98a34d7858beca9cfac907017560a50040566b3ad5fc39ca4" Mar 18 18:07:18.170900 master-0 kubenswrapper[30278]: I0318 18:07:18.170673 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c6ff64b886c97c98a34d7858beca9cfac907017560a50040566b3ad5fc39ca4"} err="failed to get container status \"0c6ff64b886c97c98a34d7858beca9cfac907017560a50040566b3ad5fc39ca4\": rpc error: code = NotFound desc = could not find container \"0c6ff64b886c97c98a34d7858beca9cfac907017560a50040566b3ad5fc39ca4\": container with ID starting with 0c6ff64b886c97c98a34d7858beca9cfac907017560a50040566b3ad5fc39ca4 not found: ID does not exist" Mar 18 18:07:18.170900 master-0 kubenswrapper[30278]: I0318 18:07:18.170718 30278 scope.go:117] "RemoveContainer" containerID="3f98d79ffc4daa3f60484057dea724bfba2f41b2060e80bf35923e2ec901080d" Mar 18 18:07:18.190049 master-0 kubenswrapper[30278]: I0318 18:07:18.189948 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc445b25-803f-4668-9a96-d539108d2527-console-oauth-config\") pod \"bc445b25-803f-4668-9a96-d539108d2527\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " Mar 18 18:07:18.191318 master-0 kubenswrapper[30278]: I0318 18:07:18.191253 
30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc445b25-803f-4668-9a96-d539108d2527-console-serving-cert\") pod \"bc445b25-803f-4668-9a96-d539108d2527\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " Mar 18 18:07:18.191427 master-0 kubenswrapper[30278]: I0318 18:07:18.191345 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-service-ca\") pod \"bc445b25-803f-4668-9a96-d539108d2527\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " Mar 18 18:07:18.191427 master-0 kubenswrapper[30278]: I0318 18:07:18.191387 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-console-config\") pod \"bc445b25-803f-4668-9a96-d539108d2527\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " Mar 18 18:07:18.191567 master-0 kubenswrapper[30278]: I0318 18:07:18.191458 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbqtz\" (UniqueName: \"kubernetes.io/projected/bc445b25-803f-4668-9a96-d539108d2527-kube-api-access-bbqtz\") pod \"bc445b25-803f-4668-9a96-d539108d2527\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " Mar 18 18:07:18.191567 master-0 kubenswrapper[30278]: I0318 18:07:18.191510 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-oauth-serving-cert\") pod \"bc445b25-803f-4668-9a96-d539108d2527\" (UID: \"bc445b25-803f-4668-9a96-d539108d2527\") " Mar 18 18:07:18.193316 master-0 kubenswrapper[30278]: I0318 18:07:18.192493 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-service-ca" (OuterVolumeSpecName: "service-ca") pod "bc445b25-803f-4668-9a96-d539108d2527" (UID: "bc445b25-803f-4668-9a96-d539108d2527"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:07:18.193316 master-0 kubenswrapper[30278]: I0318 18:07:18.192527 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bc445b25-803f-4668-9a96-d539108d2527" (UID: "bc445b25-803f-4668-9a96-d539108d2527"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:07:18.193316 master-0 kubenswrapper[30278]: I0318 18:07:18.192570 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-router-certs\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.193316 master-0 kubenswrapper[30278]: I0318 18:07:18.192676 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.193316 master-0 kubenswrapper[30278]: I0318 18:07:18.192833 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a85f9e61-015c-41d5-bb38-de74da6a46da-audit-dir\") pod 
\"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.193316 master-0 kubenswrapper[30278]: I0318 18:07:18.192874 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-user-template-login\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.193316 master-0 kubenswrapper[30278]: I0318 18:07:18.192967 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-service-ca\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.193316 master-0 kubenswrapper[30278]: I0318 18:07:18.193093 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.193316 master-0 kubenswrapper[30278]: I0318 18:07:18.193135 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq9ps\" (UniqueName: \"kubernetes.io/projected/a85f9e61-015c-41d5-bb38-de74da6a46da-kube-api-access-zq9ps\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " 
pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.193316 master-0 kubenswrapper[30278]: I0318 18:07:18.193289 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a85f9e61-015c-41d5-bb38-de74da6a46da-audit-policies\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.193868 master-0 kubenswrapper[30278]: I0318 18:07:18.193386 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.193868 master-0 kubenswrapper[30278]: I0318 18:07:18.193439 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-user-template-error\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.195523 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.204757 master-0 
kubenswrapper[30278]: I0318 18:07:18.195753 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.195813 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-session\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196029 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196053 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196068 30278 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196080 30278 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196095 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196113 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196124 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196142 30278 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196157 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196168 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196180 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtxqt\" (UniqueName: \"kubernetes.io/projected/e5ec16cb-0d08-44d7-8f1c-8965a5613854-kube-api-access-mtxqt\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196192 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196208 30278 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ec16cb-0d08-44d7-8f1c-8965a5613854-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196219 30278 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5ec16cb-0d08-44d7-8f1c-8965a5613854-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196233 30278 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196384 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-console-config" (OuterVolumeSpecName: "console-config") pod "bc445b25-803f-4668-9a96-d539108d2527" (UID: 
"bc445b25-803f-4668-9a96-d539108d2527"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196563 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc445b25-803f-4668-9a96-d539108d2527-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bc445b25-803f-4668-9a96-d539108d2527" (UID: "bc445b25-803f-4668-9a96-d539108d2527"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196713 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc445b25-803f-4668-9a96-d539108d2527-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bc445b25-803f-4668-9a96-d539108d2527" (UID: "bc445b25-803f-4668-9a96-d539108d2527"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196834 30278 scope.go:117] "RemoveContainer" containerID="3f98d79ffc4daa3f60484057dea724bfba2f41b2060e80bf35923e2ec901080d" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.196912 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc445b25-803f-4668-9a96-d539108d2527-kube-api-access-bbqtz" (OuterVolumeSpecName: "kube-api-access-bbqtz") pod "bc445b25-803f-4668-9a96-d539108d2527" (UID: "bc445b25-803f-4668-9a96-d539108d2527"). InnerVolumeSpecName "kube-api-access-bbqtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: E0318 18:07:18.198341 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f98d79ffc4daa3f60484057dea724bfba2f41b2060e80bf35923e2ec901080d\": container with ID starting with 3f98d79ffc4daa3f60484057dea724bfba2f41b2060e80bf35923e2ec901080d not found: ID does not exist" containerID="3f98d79ffc4daa3f60484057dea724bfba2f41b2060e80bf35923e2ec901080d" Mar 18 18:07:18.204757 master-0 kubenswrapper[30278]: I0318 18:07:18.198479 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f98d79ffc4daa3f60484057dea724bfba2f41b2060e80bf35923e2ec901080d"} err="failed to get container status \"3f98d79ffc4daa3f60484057dea724bfba2f41b2060e80bf35923e2ec901080d\": rpc error: code = NotFound desc = could not find container \"3f98d79ffc4daa3f60484057dea724bfba2f41b2060e80bf35923e2ec901080d\": container with ID starting with 3f98d79ffc4daa3f60484057dea724bfba2f41b2060e80bf35923e2ec901080d not found: ID does not exist" Mar 18 18:07:18.207667 master-0 kubenswrapper[30278]: I0318 18:07:18.207607 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"] Mar 18 18:07:18.217579 master-0 kubenswrapper[30278]: I0318 18:07:18.217401 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-d89d9c4d9-57l4t"] Mar 18 18:07:18.297344 master-0 kubenswrapper[30278]: I0318 18:07:18.297154 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 
18:07:18.297531 master-0 kubenswrapper[30278]: I0318 18:07:18.297344 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a85f9e61-015c-41d5-bb38-de74da6a46da-audit-dir\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.297531 master-0 kubenswrapper[30278]: I0318 18:07:18.297397 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-user-template-login\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.297600 master-0 kubenswrapper[30278]: I0318 18:07:18.297508 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a85f9e61-015c-41d5-bb38-de74da6a46da-audit-dir\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.297600 master-0 kubenswrapper[30278]: I0318 18:07:18.297546 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-service-ca\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.297910 master-0 kubenswrapper[30278]: I0318 18:07:18.297873 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.297910 master-0 kubenswrapper[30278]: I0318 18:07:18.297906 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq9ps\" (UniqueName: \"kubernetes.io/projected/a85f9e61-015c-41d5-bb38-de74da6a46da-kube-api-access-zq9ps\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.298767 master-0 kubenswrapper[30278]: I0318 18:07:18.298734 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a85f9e61-015c-41d5-bb38-de74da6a46da-audit-policies\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.298847 master-0 kubenswrapper[30278]: I0318 18:07:18.298801 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.298847 master-0 kubenswrapper[30278]: I0318 18:07:18.298836 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-user-template-error\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " 
pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.298935 master-0 kubenswrapper[30278]: I0318 18:07:18.298897 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-service-ca\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.299002 master-0 kubenswrapper[30278]: I0318 18:07:18.298905 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.299200 master-0 kubenswrapper[30278]: I0318 18:07:18.299169 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.299264 master-0 kubenswrapper[30278]: I0318 18:07:18.299239 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-session\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.299333 master-0 kubenswrapper[30278]: I0318 18:07:18.299308 30278 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-router-certs\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.299454 master-0 kubenswrapper[30278]: I0318 18:07:18.299435 30278 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bc445b25-803f-4668-9a96-d539108d2527-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.299521 master-0 kubenswrapper[30278]: I0318 18:07:18.299461 30278 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc445b25-803f-4668-9a96-d539108d2527-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.299521 master-0 kubenswrapper[30278]: I0318 18:07:18.299478 30278 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bc445b25-803f-4668-9a96-d539108d2527-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.299521 master-0 kubenswrapper[30278]: I0318 18:07:18.299493 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbqtz\" (UniqueName: \"kubernetes.io/projected/bc445b25-803f-4668-9a96-d539108d2527-kube-api-access-bbqtz\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:18.299738 master-0 kubenswrapper[30278]: I0318 18:07:18.299698 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a85f9e61-015c-41d5-bb38-de74da6a46da-audit-policies\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.299833 
master-0 kubenswrapper[30278]: I0318 18:07:18.299802 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.299955 master-0 kubenswrapper[30278]: I0318 18:07:18.299930 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.302569 master-0 kubenswrapper[30278]: I0318 18:07:18.302509 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-user-template-login\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.302789 master-0 kubenswrapper[30278]: I0318 18:07:18.302755 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-router-certs\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.303121 master-0 kubenswrapper[30278]: I0318 18:07:18.303067 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.303313 master-0 kubenswrapper[30278]: I0318 18:07:18.303245 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-session\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.303523 master-0 kubenswrapper[30278]: I0318 18:07:18.303488 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.303771 master-0 kubenswrapper[30278]: I0318 18:07:18.303728 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.306654 master-0 kubenswrapper[30278]: I0318 18:07:18.306614 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a85f9e61-015c-41d5-bb38-de74da6a46da-v4-0-config-user-template-error\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: 
\"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.319890 master-0 kubenswrapper[30278]: I0318 18:07:18.319845 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq9ps\" (UniqueName: \"kubernetes.io/projected/a85f9e61-015c-41d5-bb38-de74da6a46da-kube-api-access-zq9ps\") pod \"oauth-openshift-79cbc94fc7-tlmnv\" (UID: \"a85f9e61-015c-41d5-bb38-de74da6a46da\") " pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.399427 master-0 kubenswrapper[30278]: I0318 18:07:18.399328 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:18.482420 master-0 kubenswrapper[30278]: I0318 18:07:18.482373 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6b7657f69f-w666c"] Mar 18 18:07:18.487353 master-0 kubenswrapper[30278]: I0318 18:07:18.487310 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6b7657f69f-w666c"] Mar 18 18:07:18.840857 master-0 kubenswrapper[30278]: I0318 18:07:18.840682 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv"] Mar 18 18:07:18.848327 master-0 kubenswrapper[30278]: W0318 18:07:18.846575 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda85f9e61_015c_41d5_bb38_de74da6a46da.slice/crio-6a9298e24c15550d6e5125207e2784da84400e09ddabc2fae814adfc0438d87c WatchSource:0}: Error finding container 6a9298e24c15550d6e5125207e2784da84400e09ddabc2fae814adfc0438d87c: Status 404 returned error can't find the container with id 6a9298e24c15550d6e5125207e2784da84400e09ddabc2fae814adfc0438d87c Mar 18 18:07:19.074864 master-0 kubenswrapper[30278]: I0318 18:07:19.074775 30278 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="bc445b25-803f-4668-9a96-d539108d2527" path="/var/lib/kubelet/pods/bc445b25-803f-4668-9a96-d539108d2527/volumes" Mar 18 18:07:19.075978 master-0 kubenswrapper[30278]: I0318 18:07:19.075929 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5ec16cb-0d08-44d7-8f1c-8965a5613854" path="/var/lib/kubelet/pods/e5ec16cb-0d08-44d7-8f1c-8965a5613854/volumes" Mar 18 18:07:19.157669 master-0 kubenswrapper[30278]: I0318 18:07:19.157555 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" event={"ID":"a85f9e61-015c-41d5-bb38-de74da6a46da","Type":"ContainerStarted","Data":"6a9298e24c15550d6e5125207e2784da84400e09ddabc2fae814adfc0438d87c"} Mar 18 18:07:20.172534 master-0 kubenswrapper[30278]: I0318 18:07:20.172468 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" event={"ID":"a85f9e61-015c-41d5-bb38-de74da6a46da","Type":"ContainerStarted","Data":"596e2af648ef7d866cbe7e05a4b73e864896b814e31d81ac314c6d48f9a5f608"} Mar 18 18:07:20.176535 master-0 kubenswrapper[30278]: I0318 18:07:20.172911 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:20.184986 master-0 kubenswrapper[30278]: I0318 18:07:20.184927 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" Mar 18 18:07:20.208990 master-0 kubenswrapper[30278]: I0318 18:07:20.208902 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79cbc94fc7-tlmnv" podStartSLOduration=28.208882054 podStartE2EDuration="28.208882054s" podCreationTimestamp="2026-03-18 18:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 
18:07:20.20347469 +0000 UTC m=+409.370659325" watchObservedRunningTime="2026-03-18 18:07:20.208882054 +0000 UTC m=+409.376066639" Mar 18 18:07:24.327031 master-0 kubenswrapper[30278]: I0318 18:07:24.326949 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:07:24.332368 master-0 kubenswrapper[30278]: I0318 18:07:24.332308 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/49ae0fd5-b0ec-4b37-b441-4943f3b160d4-telemeter-client-tls\") pod \"telemeter-client-cf85db6cf-b9mbd\" (UID: \"49ae0fd5-b0ec-4b37-b441-4943f3b160d4\") " pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:07:24.553176 master-0 kubenswrapper[30278]: I0318 18:07:24.553054 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" Mar 18 18:07:24.927096 master-0 kubenswrapper[30278]: I0318 18:07:24.926440 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-cf85db6cf-b9mbd"] Mar 18 18:07:24.942810 master-0 kubenswrapper[30278]: W0318 18:07:24.942580 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49ae0fd5_b0ec_4b37_b441_4943f3b160d4.slice/crio-e8fc95f13e25203222169bf387fbb1010f7f46a9cd9ecf82e6453f3bcf8d9e92 WatchSource:0}: Error finding container e8fc95f13e25203222169bf387fbb1010f7f46a9cd9ecf82e6453f3bcf8d9e92: Status 404 returned error can't find the container with id e8fc95f13e25203222169bf387fbb1010f7f46a9cd9ecf82e6453f3bcf8d9e92 Mar 18 18:07:24.943897 master-0 kubenswrapper[30278]: I0318 18:07:24.943860 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 18:07:24.944394 master-0 kubenswrapper[30278]: I0318 18:07:24.944348 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="alertmanager" containerID="cri-o://54d5ec6af3880a2ea24f8fc641b0fdabd67a3d38b658ef8b46030a7fbdcb7542" gracePeriod=120 Mar 18 18:07:24.944692 master-0 kubenswrapper[30278]: I0318 18:07:24.944673 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="prom-label-proxy" containerID="cri-o://0b35e279dda5a722efe795c0143026c8b448b1734eaac9f3c72eac823353df90" gracePeriod=120 Mar 18 18:07:24.944813 master-0 kubenswrapper[30278]: I0318 18:07:24.944798 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" 
podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="kube-rbac-proxy-metric" containerID="cri-o://6c03f43b3340afa5c24d3ab2e54d55fa56552e844242ec0e6bb87ed344e23aed" gracePeriod=120 Mar 18 18:07:24.944916 master-0 kubenswrapper[30278]: I0318 18:07:24.944901 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="kube-rbac-proxy" containerID="cri-o://0b551827974ca1934d8f9a62505f47cc16f56f528ceb391855cad37846d46b67" gracePeriod=120 Mar 18 18:07:24.945026 master-0 kubenswrapper[30278]: I0318 18:07:24.945012 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="kube-rbac-proxy-web" containerID="cri-o://3695e8b7d907a4ecd98dae9c6375016787fd4cf2e9dee7c3967cd4f43aeacc9c" gracePeriod=120 Mar 18 18:07:24.945129 master-0 kubenswrapper[30278]: I0318 18:07:24.945113 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="config-reloader" containerID="cri-o://1151cf0b337961a5368835bec6f85275df0f5f5ad3f456f4b8617a0988d68ab0" gracePeriod=120 Mar 18 18:07:24.957939 master-0 kubenswrapper[30278]: I0318 18:07:24.957885 30278 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 18:07:25.224005 master-0 kubenswrapper[30278]: I0318 18:07:25.223859 30278 generic.go:334] "Generic (PLEG): container finished" podID="89b1dfdf-4633-45af-8abd-931a76eca960" containerID="0b35e279dda5a722efe795c0143026c8b448b1734eaac9f3c72eac823353df90" exitCode=0 Mar 18 18:07:25.224005 master-0 kubenswrapper[30278]: I0318 18:07:25.223908 30278 generic.go:334] "Generic (PLEG): container finished" podID="89b1dfdf-4633-45af-8abd-931a76eca960" 
containerID="6c03f43b3340afa5c24d3ab2e54d55fa56552e844242ec0e6bb87ed344e23aed" exitCode=0 Mar 18 18:07:25.224005 master-0 kubenswrapper[30278]: I0318 18:07:25.223921 30278 generic.go:334] "Generic (PLEG): container finished" podID="89b1dfdf-4633-45af-8abd-931a76eca960" containerID="0b551827974ca1934d8f9a62505f47cc16f56f528ceb391855cad37846d46b67" exitCode=0 Mar 18 18:07:25.224005 master-0 kubenswrapper[30278]: I0318 18:07:25.223934 30278 generic.go:334] "Generic (PLEG): container finished" podID="89b1dfdf-4633-45af-8abd-931a76eca960" containerID="1151cf0b337961a5368835bec6f85275df0f5f5ad3f456f4b8617a0988d68ab0" exitCode=0 Mar 18 18:07:25.224005 master-0 kubenswrapper[30278]: I0318 18:07:25.223943 30278 generic.go:334] "Generic (PLEG): container finished" podID="89b1dfdf-4633-45af-8abd-931a76eca960" containerID="54d5ec6af3880a2ea24f8fc641b0fdabd67a3d38b658ef8b46030a7fbdcb7542" exitCode=0 Mar 18 18:07:25.224460 master-0 kubenswrapper[30278]: I0318 18:07:25.224030 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89b1dfdf-4633-45af-8abd-931a76eca960","Type":"ContainerDied","Data":"0b35e279dda5a722efe795c0143026c8b448b1734eaac9f3c72eac823353df90"} Mar 18 18:07:25.224460 master-0 kubenswrapper[30278]: I0318 18:07:25.224172 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89b1dfdf-4633-45af-8abd-931a76eca960","Type":"ContainerDied","Data":"6c03f43b3340afa5c24d3ab2e54d55fa56552e844242ec0e6bb87ed344e23aed"} Mar 18 18:07:25.224460 master-0 kubenswrapper[30278]: I0318 18:07:25.224206 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89b1dfdf-4633-45af-8abd-931a76eca960","Type":"ContainerDied","Data":"0b551827974ca1934d8f9a62505f47cc16f56f528ceb391855cad37846d46b67"} Mar 18 18:07:25.224460 master-0 kubenswrapper[30278]: I0318 18:07:25.224230 30278 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89b1dfdf-4633-45af-8abd-931a76eca960","Type":"ContainerDied","Data":"1151cf0b337961a5368835bec6f85275df0f5f5ad3f456f4b8617a0988d68ab0"} Mar 18 18:07:25.224460 master-0 kubenswrapper[30278]: I0318 18:07:25.224254 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89b1dfdf-4633-45af-8abd-931a76eca960","Type":"ContainerDied","Data":"54d5ec6af3880a2ea24f8fc641b0fdabd67a3d38b658ef8b46030a7fbdcb7542"} Mar 18 18:07:25.226681 master-0 kubenswrapper[30278]: I0318 18:07:25.226628 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" event={"ID":"49ae0fd5-b0ec-4b37-b441-4943f3b160d4","Type":"ContainerStarted","Data":"e8fc95f13e25203222169bf387fbb1010f7f46a9cd9ecf82e6453f3bcf8d9e92"} Mar 18 18:07:26.245348 master-0 kubenswrapper[30278]: I0318 18:07:26.241543 30278 generic.go:334] "Generic (PLEG): container finished" podID="89b1dfdf-4633-45af-8abd-931a76eca960" containerID="3695e8b7d907a4ecd98dae9c6375016787fd4cf2e9dee7c3967cd4f43aeacc9c" exitCode=0 Mar 18 18:07:26.245348 master-0 kubenswrapper[30278]: I0318 18:07:26.241595 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89b1dfdf-4633-45af-8abd-931a76eca960","Type":"ContainerDied","Data":"3695e8b7d907a4ecd98dae9c6375016787fd4cf2e9dee7c3967cd4f43aeacc9c"} Mar 18 18:07:26.428666 master-0 kubenswrapper[30278]: I0318 18:07:26.428628 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:26.578697 master-0 kubenswrapper[30278]: I0318 18:07:26.578615 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-web-config\") pod \"89b1dfdf-4633-45af-8abd-931a76eca960\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " Mar 18 18:07:26.578938 master-0 kubenswrapper[30278]: I0318 18:07:26.578922 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy\") pod \"89b1dfdf-4633-45af-8abd-931a76eca960\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " Mar 18 18:07:26.579037 master-0 kubenswrapper[30278]: I0318 18:07:26.579023 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-main-tls\") pod \"89b1dfdf-4633-45af-8abd-931a76eca960\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " Mar 18 18:07:26.579154 master-0 kubenswrapper[30278]: I0318 18:07:26.579142 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle\") pod \"89b1dfdf-4633-45af-8abd-931a76eca960\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " Mar 18 18:07:26.579318 master-0 kubenswrapper[30278]: I0318 18:07:26.579303 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/89b1dfdf-4633-45af-8abd-931a76eca960-tls-assets\") pod \"89b1dfdf-4633-45af-8abd-931a76eca960\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " Mar 18 
18:07:26.579416 master-0 kubenswrapper[30278]: I0318 18:07:26.579404 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-metrics-client-ca\") pod \"89b1dfdf-4633-45af-8abd-931a76eca960\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " Mar 18 18:07:26.579491 master-0 kubenswrapper[30278]: I0318 18:07:26.579479 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy-web\") pod \"89b1dfdf-4633-45af-8abd-931a76eca960\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " Mar 18 18:07:26.579567 master-0 kubenswrapper[30278]: I0318 18:07:26.579553 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghl7k\" (UniqueName: \"kubernetes.io/projected/89b1dfdf-4633-45af-8abd-931a76eca960-kube-api-access-ghl7k\") pod \"89b1dfdf-4633-45af-8abd-931a76eca960\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " Mar 18 18:07:26.579638 master-0 kubenswrapper[30278]: I0318 18:07:26.579626 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy-metric\") pod \"89b1dfdf-4633-45af-8abd-931a76eca960\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " Mar 18 18:07:26.579739 master-0 kubenswrapper[30278]: I0318 18:07:26.579726 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/89b1dfdf-4633-45af-8abd-931a76eca960-config-out\") pod \"89b1dfdf-4633-45af-8abd-931a76eca960\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " Mar 18 18:07:26.579976 master-0 kubenswrapper[30278]: 
I0318 18:07:26.579814 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-main-db\") pod \"89b1dfdf-4633-45af-8abd-931a76eca960\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " Mar 18 18:07:26.580261 master-0 kubenswrapper[30278]: I0318 18:07:26.580242 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-config-volume\") pod \"89b1dfdf-4633-45af-8abd-931a76eca960\" (UID: \"89b1dfdf-4633-45af-8abd-931a76eca960\") " Mar 18 18:07:26.593405 master-0 kubenswrapper[30278]: I0318 18:07:26.582013 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "89b1dfdf-4633-45af-8abd-931a76eca960" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:07:26.593405 master-0 kubenswrapper[30278]: I0318 18:07:26.584866 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89b1dfdf-4633-45af-8abd-931a76eca960-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "89b1dfdf-4633-45af-8abd-931a76eca960" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:07:26.593405 master-0 kubenswrapper[30278]: I0318 18:07:26.585157 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "89b1dfdf-4633-45af-8abd-931a76eca960" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960"). 
InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:26.593405 master-0 kubenswrapper[30278]: I0318 18:07:26.585487 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89b1dfdf-4633-45af-8abd-931a76eca960-kube-api-access-ghl7k" (OuterVolumeSpecName: "kube-api-access-ghl7k") pod "89b1dfdf-4633-45af-8abd-931a76eca960" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960"). InnerVolumeSpecName "kube-api-access-ghl7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:07:26.593405 master-0 kubenswrapper[30278]: I0318 18:07:26.586014 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89b1dfdf-4633-45af-8abd-931a76eca960-config-out" (OuterVolumeSpecName: "config-out") pod "89b1dfdf-4633-45af-8abd-931a76eca960" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:07:26.593405 master-0 kubenswrapper[30278]: I0318 18:07:26.592232 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "89b1dfdf-4633-45af-8abd-931a76eca960" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:07:26.595240 master-0 kubenswrapper[30278]: I0318 18:07:26.594858 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "89b1dfdf-4633-45af-8abd-931a76eca960" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:07:26.596539 master-0 kubenswrapper[30278]: I0318 18:07:26.596305 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "89b1dfdf-4633-45af-8abd-931a76eca960" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:26.596719 master-0 kubenswrapper[30278]: I0318 18:07:26.596692 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "89b1dfdf-4633-45af-8abd-931a76eca960" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:26.602033 master-0 kubenswrapper[30278]: I0318 18:07:26.601930 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-config-volume" (OuterVolumeSpecName: "config-volume") pod "89b1dfdf-4633-45af-8abd-931a76eca960" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:26.604480 master-0 kubenswrapper[30278]: I0318 18:07:26.604383 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "89b1dfdf-4633-45af-8abd-931a76eca960" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960"). 
InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:26.640909 master-0 kubenswrapper[30278]: I0318 18:07:26.640861 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-web-config" (OuterVolumeSpecName: "web-config") pod "89b1dfdf-4633-45af-8abd-931a76eca960" (UID: "89b1dfdf-4633-45af-8abd-931a76eca960"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:07:26.682010 master-0 kubenswrapper[30278]: I0318 18:07:26.681957 30278 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/89b1dfdf-4633-45af-8abd-931a76eca960-tls-assets\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:26.682010 master-0 kubenswrapper[30278]: I0318 18:07:26.682000 30278 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:26.682165 master-0 kubenswrapper[30278]: I0318 18:07:26.682045 30278 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:26.682165 master-0 kubenswrapper[30278]: I0318 18:07:26.682060 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghl7k\" (UniqueName: \"kubernetes.io/projected/89b1dfdf-4633-45af-8abd-931a76eca960-kube-api-access-ghl7k\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:26.682165 master-0 kubenswrapper[30278]: I0318 18:07:26.682077 30278 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy-metric\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:26.682165 master-0 kubenswrapper[30278]: I0318 18:07:26.682090 30278 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/89b1dfdf-4633-45af-8abd-931a76eca960-config-out\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:26.682165 master-0 kubenswrapper[30278]: I0318 18:07:26.682103 30278 reconciler_common.go:293] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-main-db\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:26.682165 master-0 kubenswrapper[30278]: I0318 18:07:26.682115 30278 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-config-volume\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:26.682165 master-0 kubenswrapper[30278]: I0318 18:07:26.682124 30278 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-web-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:26.682165 master-0 kubenswrapper[30278]: I0318 18:07:26.682135 30278 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:26.682165 master-0 kubenswrapper[30278]: I0318 18:07:26.682149 30278 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/89b1dfdf-4633-45af-8abd-931a76eca960-secret-alertmanager-main-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:26.682165 master-0 kubenswrapper[30278]: I0318 18:07:26.682162 30278 
reconciler_common.go:293] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b1dfdf-4633-45af-8abd-931a76eca960-alertmanager-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:07:27.270025 master-0 kubenswrapper[30278]: I0318 18:07:27.269958 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"89b1dfdf-4633-45af-8abd-931a76eca960","Type":"ContainerDied","Data":"6aed5aa23422f65dac7bda57b71903b3185d73e0dd8da2720937a75260d98b26"} Mar 18 18:07:27.270025 master-0 kubenswrapper[30278]: I0318 18:07:27.270030 30278 scope.go:117] "RemoveContainer" containerID="0b35e279dda5a722efe795c0143026c8b448b1734eaac9f3c72eac823353df90" Mar 18 18:07:27.270654 master-0 kubenswrapper[30278]: I0318 18:07:27.270230 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.310556 master-0 kubenswrapper[30278]: I0318 18:07:27.310483 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 18:07:27.317616 master-0 kubenswrapper[30278]: I0318 18:07:27.317538 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 18:07:27.359502 master-0 kubenswrapper[30278]: I0318 18:07:27.359358 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 18:07:27.359816 master-0 kubenswrapper[30278]: E0318 18:07:27.359785 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="init-config-reloader" Mar 18 18:07:27.359816 master-0 kubenswrapper[30278]: I0318 18:07:27.359811 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="init-config-reloader" Mar 18 18:07:27.359912 master-0 kubenswrapper[30278]: E0318 
18:07:27.359839 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="alertmanager" Mar 18 18:07:27.359912 master-0 kubenswrapper[30278]: I0318 18:07:27.359848 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="alertmanager" Mar 18 18:07:27.359912 master-0 kubenswrapper[30278]: E0318 18:07:27.359879 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="kube-rbac-proxy" Mar 18 18:07:27.359912 master-0 kubenswrapper[30278]: I0318 18:07:27.359891 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="kube-rbac-proxy" Mar 18 18:07:27.359912 master-0 kubenswrapper[30278]: E0318 18:07:27.359906 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="kube-rbac-proxy-metric" Mar 18 18:07:27.359912 master-0 kubenswrapper[30278]: I0318 18:07:27.359914 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="kube-rbac-proxy-metric" Mar 18 18:07:27.360136 master-0 kubenswrapper[30278]: E0318 18:07:27.359926 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="prom-label-proxy" Mar 18 18:07:27.360136 master-0 kubenswrapper[30278]: I0318 18:07:27.359934 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="prom-label-proxy" Mar 18 18:07:27.360136 master-0 kubenswrapper[30278]: E0318 18:07:27.359946 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc445b25-803f-4668-9a96-d539108d2527" containerName="console" Mar 18 18:07:27.360136 master-0 kubenswrapper[30278]: I0318 18:07:27.359953 30278 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bc445b25-803f-4668-9a96-d539108d2527" containerName="console" Mar 18 18:07:27.360136 master-0 kubenswrapper[30278]: E0318 18:07:27.359973 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="kube-rbac-proxy-web" Mar 18 18:07:27.360136 master-0 kubenswrapper[30278]: I0318 18:07:27.359982 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="kube-rbac-proxy-web" Mar 18 18:07:27.360136 master-0 kubenswrapper[30278]: E0318 18:07:27.360000 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="config-reloader" Mar 18 18:07:27.360136 master-0 kubenswrapper[30278]: I0318 18:07:27.360010 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="config-reloader" Mar 18 18:07:27.360611 master-0 kubenswrapper[30278]: I0318 18:07:27.360190 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="alertmanager" Mar 18 18:07:27.360611 master-0 kubenswrapper[30278]: I0318 18:07:27.360213 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="kube-rbac-proxy-web" Mar 18 18:07:27.360611 master-0 kubenswrapper[30278]: I0318 18:07:27.360228 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="prom-label-proxy" Mar 18 18:07:27.360611 master-0 kubenswrapper[30278]: I0318 18:07:27.360256 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc445b25-803f-4668-9a96-d539108d2527" containerName="console" Mar 18 18:07:27.360611 master-0 kubenswrapper[30278]: I0318 18:07:27.360285 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" 
containerName="kube-rbac-proxy" Mar 18 18:07:27.360611 master-0 kubenswrapper[30278]: I0318 18:07:27.360313 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="config-reloader" Mar 18 18:07:27.360611 master-0 kubenswrapper[30278]: I0318 18:07:27.360324 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" containerName="kube-rbac-proxy-metric" Mar 18 18:07:27.363151 master-0 kubenswrapper[30278]: I0318 18:07:27.363123 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.367586 master-0 kubenswrapper[30278]: I0318 18:07:27.367130 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 18:07:27.368064 master-0 kubenswrapper[30278]: I0318 18:07:27.368029 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-2pg6x" Mar 18 18:07:27.368347 master-0 kubenswrapper[30278]: I0318 18:07:27.368328 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 18 18:07:27.368600 master-0 kubenswrapper[30278]: I0318 18:07:27.368575 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 18 18:07:27.368752 master-0 kubenswrapper[30278]: I0318 18:07:27.368737 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 18 18:07:27.368985 master-0 kubenswrapper[30278]: I0318 18:07:27.368969 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 18 18:07:27.371205 master-0 kubenswrapper[30278]: I0318 18:07:27.371186 30278 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"alertmanager-main-tls" Mar 18 18:07:27.371384 master-0 kubenswrapper[30278]: I0318 18:07:27.371366 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 18 18:07:27.371548 master-0 kubenswrapper[30278]: I0318 18:07:27.371520 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 18 18:07:27.377504 master-0 kubenswrapper[30278]: I0318 18:07:27.377465 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 18 18:07:27.502009 master-0 kubenswrapper[30278]: I0318 18:07:27.501944 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.502009 master-0 kubenswrapper[30278]: I0318 18:07:27.502010 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.502258 master-0 kubenswrapper[30278]: I0318 18:07:27.502031 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.502258 master-0 kubenswrapper[30278]: I0318 18:07:27.502062 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.502258 master-0 kubenswrapper[30278]: I0318 18:07:27.502094 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-config-volume\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.502258 master-0 kubenswrapper[30278]: I0318 18:07:27.502115 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.502258 master-0 kubenswrapper[30278]: I0318 18:07:27.502140 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-config-out\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.502258 master-0 kubenswrapper[30278]: I0318 18:07:27.502159 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-tls-assets\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.502258 master-0 kubenswrapper[30278]: I0318 18:07:27.502189 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-web-config\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.502258 master-0 kubenswrapper[30278]: I0318 18:07:27.502242 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.502558 master-0 kubenswrapper[30278]: I0318 18:07:27.502293 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.502558 master-0 kubenswrapper[30278]: I0318 18:07:27.502315 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj9m7\" (UniqueName: \"kubernetes.io/projected/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-kube-api-access-bj9m7\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.505420 master-0 kubenswrapper[30278]: I0318 18:07:27.505373 30278 scope.go:117] "RemoveContainer" 
containerID="6c03f43b3340afa5c24d3ab2e54d55fa56552e844242ec0e6bb87ed344e23aed" Mar 18 18:07:27.537955 master-0 kubenswrapper[30278]: I0318 18:07:27.537911 30278 scope.go:117] "RemoveContainer" containerID="0b551827974ca1934d8f9a62505f47cc16f56f528ceb391855cad37846d46b67" Mar 18 18:07:27.559453 master-0 kubenswrapper[30278]: I0318 18:07:27.559383 30278 scope.go:117] "RemoveContainer" containerID="3695e8b7d907a4ecd98dae9c6375016787fd4cf2e9dee7c3967cd4f43aeacc9c" Mar 18 18:07:27.586583 master-0 kubenswrapper[30278]: I0318 18:07:27.586537 30278 scope.go:117] "RemoveContainer" containerID="1151cf0b337961a5368835bec6f85275df0f5f5ad3f456f4b8617a0988d68ab0" Mar 18 18:07:27.603960 master-0 kubenswrapper[30278]: I0318 18:07:27.603901 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-config-out\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.604384 master-0 kubenswrapper[30278]: I0318 18:07:27.604352 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-tls-assets\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.604541 master-0 kubenswrapper[30278]: I0318 18:07:27.604523 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-web-config\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.604678 master-0 kubenswrapper[30278]: I0318 18:07:27.604658 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.604816 master-0 kubenswrapper[30278]: I0318 18:07:27.604798 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.604921 master-0 kubenswrapper[30278]: I0318 18:07:27.604904 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj9m7\" (UniqueName: \"kubernetes.io/projected/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-kube-api-access-bj9m7\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.605047 master-0 kubenswrapper[30278]: I0318 18:07:27.605027 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.605165 master-0 kubenswrapper[30278]: I0318 18:07:27.605147 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.605336 master-0 kubenswrapper[30278]: I0318 18:07:27.605255 30278 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.606354 master-0 kubenswrapper[30278]: I0318 18:07:27.606309 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.606505 master-0 kubenswrapper[30278]: I0318 18:07:27.606478 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-config-volume\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.606676 master-0 kubenswrapper[30278]: I0318 18:07:27.606616 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.607221 master-0 kubenswrapper[30278]: I0318 18:07:27.607122 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.607539 master-0 kubenswrapper[30278]: I0318 18:07:27.607477 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.609840 master-0 kubenswrapper[30278]: I0318 18:07:27.609545 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.610958 master-0 kubenswrapper[30278]: I0318 18:07:27.610569 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-config-out\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.611651 master-0 kubenswrapper[30278]: I0318 18:07:27.611265 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-web-config\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.611651 master-0 kubenswrapper[30278]: I0318 18:07:27.611426 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-tls-assets\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" 
Mar 18 18:07:27.611844 master-0 kubenswrapper[30278]: I0318 18:07:27.611798 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.611923 master-0 kubenswrapper[30278]: I0318 18:07:27.611875 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.611923 master-0 kubenswrapper[30278]: I0318 18:07:27.611903 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-config-volume\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.616157 master-0 kubenswrapper[30278]: I0318 18:07:27.613931 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.623163 master-0 kubenswrapper[30278]: I0318 18:07:27.622829 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-secret-alertmanager-kube-rbac-proxy-metric\") pod 
\"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.623310 master-0 kubenswrapper[30278]: I0318 18:07:27.623245 30278 scope.go:117] "RemoveContainer" containerID="54d5ec6af3880a2ea24f8fc641b0fdabd67a3d38b658ef8b46030a7fbdcb7542" Mar 18 18:07:27.624942 master-0 kubenswrapper[30278]: I0318 18:07:27.624888 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj9m7\" (UniqueName: \"kubernetes.io/projected/055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0-kube-api-access-bj9m7\") pod \"alertmanager-main-0\" (UID: \"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:27.696336 master-0 kubenswrapper[30278]: I0318 18:07:27.696225 30278 scope.go:117] "RemoveContainer" containerID="542b8a460709182e802cafd712d98a072c621022a5720b144290b9d16fc6737d" Mar 18 18:07:27.719199 master-0 kubenswrapper[30278]: I0318 18:07:27.719144 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 18:07:28.204622 master-0 kubenswrapper[30278]: I0318 18:07:28.204379 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 18 18:07:28.214657 master-0 kubenswrapper[30278]: W0318 18:07:28.214553 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod055b8a84_fa30_4cdd_b5c8_eb9bbf7312b0.slice/crio-d1045e3c3302d272aa25c39260cc62ed378cd17b106d79d380736edf75f21ae6 WatchSource:0}: Error finding container d1045e3c3302d272aa25c39260cc62ed378cd17b106d79d380736edf75f21ae6: Status 404 returned error can't find the container with id d1045e3c3302d272aa25c39260cc62ed378cd17b106d79d380736edf75f21ae6 Mar 18 18:07:28.302762 master-0 kubenswrapper[30278]: I0318 18:07:28.302705 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" event={"ID":"49ae0fd5-b0ec-4b37-b441-4943f3b160d4","Type":"ContainerStarted","Data":"65ba0ceae20e74f794d99d3a1d98f101a9e7b3a98e5fdedfb3cc8fedbeedd1fb"} Mar 18 18:07:28.302762 master-0 kubenswrapper[30278]: I0318 18:07:28.302758 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" event={"ID":"49ae0fd5-b0ec-4b37-b441-4943f3b160d4","Type":"ContainerStarted","Data":"8555e20990b716e1707d0553dde6913a27e7370a15121b6edc2108863d733d81"} Mar 18 18:07:28.303243 master-0 kubenswrapper[30278]: I0318 18:07:28.302771 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" event={"ID":"49ae0fd5-b0ec-4b37-b441-4943f3b160d4","Type":"ContainerStarted","Data":"969ddc73a8e646ccbbe30a2934e828977014a74f77a397d60344997c56cb04cd"} Mar 18 18:07:28.306150 master-0 kubenswrapper[30278]: I0318 18:07:28.306086 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0","Type":"ContainerStarted","Data":"d1045e3c3302d272aa25c39260cc62ed378cd17b106d79d380736edf75f21ae6"} Mar 18 18:07:28.330561 master-0 kubenswrapper[30278]: I0318 18:07:28.330463 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-cf85db6cf-b9mbd" podStartSLOduration=33.727126555 podStartE2EDuration="36.330435978s" podCreationTimestamp="2026-03-18 18:06:52 +0000 UTC" firstStartedPulling="2026-03-18 18:07:24.957784091 +0000 UTC m=+414.124968696" lastFinishedPulling="2026-03-18 18:07:27.561093524 +0000 UTC m=+416.728278119" observedRunningTime="2026-03-18 18:07:28.329059672 +0000 UTC m=+417.496244267" watchObservedRunningTime="2026-03-18 18:07:28.330435978 +0000 UTC m=+417.497620603" Mar 18 18:07:29.068543 master-0 kubenswrapper[30278]: I0318 18:07:29.068433 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89b1dfdf-4633-45af-8abd-931a76eca960" path="/var/lib/kubelet/pods/89b1dfdf-4633-45af-8abd-931a76eca960/volumes" Mar 18 18:07:29.314888 master-0 kubenswrapper[30278]: I0318 18:07:29.314829 30278 generic.go:334] "Generic (PLEG): container finished" podID="055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0" containerID="0d953950d858da261e55834e9c33742c3f5a3b844c5c80e2ec1c8502db754ca5" exitCode=0 Mar 18 18:07:29.315536 master-0 kubenswrapper[30278]: I0318 18:07:29.314886 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0","Type":"ContainerDied","Data":"0d953950d858da261e55834e9c33742c3f5a3b844c5c80e2ec1c8502db754ca5"} Mar 18 18:07:30.335384 master-0 kubenswrapper[30278]: I0318 18:07:30.335298 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0","Type":"ContainerStarted","Data":"4e345e15ea10d2a4c974e99fb3fc2b3e690aca9e292a7209b40971c0c4b24881"} Mar 18 18:07:30.335384 master-0 kubenswrapper[30278]: I0318 18:07:30.335375 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0","Type":"ContainerStarted","Data":"403054d7431918eb58c2cb2758ec890ff7d3a06d3cc3d952647d8d33057496d0"} Mar 18 18:07:30.335384 master-0 kubenswrapper[30278]: I0318 18:07:30.335397 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0","Type":"ContainerStarted","Data":"f282bd8c66e8fc83c669b025dffe5548ee86431486b8e32caebf16c3ecedc899"} Mar 18 18:07:30.335384 master-0 kubenswrapper[30278]: I0318 18:07:30.335415 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0","Type":"ContainerStarted","Data":"3793c4c0f351ab3f5a694a172688243f261aaf88f2031484ef2aef18bc56a11b"} Mar 18 18:07:30.336809 master-0 kubenswrapper[30278]: I0318 18:07:30.335432 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0","Type":"ContainerStarted","Data":"3a0b2ebe66dd77943f7570e27229592c3a3d54cbdbef3416d842330747012f6a"} Mar 18 18:07:30.336809 master-0 kubenswrapper[30278]: I0318 18:07:30.335451 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0","Type":"ContainerStarted","Data":"5744d5de6284a2c95a9abd289bc3fd24d3366b1c576221229ccdebbe91d6aa89"} Mar 18 18:07:30.386402 master-0 kubenswrapper[30278]: I0318 18:07:30.386183 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.386145815 podStartE2EDuration="3.386145815s" podCreationTimestamp="2026-03-18 18:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:07:30.381002067 +0000 UTC m=+419.548186692" watchObservedRunningTime="2026-03-18 18:07:30.386145815 +0000 UTC m=+419.553330470" Mar 18 18:07:36.722505 master-0 kubenswrapper[30278]: I0318 18:07:36.722410 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5467bbc6b5-q6qdv"] Mar 18 18:07:36.723496 master-0 kubenswrapper[30278]: I0318 18:07:36.723460 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.749250 master-0 kubenswrapper[30278]: I0318 18:07:36.749185 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5467bbc6b5-q6qdv"] Mar 18 18:07:36.889202 master-0 kubenswrapper[30278]: I0318 18:07:36.889097 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b3903332-0da7-4cb1-95fa-a746750be09f-console-oauth-config\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.889561 master-0 kubenswrapper[30278]: I0318 18:07:36.889487 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-oauth-serving-cert\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.889868 master-0 kubenswrapper[30278]: I0318 18:07:36.889808 30278 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slqhs\" (UniqueName: \"kubernetes.io/projected/b3903332-0da7-4cb1-95fa-a746750be09f-kube-api-access-slqhs\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.889985 master-0 kubenswrapper[30278]: I0318 18:07:36.889951 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-service-ca\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.890049 master-0 kubenswrapper[30278]: I0318 18:07:36.890008 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3903332-0da7-4cb1-95fa-a746750be09f-console-serving-cert\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.890270 master-0 kubenswrapper[30278]: I0318 18:07:36.890228 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-console-config\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.890403 master-0 kubenswrapper[30278]: I0318 18:07:36.890373 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-trusted-ca-bundle\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " 
pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.991545 master-0 kubenswrapper[30278]: I0318 18:07:36.991340 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b3903332-0da7-4cb1-95fa-a746750be09f-console-oauth-config\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.991545 master-0 kubenswrapper[30278]: I0318 18:07:36.991434 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-oauth-serving-cert\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.991545 master-0 kubenswrapper[30278]: I0318 18:07:36.991518 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slqhs\" (UniqueName: \"kubernetes.io/projected/b3903332-0da7-4cb1-95fa-a746750be09f-kube-api-access-slqhs\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.991990 master-0 kubenswrapper[30278]: I0318 18:07:36.991564 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-service-ca\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.991990 master-0 kubenswrapper[30278]: I0318 18:07:36.991591 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3903332-0da7-4cb1-95fa-a746750be09f-console-serving-cert\") pod \"console-5467bbc6b5-q6qdv\" 
(UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.991990 master-0 kubenswrapper[30278]: I0318 18:07:36.991651 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-console-config\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.991990 master-0 kubenswrapper[30278]: I0318 18:07:36.991688 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-trusted-ca-bundle\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.994196 master-0 kubenswrapper[30278]: I0318 18:07:36.992447 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-oauth-serving-cert\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.994196 master-0 kubenswrapper[30278]: I0318 18:07:36.993049 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-trusted-ca-bundle\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.994196 master-0 kubenswrapper[30278]: I0318 18:07:36.993771 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-console-config\") pod 
\"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.994196 master-0 kubenswrapper[30278]: I0318 18:07:36.993968 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-service-ca\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.994935 master-0 kubenswrapper[30278]: I0318 18:07:36.994885 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b3903332-0da7-4cb1-95fa-a746750be09f-console-oauth-config\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:36.999895 master-0 kubenswrapper[30278]: I0318 18:07:36.999840 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3903332-0da7-4cb1-95fa-a746750be09f-console-serving-cert\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:37.012309 master-0 kubenswrapper[30278]: I0318 18:07:37.012201 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slqhs\" (UniqueName: \"kubernetes.io/projected/b3903332-0da7-4cb1-95fa-a746750be09f-kube-api-access-slqhs\") pod \"console-5467bbc6b5-q6qdv\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:37.046501 master-0 kubenswrapper[30278]: I0318 18:07:37.046352 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:37.493285 master-0 kubenswrapper[30278]: W0318 18:07:37.493204 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3903332_0da7_4cb1_95fa_a746750be09f.slice/crio-1b32bc9c9ecedf522fba277fdcf3ab367e418ab158b06f02f2b86bbb3537ed74 WatchSource:0}: Error finding container 1b32bc9c9ecedf522fba277fdcf3ab367e418ab158b06f02f2b86bbb3537ed74: Status 404 returned error can't find the container with id 1b32bc9c9ecedf522fba277fdcf3ab367e418ab158b06f02f2b86bbb3537ed74 Mar 18 18:07:37.496651 master-0 kubenswrapper[30278]: I0318 18:07:37.496624 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5467bbc6b5-q6qdv"] Mar 18 18:07:38.417707 master-0 kubenswrapper[30278]: I0318 18:07:38.417655 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5467bbc6b5-q6qdv" event={"ID":"b3903332-0da7-4cb1-95fa-a746750be09f","Type":"ContainerStarted","Data":"d46dbcc2484d2a1d9230ebfa72dccfcdf7a6561f69733f3f356b26d39c43b618"} Mar 18 18:07:38.418339 master-0 kubenswrapper[30278]: I0318 18:07:38.418318 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5467bbc6b5-q6qdv" event={"ID":"b3903332-0da7-4cb1-95fa-a746750be09f","Type":"ContainerStarted","Data":"1b32bc9c9ecedf522fba277fdcf3ab367e418ab158b06f02f2b86bbb3537ed74"} Mar 18 18:07:38.456309 master-0 kubenswrapper[30278]: I0318 18:07:38.454032 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5467bbc6b5-q6qdv" podStartSLOduration=2.45400649 podStartE2EDuration="2.45400649s" podCreationTimestamp="2026-03-18 18:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:07:38.451001539 +0000 UTC m=+427.618186164" 
watchObservedRunningTime="2026-03-18 18:07:38.45400649 +0000 UTC m=+427.621191085" Mar 18 18:07:44.918026 master-0 kubenswrapper[30278]: I0318 18:07:44.917935 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5467bbc6b5-q6qdv"] Mar 18 18:07:44.961792 master-0 kubenswrapper[30278]: I0318 18:07:44.961714 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-b79998fb9-lngkn"] Mar 18 18:07:44.963054 master-0 kubenswrapper[30278]: I0318 18:07:44.962996 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:44.978741 master-0 kubenswrapper[30278]: I0318 18:07:44.978662 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b79998fb9-lngkn"] Mar 18 18:07:45.052864 master-0 kubenswrapper[30278]: I0318 18:07:45.052744 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/318e6e33-711c-4ca1-940b-fc28e25e673f-console-serving-cert\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.053075 master-0 kubenswrapper[30278]: I0318 18:07:45.052879 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-service-ca\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.053075 master-0 kubenswrapper[30278]: I0318 18:07:45.052948 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-trusted-ca-bundle\") pod \"console-b79998fb9-lngkn\" (UID: 
\"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.053075 master-0 kubenswrapper[30278]: I0318 18:07:45.053013 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/318e6e33-711c-4ca1-940b-fc28e25e673f-console-oauth-config\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.053075 master-0 kubenswrapper[30278]: I0318 18:07:45.053051 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lc2z\" (UniqueName: \"kubernetes.io/projected/318e6e33-711c-4ca1-940b-fc28e25e673f-kube-api-access-8lc2z\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.053319 master-0 kubenswrapper[30278]: I0318 18:07:45.053204 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-console-config\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.053366 master-0 kubenswrapper[30278]: I0318 18:07:45.053336 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-oauth-serving-cert\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.155087 master-0 kubenswrapper[30278]: I0318 18:07:45.155029 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-service-ca\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.155318 master-0 kubenswrapper[30278]: I0318 18:07:45.155137 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-trusted-ca-bundle\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.155504 master-0 kubenswrapper[30278]: I0318 18:07:45.155423 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/318e6e33-711c-4ca1-940b-fc28e25e673f-console-oauth-config\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.155678 master-0 kubenswrapper[30278]: I0318 18:07:45.155638 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lc2z\" (UniqueName: \"kubernetes.io/projected/318e6e33-711c-4ca1-940b-fc28e25e673f-kube-api-access-8lc2z\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.155870 master-0 kubenswrapper[30278]: I0318 18:07:45.155831 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-console-config\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.155945 master-0 kubenswrapper[30278]: I0318 18:07:45.155922 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-oauth-serving-cert\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.156313 master-0 kubenswrapper[30278]: I0318 18:07:45.156258 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/318e6e33-711c-4ca1-940b-fc28e25e673f-console-serving-cert\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.157069 master-0 kubenswrapper[30278]: I0318 18:07:45.157027 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-console-config\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.157374 master-0 kubenswrapper[30278]: I0318 18:07:45.157318 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-service-ca\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.157812 master-0 kubenswrapper[30278]: I0318 18:07:45.157763 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-oauth-serving-cert\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.159725 master-0 kubenswrapper[30278]: I0318 18:07:45.159615 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-trusted-ca-bundle\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.160923 master-0 kubenswrapper[30278]: I0318 18:07:45.160869 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/318e6e33-711c-4ca1-940b-fc28e25e673f-console-oauth-config\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.166011 master-0 kubenswrapper[30278]: I0318 18:07:45.165903 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/318e6e33-711c-4ca1-940b-fc28e25e673f-console-serving-cert\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.181339 master-0 kubenswrapper[30278]: I0318 18:07:45.181200 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lc2z\" (UniqueName: \"kubernetes.io/projected/318e6e33-711c-4ca1-940b-fc28e25e673f-kube-api-access-8lc2z\") pod \"console-b79998fb9-lngkn\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") " pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.287883 master-0 kubenswrapper[30278]: I0318 18:07:45.287804 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:45.829706 master-0 kubenswrapper[30278]: I0318 18:07:45.827624 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b79998fb9-lngkn"] Mar 18 18:07:45.835384 master-0 kubenswrapper[30278]: W0318 18:07:45.835179 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod318e6e33_711c_4ca1_940b_fc28e25e673f.slice/crio-7580a92c6acb7f5b933bf9f20869d51243254be7c972a6a9e4784a058dad75ad WatchSource:0}: Error finding container 7580a92c6acb7f5b933bf9f20869d51243254be7c972a6a9e4784a058dad75ad: Status 404 returned error can't find the container with id 7580a92c6acb7f5b933bf9f20869d51243254be7c972a6a9e4784a058dad75ad Mar 18 18:07:46.489528 master-0 kubenswrapper[30278]: I0318 18:07:46.489458 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b79998fb9-lngkn" event={"ID":"318e6e33-711c-4ca1-940b-fc28e25e673f","Type":"ContainerStarted","Data":"09ec2e426e7c782142cb59f9c9f442ab8b0a94d7277b40e8153d584ab701f393"} Mar 18 18:07:46.489528 master-0 kubenswrapper[30278]: I0318 18:07:46.489524 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b79998fb9-lngkn" event={"ID":"318e6e33-711c-4ca1-940b-fc28e25e673f","Type":"ContainerStarted","Data":"7580a92c6acb7f5b933bf9f20869d51243254be7c972a6a9e4784a058dad75ad"} Mar 18 18:07:46.513462 master-0 kubenswrapper[30278]: I0318 18:07:46.513334 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-b79998fb9-lngkn" podStartSLOduration=2.513308836 podStartE2EDuration="2.513308836s" podCreationTimestamp="2026-03-18 18:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:07:46.510666735 +0000 UTC m=+435.677851370" 
watchObservedRunningTime="2026-03-18 18:07:46.513308836 +0000 UTC m=+435.680493441" Mar 18 18:07:46.919527 master-0 kubenswrapper[30278]: I0318 18:07:46.919443 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-b79998fb9-lngkn"] Mar 18 18:07:46.934636 master-0 kubenswrapper[30278]: I0318 18:07:46.934561 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69cdb7b474-rkjr2"] Mar 18 18:07:46.935691 master-0 kubenswrapper[30278]: I0318 18:07:46.935513 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:46.953986 master-0 kubenswrapper[30278]: I0318 18:07:46.953618 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69cdb7b474-rkjr2"] Mar 18 18:07:46.996523 master-0 kubenswrapper[30278]: I0318 18:07:46.996475 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-trusted-ca-bundle\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:46.996849 master-0 kubenswrapper[30278]: I0318 18:07:46.996828 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-service-ca\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:46.997009 master-0 kubenswrapper[30278]: I0318 18:07:46.996991 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hdx7\" (UniqueName: \"kubernetes.io/projected/27547e71-8f5b-4e31-90c7-491fcda236fb-kube-api-access-9hdx7\") pod 
\"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:46.997119 master-0 kubenswrapper[30278]: I0318 18:07:46.997104 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/27547e71-8f5b-4e31-90c7-491fcda236fb-console-oauth-config\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:46.997246 master-0 kubenswrapper[30278]: I0318 18:07:46.997229 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/27547e71-8f5b-4e31-90c7-491fcda236fb-console-serving-cert\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:46.997376 master-0 kubenswrapper[30278]: I0318 18:07:46.997360 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-console-config\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:46.997475 master-0 kubenswrapper[30278]: I0318 18:07:46.997461 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-oauth-serving-cert\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.046958 master-0 kubenswrapper[30278]: I0318 18:07:47.046894 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:07:47.099581 master-0 kubenswrapper[30278]: I0318 18:07:47.099529 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hdx7\" (UniqueName: \"kubernetes.io/projected/27547e71-8f5b-4e31-90c7-491fcda236fb-kube-api-access-9hdx7\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.099581 master-0 kubenswrapper[30278]: I0318 18:07:47.099580 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/27547e71-8f5b-4e31-90c7-491fcda236fb-console-oauth-config\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.099928 master-0 kubenswrapper[30278]: I0318 18:07:47.099620 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/27547e71-8f5b-4e31-90c7-491fcda236fb-console-serving-cert\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.099928 master-0 kubenswrapper[30278]: I0318 18:07:47.099638 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-console-config\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.099928 master-0 kubenswrapper[30278]: I0318 18:07:47.099671 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-oauth-serving-cert\") pod 
\"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.099928 master-0 kubenswrapper[30278]: I0318 18:07:47.099692 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-trusted-ca-bundle\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.099928 master-0 kubenswrapper[30278]: I0318 18:07:47.099769 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-service-ca\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.100716 master-0 kubenswrapper[30278]: I0318 18:07:47.100678 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-service-ca\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.102000 master-0 kubenswrapper[30278]: I0318 18:07:47.101959 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-oauth-serving-cert\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.102443 master-0 kubenswrapper[30278]: I0318 18:07:47.102412 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-trusted-ca-bundle\") pod 
\"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.102732 master-0 kubenswrapper[30278]: I0318 18:07:47.102699 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-console-config\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.104305 master-0 kubenswrapper[30278]: I0318 18:07:47.104260 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/27547e71-8f5b-4e31-90c7-491fcda236fb-console-oauth-config\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.104817 master-0 kubenswrapper[30278]: I0318 18:07:47.104794 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/27547e71-8f5b-4e31-90c7-491fcda236fb-console-serving-cert\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.118088 master-0 kubenswrapper[30278]: I0318 18:07:47.117767 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hdx7\" (UniqueName: \"kubernetes.io/projected/27547e71-8f5b-4e31-90c7-491fcda236fb-kube-api-access-9hdx7\") pod \"console-69cdb7b474-rkjr2\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.266262 master-0 kubenswrapper[30278]: I0318 18:07:47.266080 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:47.721962 master-0 kubenswrapper[30278]: I0318 18:07:47.721863 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69cdb7b474-rkjr2"] Mar 18 18:07:48.508714 master-0 kubenswrapper[30278]: I0318 18:07:48.507212 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69cdb7b474-rkjr2" event={"ID":"27547e71-8f5b-4e31-90c7-491fcda236fb","Type":"ContainerStarted","Data":"4f6223e81be4e67b1f1e90f7dd170b1fc79d3b15bf2136de76be369b9a6f81e2"} Mar 18 18:07:48.508714 master-0 kubenswrapper[30278]: I0318 18:07:48.508175 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69cdb7b474-rkjr2" event={"ID":"27547e71-8f5b-4e31-90c7-491fcda236fb","Type":"ContainerStarted","Data":"822dba3b8dd921fac3775f19dbf91ebebbb56e3412c701a58e84efbf8440d6bf"} Mar 18 18:07:48.540549 master-0 kubenswrapper[30278]: I0318 18:07:48.540292 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69cdb7b474-rkjr2" podStartSLOduration=2.540257652 podStartE2EDuration="2.540257652s" podCreationTimestamp="2026-03-18 18:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:07:48.535110915 +0000 UTC m=+437.702295510" watchObservedRunningTime="2026-03-18 18:07:48.540257652 +0000 UTC m=+437.707442237" Mar 18 18:07:55.289631 master-0 kubenswrapper[30278]: I0318 18:07:55.289544 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-b79998fb9-lngkn" Mar 18 18:07:57.267438 master-0 kubenswrapper[30278]: I0318 18:07:57.267352 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:57.267438 master-0 kubenswrapper[30278]: I0318 18:07:57.267456 30278 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:57.274470 master-0 kubenswrapper[30278]: I0318 18:07:57.274396 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:57.587451 master-0 kubenswrapper[30278]: I0318 18:07:57.587304 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:07:57.678522 master-0 kubenswrapper[30278]: I0318 18:07:57.678449 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-9df654797-6rk29"] Mar 18 18:08:04.409948 master-0 kubenswrapper[30278]: I0318 18:08:04.409886 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:08:04.443643 master-0 kubenswrapper[30278]: I0318 18:08:04.443594 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:08:04.688541 master-0 kubenswrapper[30278]: I0318 18:08:04.688388 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 18:08:09.961256 master-0 kubenswrapper[30278]: I0318 18:08:09.961088 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5467bbc6b5-q6qdv" podUID="b3903332-0da7-4cb1-95fa-a746750be09f" containerName="console" containerID="cri-o://d46dbcc2484d2a1d9230ebfa72dccfcdf7a6561f69733f3f356b26d39c43b618" gracePeriod=15 Mar 18 18:08:10.424639 master-0 kubenswrapper[30278]: I0318 18:08:10.424542 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5467bbc6b5-q6qdv_b3903332-0da7-4cb1-95fa-a746750be09f/console/0.log" Mar 18 18:08:10.424988 master-0 kubenswrapper[30278]: I0318 18:08:10.424754 30278 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5467bbc6b5-q6qdv" Mar 18 18:08:10.596126 master-0 kubenswrapper[30278]: I0318 18:08:10.596003 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3903332-0da7-4cb1-95fa-a746750be09f-console-serving-cert\") pod \"b3903332-0da7-4cb1-95fa-a746750be09f\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " Mar 18 18:08:10.596659 master-0 kubenswrapper[30278]: I0318 18:08:10.596175 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-console-config\") pod \"b3903332-0da7-4cb1-95fa-a746750be09f\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " Mar 18 18:08:10.596659 master-0 kubenswrapper[30278]: I0318 18:08:10.596257 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slqhs\" (UniqueName: \"kubernetes.io/projected/b3903332-0da7-4cb1-95fa-a746750be09f-kube-api-access-slqhs\") pod \"b3903332-0da7-4cb1-95fa-a746750be09f\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " Mar 18 18:08:10.596659 master-0 kubenswrapper[30278]: I0318 18:08:10.596469 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-oauth-serving-cert\") pod \"b3903332-0da7-4cb1-95fa-a746750be09f\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " Mar 18 18:08:10.596659 master-0 kubenswrapper[30278]: I0318 18:08:10.596522 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-trusted-ca-bundle\") pod \"b3903332-0da7-4cb1-95fa-a746750be09f\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " Mar 18 
18:08:10.596659 master-0 kubenswrapper[30278]: I0318 18:08:10.596546 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-service-ca\") pod \"b3903332-0da7-4cb1-95fa-a746750be09f\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " Mar 18 18:08:10.596659 master-0 kubenswrapper[30278]: I0318 18:08:10.596592 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b3903332-0da7-4cb1-95fa-a746750be09f-console-oauth-config\") pod \"b3903332-0da7-4cb1-95fa-a746750be09f\" (UID: \"b3903332-0da7-4cb1-95fa-a746750be09f\") " Mar 18 18:08:10.597302 master-0 kubenswrapper[30278]: I0318 18:08:10.597195 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-console-config" (OuterVolumeSpecName: "console-config") pod "b3903332-0da7-4cb1-95fa-a746750be09f" (UID: "b3903332-0da7-4cb1-95fa-a746750be09f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:08:10.598003 master-0 kubenswrapper[30278]: I0318 18:08:10.597930 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b3903332-0da7-4cb1-95fa-a746750be09f" (UID: "b3903332-0da7-4cb1-95fa-a746750be09f"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:08:10.598257 master-0 kubenswrapper[30278]: I0318 18:08:10.598170 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-service-ca" (OuterVolumeSpecName: "service-ca") pod "b3903332-0da7-4cb1-95fa-a746750be09f" (UID: "b3903332-0da7-4cb1-95fa-a746750be09f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:08:10.598334 master-0 kubenswrapper[30278]: I0318 18:08:10.598206 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b3903332-0da7-4cb1-95fa-a746750be09f" (UID: "b3903332-0da7-4cb1-95fa-a746750be09f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:08:10.599518 master-0 kubenswrapper[30278]: I0318 18:08:10.599470 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3903332-0da7-4cb1-95fa-a746750be09f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b3903332-0da7-4cb1-95fa-a746750be09f" (UID: "b3903332-0da7-4cb1-95fa-a746750be09f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:08:10.599679 master-0 kubenswrapper[30278]: I0318 18:08:10.599631 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3903332-0da7-4cb1-95fa-a746750be09f-kube-api-access-slqhs" (OuterVolumeSpecName: "kube-api-access-slqhs") pod "b3903332-0da7-4cb1-95fa-a746750be09f" (UID: "b3903332-0da7-4cb1-95fa-a746750be09f"). InnerVolumeSpecName "kube-api-access-slqhs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:08:10.602639 master-0 kubenswrapper[30278]: I0318 18:08:10.602597 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3903332-0da7-4cb1-95fa-a746750be09f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b3903332-0da7-4cb1-95fa-a746750be09f" (UID: "b3903332-0da7-4cb1-95fa-a746750be09f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:08:10.694618 master-0 kubenswrapper[30278]: I0318 18:08:10.694451 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5467bbc6b5-q6qdv_b3903332-0da7-4cb1-95fa-a746750be09f/console/0.log" Mar 18 18:08:10.694618 master-0 kubenswrapper[30278]: I0318 18:08:10.694531 30278 generic.go:334] "Generic (PLEG): container finished" podID="b3903332-0da7-4cb1-95fa-a746750be09f" containerID="d46dbcc2484d2a1d9230ebfa72dccfcdf7a6561f69733f3f356b26d39c43b618" exitCode=2 Mar 18 18:08:10.694618 master-0 kubenswrapper[30278]: I0318 18:08:10.694573 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5467bbc6b5-q6qdv" event={"ID":"b3903332-0da7-4cb1-95fa-a746750be09f","Type":"ContainerDied","Data":"d46dbcc2484d2a1d9230ebfa72dccfcdf7a6561f69733f3f356b26d39c43b618"} Mar 18 18:08:10.695017 master-0 kubenswrapper[30278]: I0318 18:08:10.694629 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5467bbc6b5-q6qdv"
Mar 18 18:08:10.695017 master-0 kubenswrapper[30278]: I0318 18:08:10.694621 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5467bbc6b5-q6qdv" event={"ID":"b3903332-0da7-4cb1-95fa-a746750be09f","Type":"ContainerDied","Data":"1b32bc9c9ecedf522fba277fdcf3ab367e418ab158b06f02f2b86bbb3537ed74"}
Mar 18 18:08:10.695017 master-0 kubenswrapper[30278]: I0318 18:08:10.694707 30278 scope.go:117] "RemoveContainer" containerID="d46dbcc2484d2a1d9230ebfa72dccfcdf7a6561f69733f3f356b26d39c43b618"
Mar 18 18:08:10.699868 master-0 kubenswrapper[30278]: I0318 18:08:10.698185 30278 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:10.699868 master-0 kubenswrapper[30278]: I0318 18:08:10.698221 30278 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:10.699868 master-0 kubenswrapper[30278]: I0318 18:08:10.698235 30278 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:10.699868 master-0 kubenswrapper[30278]: I0318 18:08:10.698247 30278 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b3903332-0da7-4cb1-95fa-a746750be09f-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:10.699868 master-0 kubenswrapper[30278]: I0318 18:08:10.698259 30278 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3903332-0da7-4cb1-95fa-a746750be09f-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:10.699868 master-0 kubenswrapper[30278]: I0318 18:08:10.698291 30278 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b3903332-0da7-4cb1-95fa-a746750be09f-console-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:10.699868 master-0 kubenswrapper[30278]: I0318 18:08:10.698302 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slqhs\" (UniqueName: \"kubernetes.io/projected/b3903332-0da7-4cb1-95fa-a746750be09f-kube-api-access-slqhs\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:10.728335 master-0 kubenswrapper[30278]: I0318 18:08:10.728154 30278 scope.go:117] "RemoveContainer" containerID="d46dbcc2484d2a1d9230ebfa72dccfcdf7a6561f69733f3f356b26d39c43b618"
Mar 18 18:08:10.728910 master-0 kubenswrapper[30278]: E0318 18:08:10.728835 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d46dbcc2484d2a1d9230ebfa72dccfcdf7a6561f69733f3f356b26d39c43b618\": container with ID starting with d46dbcc2484d2a1d9230ebfa72dccfcdf7a6561f69733f3f356b26d39c43b618 not found: ID does not exist" containerID="d46dbcc2484d2a1d9230ebfa72dccfcdf7a6561f69733f3f356b26d39c43b618"
Mar 18 18:08:10.728910 master-0 kubenswrapper[30278]: I0318 18:08:10.728923 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d46dbcc2484d2a1d9230ebfa72dccfcdf7a6561f69733f3f356b26d39c43b618"} err="failed to get container status \"d46dbcc2484d2a1d9230ebfa72dccfcdf7a6561f69733f3f356b26d39c43b618\": rpc error: code = NotFound desc = could not find container \"d46dbcc2484d2a1d9230ebfa72dccfcdf7a6561f69733f3f356b26d39c43b618\": container with ID starting with d46dbcc2484d2a1d9230ebfa72dccfcdf7a6561f69733f3f356b26d39c43b618 not found: ID does not exist"
Mar 18 18:08:10.746919 master-0 kubenswrapper[30278]: I0318 18:08:10.745586 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5467bbc6b5-q6qdv"]
Mar 18 18:08:10.751179 master-0 kubenswrapper[30278]: I0318 18:08:10.751107 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5467bbc6b5-q6qdv"]
Mar 18 18:08:11.069518 master-0 kubenswrapper[30278]: I0318 18:08:11.069441 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3903332-0da7-4cb1-95fa-a746750be09f" path="/var/lib/kubelet/pods/b3903332-0da7-4cb1-95fa-a746750be09f/volumes"
Mar 18 18:08:13.549941 master-0 kubenswrapper[30278]: I0318 18:08:13.549777 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-b79998fb9-lngkn" podUID="318e6e33-711c-4ca1-940b-fc28e25e673f" containerName="console" containerID="cri-o://09ec2e426e7c782142cb59f9c9f442ab8b0a94d7277b40e8153d584ab701f393" gracePeriod=15
Mar 18 18:08:13.733805 master-0 kubenswrapper[30278]: I0318 18:08:13.733730 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-b79998fb9-lngkn_318e6e33-711c-4ca1-940b-fc28e25e673f/console/0.log"
Mar 18 18:08:13.734135 master-0 kubenswrapper[30278]: I0318 18:08:13.733871 30278 generic.go:334] "Generic (PLEG): container finished" podID="318e6e33-711c-4ca1-940b-fc28e25e673f" containerID="09ec2e426e7c782142cb59f9c9f442ab8b0a94d7277b40e8153d584ab701f393" exitCode=2
Mar 18 18:08:13.734135 master-0 kubenswrapper[30278]: I0318 18:08:13.733919 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b79998fb9-lngkn" event={"ID":"318e6e33-711c-4ca1-940b-fc28e25e673f","Type":"ContainerDied","Data":"09ec2e426e7c782142cb59f9c9f442ab8b0a94d7277b40e8153d584ab701f393"}
Mar 18 18:08:14.124519 master-0 kubenswrapper[30278]: I0318 18:08:14.124480 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-b79998fb9-lngkn_318e6e33-711c-4ca1-940b-fc28e25e673f/console/0.log"
Mar 18 18:08:14.124850 master-0 kubenswrapper[30278]: I0318 18:08:14.124833 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b79998fb9-lngkn"
Mar 18 18:08:14.265128 master-0 kubenswrapper[30278]: I0318 18:08:14.265040 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-console-config\") pod \"318e6e33-711c-4ca1-940b-fc28e25e673f\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") "
Mar 18 18:08:14.265545 master-0 kubenswrapper[30278]: I0318 18:08:14.265191 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-service-ca\") pod \"318e6e33-711c-4ca1-940b-fc28e25e673f\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") "
Mar 18 18:08:14.265884 master-0 kubenswrapper[30278]: I0318 18:08:14.265823 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lc2z\" (UniqueName: \"kubernetes.io/projected/318e6e33-711c-4ca1-940b-fc28e25e673f-kube-api-access-8lc2z\") pod \"318e6e33-711c-4ca1-940b-fc28e25e673f\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") "
Mar 18 18:08:14.266954 master-0 kubenswrapper[30278]: I0318 18:08:14.266520 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-trusted-ca-bundle\") pod \"318e6e33-711c-4ca1-940b-fc28e25e673f\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") "
Mar 18 18:08:14.266954 master-0 kubenswrapper[30278]: I0318 18:08:14.266619 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-oauth-serving-cert\") pod \"318e6e33-711c-4ca1-940b-fc28e25e673f\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") "
Mar 18 18:08:14.266954 master-0 kubenswrapper[30278]: I0318 18:08:14.266666 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/318e6e33-711c-4ca1-940b-fc28e25e673f-console-oauth-config\") pod \"318e6e33-711c-4ca1-940b-fc28e25e673f\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") "
Mar 18 18:08:14.266954 master-0 kubenswrapper[30278]: I0318 18:08:14.265985 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-console-config" (OuterVolumeSpecName: "console-config") pod "318e6e33-711c-4ca1-940b-fc28e25e673f" (UID: "318e6e33-711c-4ca1-940b-fc28e25e673f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:08:14.266954 master-0 kubenswrapper[30278]: I0318 18:08:14.266719 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/318e6e33-711c-4ca1-940b-fc28e25e673f-console-serving-cert\") pod \"318e6e33-711c-4ca1-940b-fc28e25e673f\" (UID: \"318e6e33-711c-4ca1-940b-fc28e25e673f\") "
Mar 18 18:08:14.266954 master-0 kubenswrapper[30278]: I0318 18:08:14.266054 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-service-ca" (OuterVolumeSpecName: "service-ca") pod "318e6e33-711c-4ca1-940b-fc28e25e673f" (UID: "318e6e33-711c-4ca1-940b-fc28e25e673f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:08:14.267472 master-0 kubenswrapper[30278]: I0318 18:08:14.267039 30278 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:14.267472 master-0 kubenswrapper[30278]: I0318 18:08:14.267056 30278 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-console-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:14.267472 master-0 kubenswrapper[30278]: I0318 18:08:14.267146 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "318e6e33-711c-4ca1-940b-fc28e25e673f" (UID: "318e6e33-711c-4ca1-940b-fc28e25e673f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:08:14.267472 master-0 kubenswrapper[30278]: I0318 18:08:14.267371 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "318e6e33-711c-4ca1-940b-fc28e25e673f" (UID: "318e6e33-711c-4ca1-940b-fc28e25e673f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:08:14.269467 master-0 kubenswrapper[30278]: I0318 18:08:14.269415 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/318e6e33-711c-4ca1-940b-fc28e25e673f-kube-api-access-8lc2z" (OuterVolumeSpecName: "kube-api-access-8lc2z") pod "318e6e33-711c-4ca1-940b-fc28e25e673f" (UID: "318e6e33-711c-4ca1-940b-fc28e25e673f"). InnerVolumeSpecName "kube-api-access-8lc2z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:08:14.270798 master-0 kubenswrapper[30278]: I0318 18:08:14.270712 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/318e6e33-711c-4ca1-940b-fc28e25e673f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "318e6e33-711c-4ca1-940b-fc28e25e673f" (UID: "318e6e33-711c-4ca1-940b-fc28e25e673f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:08:14.271838 master-0 kubenswrapper[30278]: I0318 18:08:14.271767 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/318e6e33-711c-4ca1-940b-fc28e25e673f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "318e6e33-711c-4ca1-940b-fc28e25e673f" (UID: "318e6e33-711c-4ca1-940b-fc28e25e673f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:08:14.368006 master-0 kubenswrapper[30278]: I0318 18:08:14.367948 30278 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/318e6e33-711c-4ca1-940b-fc28e25e673f-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:14.368006 master-0 kubenswrapper[30278]: I0318 18:08:14.368002 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lc2z\" (UniqueName: \"kubernetes.io/projected/318e6e33-711c-4ca1-940b-fc28e25e673f-kube-api-access-8lc2z\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:14.368006 master-0 kubenswrapper[30278]: I0318 18:08:14.368018 30278 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:14.368415 master-0 kubenswrapper[30278]: I0318 18:08:14.368030 30278 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/318e6e33-711c-4ca1-940b-fc28e25e673f-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:14.368415 master-0 kubenswrapper[30278]: I0318 18:08:14.368043 30278 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/318e6e33-711c-4ca1-940b-fc28e25e673f-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:14.749434 master-0 kubenswrapper[30278]: I0318 18:08:14.749209 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-b79998fb9-lngkn_318e6e33-711c-4ca1-940b-fc28e25e673f/console/0.log"
Mar 18 18:08:14.749434 master-0 kubenswrapper[30278]: I0318 18:08:14.749399 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b79998fb9-lngkn" event={"ID":"318e6e33-711c-4ca1-940b-fc28e25e673f","Type":"ContainerDied","Data":"7580a92c6acb7f5b933bf9f20869d51243254be7c972a6a9e4784a058dad75ad"}
Mar 18 18:08:14.749434 master-0 kubenswrapper[30278]: I0318 18:08:14.749442 30278 scope.go:117] "RemoveContainer" containerID="09ec2e426e7c782142cb59f9c9f442ab8b0a94d7277b40e8153d584ab701f393"
Mar 18 18:08:14.750539 master-0 kubenswrapper[30278]: I0318 18:08:14.749564 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b79998fb9-lngkn"
Mar 18 18:08:14.798516 master-0 kubenswrapper[30278]: I0318 18:08:14.798427 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-b79998fb9-lngkn"]
Mar 18 18:08:14.808411 master-0 kubenswrapper[30278]: I0318 18:08:14.808347 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-b79998fb9-lngkn"]
Mar 18 18:08:15.069424 master-0 kubenswrapper[30278]: I0318 18:08:15.069265 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="318e6e33-711c-4ca1-940b-fc28e25e673f" path="/var/lib/kubelet/pods/318e6e33-711c-4ca1-940b-fc28e25e673f/volumes"
Mar 18 18:08:19.320300 master-0 kubenswrapper[30278]: I0318 18:08:19.317975 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d47bcf65d-2t257"]
Mar 18 18:08:19.336972 master-0 kubenswrapper[30278]: E0318 18:08:19.336926 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318e6e33-711c-4ca1-940b-fc28e25e673f" containerName="console"
Mar 18 18:08:19.337219 master-0 kubenswrapper[30278]: I0318 18:08:19.337207 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="318e6e33-711c-4ca1-940b-fc28e25e673f" containerName="console"
Mar 18 18:08:19.337352 master-0 kubenswrapper[30278]: E0318 18:08:19.337341 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3903332-0da7-4cb1-95fa-a746750be09f" containerName="console"
Mar 18 18:08:19.337424 master-0 kubenswrapper[30278]: I0318 18:08:19.337414 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3903332-0da7-4cb1-95fa-a746750be09f" containerName="console"
Mar 18 18:08:19.337651 master-0 kubenswrapper[30278]: I0318 18:08:19.337638 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3903332-0da7-4cb1-95fa-a746750be09f" containerName="console"
Mar 18 18:08:19.337744 master-0 kubenswrapper[30278]: I0318 18:08:19.337733 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="318e6e33-711c-4ca1-940b-fc28e25e673f" containerName="console"
Mar 18 18:08:19.338523 master-0 kubenswrapper[30278]: I0318 18:08:19.338501 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.339823 master-0 kubenswrapper[30278]: I0318 18:08:19.339805 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d47bcf65d-2t257"]
Mar 18 18:08:19.363902 master-0 kubenswrapper[30278]: I0318 18:08:19.363791 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-config\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.363902 master-0 kubenswrapper[30278]: I0318 18:08:19.363845 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-service-ca\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.363902 master-0 kubenswrapper[30278]: I0318 18:08:19.363894 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-serving-cert\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.364212 master-0 kubenswrapper[30278]: I0318 18:08:19.363962 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pjf7\" (UniqueName: \"kubernetes.io/projected/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-kube-api-access-9pjf7\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.364212 master-0 kubenswrapper[30278]: I0318 18:08:19.363985 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-trusted-ca-bundle\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.364212 master-0 kubenswrapper[30278]: I0318 18:08:19.364019 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-oauth-config\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.364212 master-0 kubenswrapper[30278]: I0318 18:08:19.364036 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-oauth-serving-cert\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.465346 master-0 kubenswrapper[30278]: I0318 18:08:19.465305 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-serving-cert\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.465693 master-0 kubenswrapper[30278]: I0318 18:08:19.465679 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pjf7\" (UniqueName: \"kubernetes.io/projected/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-kube-api-access-9pjf7\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.467613 master-0 kubenswrapper[30278]: I0318 18:08:19.467598 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-trusted-ca-bundle\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.468633 master-0 kubenswrapper[30278]: I0318 18:08:19.468619 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-oauth-config\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.469073 master-0 kubenswrapper[30278]: I0318 18:08:19.469059 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-oauth-serving-cert\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.470216 master-0 kubenswrapper[30278]: I0318 18:08:19.468736 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-trusted-ca-bundle\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.470216 master-0 kubenswrapper[30278]: I0318 18:08:19.468520 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-serving-cert\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.470216 master-0 kubenswrapper[30278]: I0318 18:08:19.469843 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-oauth-serving-cert\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.470443 master-0 kubenswrapper[30278]: I0318 18:08:19.470427 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-config\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.471088 master-0 kubenswrapper[30278]: I0318 18:08:19.471075 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-service-ca\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.471840 master-0 kubenswrapper[30278]: I0318 18:08:19.471016 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-config\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.471932 master-0 kubenswrapper[30278]: I0318 18:08:19.471756 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-service-ca\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.472722 master-0 kubenswrapper[30278]: I0318 18:08:19.472706 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-oauth-config\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.487002 master-0 kubenswrapper[30278]: I0318 18:08:19.486958 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pjf7\" (UniqueName: \"kubernetes.io/projected/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-kube-api-access-9pjf7\") pod \"console-5d47bcf65d-2t257\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:19.675777 master-0 kubenswrapper[30278]: I0318 18:08:19.675616 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d47bcf65d-2t257"
Mar 18 18:08:20.175737 master-0 kubenswrapper[30278]: I0318 18:08:20.175631 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d47bcf65d-2t257"]
Mar 18 18:08:20.184740 master-0 kubenswrapper[30278]: W0318 18:08:20.184708 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a3ec7d1_8b00_45e3_865d_f696ae42fec1.slice/crio-7f1dd499463643bfbc9969399e7a018b955907370ccdcb3c04bbb7b854cd9c7c WatchSource:0}: Error finding container 7f1dd499463643bfbc9969399e7a018b955907370ccdcb3c04bbb7b854cd9c7c: Status 404 returned error can't find the container with id 7f1dd499463643bfbc9969399e7a018b955907370ccdcb3c04bbb7b854cd9c7c
Mar 18 18:08:20.807946 master-0 kubenswrapper[30278]: I0318 18:08:20.807892 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d47bcf65d-2t257" event={"ID":"2a3ec7d1-8b00-45e3-865d-f696ae42fec1","Type":"ContainerStarted","Data":"e01e5f509e1fe351286d94a227cf13b2a0af2879ca90fc24f3460af23a2e4821"}
Mar 18 18:08:20.807946 master-0 kubenswrapper[30278]: I0318 18:08:20.807943 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d47bcf65d-2t257" event={"ID":"2a3ec7d1-8b00-45e3-865d-f696ae42fec1","Type":"ContainerStarted","Data":"7f1dd499463643bfbc9969399e7a018b955907370ccdcb3c04bbb7b854cd9c7c"}
Mar 18 18:08:20.834684 master-0 kubenswrapper[30278]: I0318 18:08:20.834606 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d47bcf65d-2t257" podStartSLOduration=1.834585183 podStartE2EDuration="1.834585183s" podCreationTimestamp="2026-03-18 18:08:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:08:20.827500153 +0000 UTC m=+469.994684758" watchObservedRunningTime="2026-03-18 18:08:20.834585183 +0000 UTC m=+470.001769788"
Mar 18 18:08:22.722563 master-0 kubenswrapper[30278]: I0318 18:08:22.722451 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-9df654797-6rk29" podUID="722cfd9d-3251-4136-8680-742b888588e2" containerName="console" containerID="cri-o://370b7083f5c39256c7ff65a327d53f1ffe1c4db50255322e4c361b2c01073a28" gracePeriod=15
Mar 18 18:08:23.178197 master-0 kubenswrapper[30278]: I0318 18:08:23.178150 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-9df654797-6rk29_722cfd9d-3251-4136-8680-742b888588e2/console/0.log"
Mar 18 18:08:23.178443 master-0 kubenswrapper[30278]: I0318 18:08:23.178249 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-9df654797-6rk29"
Mar 18 18:08:23.253270 master-0 kubenswrapper[30278]: I0318 18:08:23.253185 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-oauth-serving-cert\") pod \"722cfd9d-3251-4136-8680-742b888588e2\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") "
Mar 18 18:08:23.253270 master-0 kubenswrapper[30278]: I0318 18:08:23.253261 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/722cfd9d-3251-4136-8680-742b888588e2-console-oauth-config\") pod \"722cfd9d-3251-4136-8680-742b888588e2\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") "
Mar 18 18:08:23.253709 master-0 kubenswrapper[30278]: I0318 18:08:23.253324 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-service-ca\") pod \"722cfd9d-3251-4136-8680-742b888588e2\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") "
Mar 18 18:08:23.253709 master-0 kubenswrapper[30278]: I0318 18:08:23.253344 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-trusted-ca-bundle\") pod \"722cfd9d-3251-4136-8680-742b888588e2\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") "
Mar 18 18:08:23.253709 master-0 kubenswrapper[30278]: I0318 18:08:23.253379 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-console-config\") pod \"722cfd9d-3251-4136-8680-742b888588e2\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") "
Mar 18 18:08:23.253709 master-0 kubenswrapper[30278]: I0318 18:08:23.253422 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/722cfd9d-3251-4136-8680-742b888588e2-console-serving-cert\") pod \"722cfd9d-3251-4136-8680-742b888588e2\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") "
Mar 18 18:08:23.253709 master-0 kubenswrapper[30278]: I0318 18:08:23.253538 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d55q9\" (UniqueName: \"kubernetes.io/projected/722cfd9d-3251-4136-8680-742b888588e2-kube-api-access-d55q9\") pod \"722cfd9d-3251-4136-8680-742b888588e2\" (UID: \"722cfd9d-3251-4136-8680-742b888588e2\") "
Mar 18 18:08:23.254016 master-0 kubenswrapper[30278]: I0318 18:08:23.253848 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "722cfd9d-3251-4136-8680-742b888588e2" (UID: "722cfd9d-3251-4136-8680-742b888588e2"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:08:23.254578 master-0 kubenswrapper[30278]: I0318 18:08:23.254534 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-console-config" (OuterVolumeSpecName: "console-config") pod "722cfd9d-3251-4136-8680-742b888588e2" (UID: "722cfd9d-3251-4136-8680-742b888588e2"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:08:23.254578 master-0 kubenswrapper[30278]: I0318 18:08:23.254563 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "722cfd9d-3251-4136-8680-742b888588e2" (UID: "722cfd9d-3251-4136-8680-742b888588e2"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:08:23.255646 master-0 kubenswrapper[30278]: I0318 18:08:23.255571 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-service-ca" (OuterVolumeSpecName: "service-ca") pod "722cfd9d-3251-4136-8680-742b888588e2" (UID: "722cfd9d-3251-4136-8680-742b888588e2"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:08:23.260360 master-0 kubenswrapper[30278]: I0318 18:08:23.260291 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/722cfd9d-3251-4136-8680-742b888588e2-kube-api-access-d55q9" (OuterVolumeSpecName: "kube-api-access-d55q9") pod "722cfd9d-3251-4136-8680-742b888588e2" (UID: "722cfd9d-3251-4136-8680-742b888588e2"). InnerVolumeSpecName "kube-api-access-d55q9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:08:23.260360 master-0 kubenswrapper[30278]: I0318 18:08:23.260269 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/722cfd9d-3251-4136-8680-742b888588e2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "722cfd9d-3251-4136-8680-742b888588e2" (UID: "722cfd9d-3251-4136-8680-742b888588e2"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:08:23.260836 master-0 kubenswrapper[30278]: I0318 18:08:23.260793 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/722cfd9d-3251-4136-8680-742b888588e2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "722cfd9d-3251-4136-8680-742b888588e2" (UID: "722cfd9d-3251-4136-8680-742b888588e2"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:08:23.355450 master-0 kubenswrapper[30278]: I0318 18:08:23.355247 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d55q9\" (UniqueName: \"kubernetes.io/projected/722cfd9d-3251-4136-8680-742b888588e2-kube-api-access-d55q9\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:23.355869 master-0 kubenswrapper[30278]: I0318 18:08:23.355730 30278 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:23.355869 master-0 kubenswrapper[30278]: I0318 18:08:23.355763 30278 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/722cfd9d-3251-4136-8680-742b888588e2-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:23.355869 master-0 kubenswrapper[30278]: I0318 18:08:23.355785 30278 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:23.355869 master-0 kubenswrapper[30278]: I0318 18:08:23.355803 30278 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:23.355869 master-0 kubenswrapper[30278]: I0318 18:08:23.355823 30278 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/722cfd9d-3251-4136-8680-742b888588e2-console-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:23.355869 master-0 kubenswrapper[30278]: I0318 18:08:23.355842 30278 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/722cfd9d-3251-4136-8680-742b888588e2-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 18:08:23.834518 master-0 kubenswrapper[30278]: I0318 18:08:23.834407 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-9df654797-6rk29_722cfd9d-3251-4136-8680-742b888588e2/console/0.log"
Mar 18 18:08:23.834518 master-0 kubenswrapper[30278]: I0318 18:08:23.834509 30278 generic.go:334] "Generic (PLEG): container finished" podID="722cfd9d-3251-4136-8680-742b888588e2" containerID="370b7083f5c39256c7ff65a327d53f1ffe1c4db50255322e4c361b2c01073a28" exitCode=2
Mar 18 18:08:23.835689 master-0 kubenswrapper[30278]: I0318 18:08:23.834554 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-9df654797-6rk29" event={"ID":"722cfd9d-3251-4136-8680-742b888588e2","Type":"ContainerDied","Data":"370b7083f5c39256c7ff65a327d53f1ffe1c4db50255322e4c361b2c01073a28"}
Mar 18 18:08:23.835689 master-0 kubenswrapper[30278]: I0318 18:08:23.834596 30278 kubelet.go:2453] "SyncLoop (PLEG): event for
pod" pod="openshift-console/console-9df654797-6rk29" event={"ID":"722cfd9d-3251-4136-8680-742b888588e2","Type":"ContainerDied","Data":"bc57e684fd02c35b74d2e8afdc2abf0538ab3fcef694bd06212503d110dd2ff0"} Mar 18 18:08:23.835689 master-0 kubenswrapper[30278]: I0318 18:08:23.834624 30278 scope.go:117] "RemoveContainer" containerID="370b7083f5c39256c7ff65a327d53f1ffe1c4db50255322e4c361b2c01073a28" Mar 18 18:08:23.835689 master-0 kubenswrapper[30278]: I0318 18:08:23.834773 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-9df654797-6rk29" Mar 18 18:08:23.859560 master-0 kubenswrapper[30278]: I0318 18:08:23.859490 30278 scope.go:117] "RemoveContainer" containerID="370b7083f5c39256c7ff65a327d53f1ffe1c4db50255322e4c361b2c01073a28" Mar 18 18:08:23.860320 master-0 kubenswrapper[30278]: E0318 18:08:23.860214 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"370b7083f5c39256c7ff65a327d53f1ffe1c4db50255322e4c361b2c01073a28\": container with ID starting with 370b7083f5c39256c7ff65a327d53f1ffe1c4db50255322e4c361b2c01073a28 not found: ID does not exist" containerID="370b7083f5c39256c7ff65a327d53f1ffe1c4db50255322e4c361b2c01073a28" Mar 18 18:08:23.860426 master-0 kubenswrapper[30278]: I0318 18:08:23.860390 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"370b7083f5c39256c7ff65a327d53f1ffe1c4db50255322e4c361b2c01073a28"} err="failed to get container status \"370b7083f5c39256c7ff65a327d53f1ffe1c4db50255322e4c361b2c01073a28\": rpc error: code = NotFound desc = could not find container \"370b7083f5c39256c7ff65a327d53f1ffe1c4db50255322e4c361b2c01073a28\": container with ID starting with 370b7083f5c39256c7ff65a327d53f1ffe1c4db50255322e4c361b2c01073a28 not found: ID does not exist" Mar 18 18:08:23.906445 master-0 kubenswrapper[30278]: I0318 18:08:23.906350 30278 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-console/console-9df654797-6rk29"] Mar 18 18:08:23.915650 master-0 kubenswrapper[30278]: I0318 18:08:23.915578 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-9df654797-6rk29"] Mar 18 18:08:25.069693 master-0 kubenswrapper[30278]: I0318 18:08:25.069593 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="722cfd9d-3251-4136-8680-742b888588e2" path="/var/lib/kubelet/pods/722cfd9d-3251-4136-8680-742b888588e2/volumes" Mar 18 18:08:29.677210 master-0 kubenswrapper[30278]: I0318 18:08:29.677136 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d47bcf65d-2t257" Mar 18 18:08:29.677210 master-0 kubenswrapper[30278]: I0318 18:08:29.677200 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d47bcf65d-2t257" Mar 18 18:08:29.685555 master-0 kubenswrapper[30278]: I0318 18:08:29.685485 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d47bcf65d-2t257" Mar 18 18:08:29.897718 master-0 kubenswrapper[30278]: I0318 18:08:29.897634 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d47bcf65d-2t257" Mar 18 18:08:29.992352 master-0 kubenswrapper[30278]: I0318 18:08:29.992208 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-69cdb7b474-rkjr2"] Mar 18 18:08:30.803873 master-0 kubenswrapper[30278]: I0318 18:08:30.803800 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7c48f8f679-djbqb"] Mar 18 18:08:30.804596 master-0 kubenswrapper[30278]: E0318 18:08:30.804263 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="722cfd9d-3251-4136-8680-742b888588e2" containerName="console" Mar 18 18:08:30.804596 master-0 kubenswrapper[30278]: I0318 18:08:30.804305 30278 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="722cfd9d-3251-4136-8680-742b888588e2" containerName="console" Mar 18 18:08:30.804596 master-0 kubenswrapper[30278]: I0318 18:08:30.804532 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="722cfd9d-3251-4136-8680-742b888588e2" containerName="console" Mar 18 18:08:30.805328 master-0 kubenswrapper[30278]: I0318 18:08:30.805300 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:30.839576 master-0 kubenswrapper[30278]: I0318 18:08:30.839513 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c48f8f679-djbqb"] Mar 18 18:08:31.012475 master-0 kubenswrapper[30278]: I0318 18:08:31.012415 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-trusted-ca-bundle\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.012764 master-0 kubenswrapper[30278]: I0318 18:08:31.012749 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-config\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.012862 master-0 kubenswrapper[30278]: I0318 18:08:31.012847 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-serving-cert\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.012938 master-0 
kubenswrapper[30278]: I0318 18:08:31.012927 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-service-ca\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.013023 master-0 kubenswrapper[30278]: I0318 18:08:31.013012 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-oauth-config\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.013173 master-0 kubenswrapper[30278]: I0318 18:08:31.013160 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq4qd\" (UniqueName: \"kubernetes.io/projected/b294ce2a-9da1-4917-8c73-8e5b6320c88e-kube-api-access-mq4qd\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.013261 master-0 kubenswrapper[30278]: I0318 18:08:31.013247 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-oauth-serving-cert\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.115157 master-0 kubenswrapper[30278]: I0318 18:08:31.115027 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq4qd\" (UniqueName: \"kubernetes.io/projected/b294ce2a-9da1-4917-8c73-8e5b6320c88e-kube-api-access-mq4qd\") pod \"console-7c48f8f679-djbqb\" 
(UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.115157 master-0 kubenswrapper[30278]: I0318 18:08:31.115119 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-oauth-serving-cert\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.115429 master-0 kubenswrapper[30278]: I0318 18:08:31.115182 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-trusted-ca-bundle\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.115429 master-0 kubenswrapper[30278]: I0318 18:08:31.115203 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-config\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.115429 master-0 kubenswrapper[30278]: I0318 18:08:31.115227 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-serving-cert\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.115429 master-0 kubenswrapper[30278]: I0318 18:08:31.115246 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-service-ca\") 
pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.115429 master-0 kubenswrapper[30278]: I0318 18:08:31.115292 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-oauth-config\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.116958 master-0 kubenswrapper[30278]: I0318 18:08:31.116918 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-oauth-serving-cert\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.117170 master-0 kubenswrapper[30278]: I0318 18:08:31.117138 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-trusted-ca-bundle\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.117514 master-0 kubenswrapper[30278]: I0318 18:08:31.117481 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-config\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.117673 master-0 kubenswrapper[30278]: I0318 18:08:31.117644 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-service-ca\") 
pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.119981 master-0 kubenswrapper[30278]: I0318 18:08:31.119828 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-serving-cert\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.123779 master-0 kubenswrapper[30278]: I0318 18:08:31.123745 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-oauth-config\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.131451 master-0 kubenswrapper[30278]: I0318 18:08:31.131417 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq4qd\" (UniqueName: \"kubernetes.io/projected/b294ce2a-9da1-4917-8c73-8e5b6320c88e-kube-api-access-mq4qd\") pod \"console-7c48f8f679-djbqb\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") " pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.187789 master-0 kubenswrapper[30278]: I0318 18:08:31.187732 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:31.649140 master-0 kubenswrapper[30278]: I0318 18:08:31.649064 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c48f8f679-djbqb"] Mar 18 18:08:31.651194 master-0 kubenswrapper[30278]: W0318 18:08:31.651147 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb294ce2a_9da1_4917_8c73_8e5b6320c88e.slice/crio-e642dca84321cf5a84d89f6d201c5193b5a8570eb15c7f2f1fefef39ff70a82f WatchSource:0}: Error finding container e642dca84321cf5a84d89f6d201c5193b5a8570eb15c7f2f1fefef39ff70a82f: Status 404 returned error can't find the container with id e642dca84321cf5a84d89f6d201c5193b5a8570eb15c7f2f1fefef39ff70a82f Mar 18 18:08:31.955787 master-0 kubenswrapper[30278]: I0318 18:08:31.955607 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c48f8f679-djbqb" event={"ID":"b294ce2a-9da1-4917-8c73-8e5b6320c88e","Type":"ContainerStarted","Data":"a6fd61de2952574e9197b1e9727e6230b428aee3a2ba56f41ea19507cc2576e0"} Mar 18 18:08:31.955787 master-0 kubenswrapper[30278]: I0318 18:08:31.955689 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c48f8f679-djbqb" event={"ID":"b294ce2a-9da1-4917-8c73-8e5b6320c88e","Type":"ContainerStarted","Data":"e642dca84321cf5a84d89f6d201c5193b5a8570eb15c7f2f1fefef39ff70a82f"} Mar 18 18:08:31.986509 master-0 kubenswrapper[30278]: I0318 18:08:31.986394 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7c48f8f679-djbqb" podStartSLOduration=1.986368282 podStartE2EDuration="1.986368282s" podCreationTimestamp="2026-03-18 18:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:08:31.985726305 +0000 UTC m=+481.152910920" 
watchObservedRunningTime="2026-03-18 18:08:31.986368282 +0000 UTC m=+481.153552907" Mar 18 18:08:41.188674 master-0 kubenswrapper[30278]: I0318 18:08:41.188584 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:41.188674 master-0 kubenswrapper[30278]: I0318 18:08:41.188669 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:41.195928 master-0 kubenswrapper[30278]: I0318 18:08:41.195882 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:42.055307 master-0 kubenswrapper[30278]: I0318 18:08:42.055231 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7c48f8f679-djbqb" Mar 18 18:08:42.145139 master-0 kubenswrapper[30278]: I0318 18:08:42.145054 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d47bcf65d-2t257"] Mar 18 18:08:53.308358 master-0 kubenswrapper[30278]: I0318 18:08:53.308199 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-q9kcc"] Mar 18 18:08:53.309925 master-0 kubenswrapper[30278]: I0318 18:08:53.309892 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:08:53.316860 master-0 kubenswrapper[30278]: I0318 18:08:53.316813 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Mar 18 18:08:53.323778 master-0 kubenswrapper[30278]: I0318 18:08:53.322775 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Mar 18 18:08:53.323778 master-0 kubenswrapper[30278]: I0318 18:08:53.322887 30278 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Mar 18 18:08:53.323778 master-0 kubenswrapper[30278]: I0318 18:08:53.323103 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Mar 18 18:08:53.342980 master-0 kubenswrapper[30278]: I0318 18:08:53.337906 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-q9kcc"] Mar 18 18:08:53.383007 master-0 kubenswrapper[30278]: I0318 18:08:53.381195 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-ltp6d"] Mar 18 18:08:53.383007 master-0 kubenswrapper[30278]: I0318 18:08:53.382738 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-ltp6d" Mar 18 18:08:53.389380 master-0 kubenswrapper[30278]: I0318 18:08:53.388242 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-ltp6d"] Mar 18 18:08:53.389380 master-0 kubenswrapper[30278]: I0318 18:08:53.388535 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 18 18:08:53.392007 master-0 kubenswrapper[30278]: I0318 18:08:53.391221 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 18 18:08:53.436398 master-0 kubenswrapper[30278]: I0318 18:08:53.436343 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/d3cdc990-12c3-4d4e-b059-51f2fa10c969-os-client-config\") pod \"sushy-emulator-59477995f9-q9kcc\" (UID: \"d3cdc990-12c3-4d4e-b059-51f2fa10c969\") " pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:08:53.436647 master-0 kubenswrapper[30278]: I0318 18:08:53.436462 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl6r7\" (UniqueName: \"kubernetes.io/projected/d3cdc990-12c3-4d4e-b059-51f2fa10c969-kube-api-access-zl6r7\") pod \"sushy-emulator-59477995f9-q9kcc\" (UID: \"d3cdc990-12c3-4d4e-b059-51f2fa10c969\") " pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:08:53.436647 master-0 kubenswrapper[30278]: I0318 18:08:53.436530 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/d3cdc990-12c3-4d4e-b059-51f2fa10c969-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-q9kcc\" (UID: \"d3cdc990-12c3-4d4e-b059-51f2fa10c969\") " 
pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:08:53.538407 master-0 kubenswrapper[30278]: I0318 18:08:53.538216 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/d3cdc990-12c3-4d4e-b059-51f2fa10c969-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-q9kcc\" (UID: \"d3cdc990-12c3-4d4e-b059-51f2fa10c969\") " pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:08:53.538629 master-0 kubenswrapper[30278]: I0318 18:08:53.538439 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/2a4e8663-5d2d-42d8-9196-b39589a193ff-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-ltp6d\" (UID: \"2a4e8663-5d2d-42d8-9196-b39589a193ff\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-ltp6d" Mar 18 18:08:53.538629 master-0 kubenswrapper[30278]: I0318 18:08:53.538479 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2a4e8663-5d2d-42d8-9196-b39589a193ff-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-ltp6d\" (UID: \"2a4e8663-5d2d-42d8-9196-b39589a193ff\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-ltp6d" Mar 18 18:08:53.538629 master-0 kubenswrapper[30278]: I0318 18:08:53.538593 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/d3cdc990-12c3-4d4e-b059-51f2fa10c969-os-client-config\") pod \"sushy-emulator-59477995f9-q9kcc\" (UID: \"d3cdc990-12c3-4d4e-b059-51f2fa10c969\") " pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:08:53.538731 master-0 kubenswrapper[30278]: I0318 18:08:53.538637 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-zl6r7\" (UniqueName: \"kubernetes.io/projected/d3cdc990-12c3-4d4e-b059-51f2fa10c969-kube-api-access-zl6r7\") pod \"sushy-emulator-59477995f9-q9kcc\" (UID: \"d3cdc990-12c3-4d4e-b059-51f2fa10c969\") " pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:08:53.539696 master-0 kubenswrapper[30278]: I0318 18:08:53.539667 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/d3cdc990-12c3-4d4e-b059-51f2fa10c969-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-q9kcc\" (UID: \"d3cdc990-12c3-4d4e-b059-51f2fa10c969\") " pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:08:53.549189 master-0 kubenswrapper[30278]: I0318 18:08:53.549141 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/d3cdc990-12c3-4d4e-b059-51f2fa10c969-os-client-config\") pod \"sushy-emulator-59477995f9-q9kcc\" (UID: \"d3cdc990-12c3-4d4e-b059-51f2fa10c969\") " pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:08:53.558442 master-0 kubenswrapper[30278]: I0318 18:08:53.558350 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl6r7\" (UniqueName: \"kubernetes.io/projected/d3cdc990-12c3-4d4e-b059-51f2fa10c969-kube-api-access-zl6r7\") pod \"sushy-emulator-59477995f9-q9kcc\" (UID: \"d3cdc990-12c3-4d4e-b059-51f2fa10c969\") " pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:08:53.641121 master-0 kubenswrapper[30278]: I0318 18:08:53.640993 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/2a4e8663-5d2d-42d8-9196-b39589a193ff-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-ltp6d\" (UID: \"2a4e8663-5d2d-42d8-9196-b39589a193ff\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-ltp6d" 
Mar 18 18:08:53.641603 master-0 kubenswrapper[30278]: I0318 18:08:53.641452 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2a4e8663-5d2d-42d8-9196-b39589a193ff-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-ltp6d\" (UID: \"2a4e8663-5d2d-42d8-9196-b39589a193ff\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-ltp6d" Mar 18 18:08:53.643145 master-0 kubenswrapper[30278]: I0318 18:08:53.643084 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2a4e8663-5d2d-42d8-9196-b39589a193ff-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-ltp6d\" (UID: \"2a4e8663-5d2d-42d8-9196-b39589a193ff\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-ltp6d" Mar 18 18:08:53.646489 master-0 kubenswrapper[30278]: I0318 18:08:53.646437 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/2a4e8663-5d2d-42d8-9196-b39589a193ff-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-ltp6d\" (UID: \"2a4e8663-5d2d-42d8-9196-b39589a193ff\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-ltp6d" Mar 18 18:08:53.693477 master-0 kubenswrapper[30278]: I0318 18:08:53.693379 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:08:53.707512 master-0 kubenswrapper[30278]: I0318 18:08:53.707445 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-ltp6d" Mar 18 18:08:54.185091 master-0 kubenswrapper[30278]: I0318 18:08:54.185000 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-q9kcc"] Mar 18 18:08:54.195946 master-0 kubenswrapper[30278]: W0318 18:08:54.195863 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3cdc990_12c3_4d4e_b059_51f2fa10c969.slice/crio-2ecee873876be4aa2f20a7f07d9a54c49f5e570dfc099989af2bb9c13fb1c475 WatchSource:0}: Error finding container 2ecee873876be4aa2f20a7f07d9a54c49f5e570dfc099989af2bb9c13fb1c475: Status 404 returned error can't find the container with id 2ecee873876be4aa2f20a7f07d9a54c49f5e570dfc099989af2bb9c13fb1c475 Mar 18 18:08:54.248321 master-0 kubenswrapper[30278]: I0318 18:08:54.248208 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-ltp6d"] Mar 18 18:08:54.322609 master-0 kubenswrapper[30278]: I0318 18:08:54.322502 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" event={"ID":"d3cdc990-12c3-4d4e-b059-51f2fa10c969","Type":"ContainerStarted","Data":"2ecee873876be4aa2f20a7f07d9a54c49f5e570dfc099989af2bb9c13fb1c475"} Mar 18 18:08:54.324909 master-0 kubenswrapper[30278]: I0318 18:08:54.324855 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-ltp6d" event={"ID":"2a4e8663-5d2d-42d8-9196-b39589a193ff","Type":"ContainerStarted","Data":"d06b097663b33002afc66c03d1c23304ef068277df1c2e55e843e64e22f8937d"} Mar 18 18:08:55.045099 master-0 kubenswrapper[30278]: I0318 18:08:55.044958 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-69cdb7b474-rkjr2" podUID="27547e71-8f5b-4e31-90c7-491fcda236fb" 
containerName="console" containerID="cri-o://4f6223e81be4e67b1f1e90f7dd170b1fc79d3b15bf2136de76be369b9a6f81e2" gracePeriod=15 Mar 18 18:08:55.338606 master-0 kubenswrapper[30278]: I0318 18:08:55.338335 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-69cdb7b474-rkjr2_27547e71-8f5b-4e31-90c7-491fcda236fb/console/0.log" Mar 18 18:08:55.338606 master-0 kubenswrapper[30278]: I0318 18:08:55.338422 30278 generic.go:334] "Generic (PLEG): container finished" podID="27547e71-8f5b-4e31-90c7-491fcda236fb" containerID="4f6223e81be4e67b1f1e90f7dd170b1fc79d3b15bf2136de76be369b9a6f81e2" exitCode=2 Mar 18 18:08:55.338606 master-0 kubenswrapper[30278]: I0318 18:08:55.338471 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69cdb7b474-rkjr2" event={"ID":"27547e71-8f5b-4e31-90c7-491fcda236fb","Type":"ContainerDied","Data":"4f6223e81be4e67b1f1e90f7dd170b1fc79d3b15bf2136de76be369b9a6f81e2"} Mar 18 18:08:55.587837 master-0 kubenswrapper[30278]: I0318 18:08:55.587692 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-69cdb7b474-rkjr2_27547e71-8f5b-4e31-90c7-491fcda236fb/console/0.log" Mar 18 18:08:55.587837 master-0 kubenswrapper[30278]: I0318 18:08:55.587807 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:08:55.681981 master-0 kubenswrapper[30278]: I0318 18:08:55.681902 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-service-ca\") pod \"27547e71-8f5b-4e31-90c7-491fcda236fb\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " Mar 18 18:08:55.682230 master-0 kubenswrapper[30278]: I0318 18:08:55.682000 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-trusted-ca-bundle\") pod \"27547e71-8f5b-4e31-90c7-491fcda236fb\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " Mar 18 18:08:55.682230 master-0 kubenswrapper[30278]: I0318 18:08:55.682095 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-oauth-serving-cert\") pod \"27547e71-8f5b-4e31-90c7-491fcda236fb\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " Mar 18 18:08:55.682383 master-0 kubenswrapper[30278]: I0318 18:08:55.682247 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/27547e71-8f5b-4e31-90c7-491fcda236fb-console-serving-cert\") pod \"27547e71-8f5b-4e31-90c7-491fcda236fb\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " Mar 18 18:08:55.682383 master-0 kubenswrapper[30278]: I0318 18:08:55.682338 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-console-config\") pod \"27547e71-8f5b-4e31-90c7-491fcda236fb\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " Mar 18 18:08:55.682487 master-0 kubenswrapper[30278]: I0318 
18:08:55.682385 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hdx7\" (UniqueName: \"kubernetes.io/projected/27547e71-8f5b-4e31-90c7-491fcda236fb-kube-api-access-9hdx7\") pod \"27547e71-8f5b-4e31-90c7-491fcda236fb\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " Mar 18 18:08:55.682487 master-0 kubenswrapper[30278]: I0318 18:08:55.682454 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/27547e71-8f5b-4e31-90c7-491fcda236fb-console-oauth-config\") pod \"27547e71-8f5b-4e31-90c7-491fcda236fb\" (UID: \"27547e71-8f5b-4e31-90c7-491fcda236fb\") " Mar 18 18:08:55.682774 master-0 kubenswrapper[30278]: I0318 18:08:55.682692 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-service-ca" (OuterVolumeSpecName: "service-ca") pod "27547e71-8f5b-4e31-90c7-491fcda236fb" (UID: "27547e71-8f5b-4e31-90c7-491fcda236fb"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:08:55.683508 master-0 kubenswrapper[30278]: I0318 18:08:55.683464 30278 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 18:08:55.683703 master-0 kubenswrapper[30278]: I0318 18:08:55.683618 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-console-config" (OuterVolumeSpecName: "console-config") pod "27547e71-8f5b-4e31-90c7-491fcda236fb" (UID: "27547e71-8f5b-4e31-90c7-491fcda236fb"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:08:55.683772 master-0 kubenswrapper[30278]: I0318 18:08:55.683667 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "27547e71-8f5b-4e31-90c7-491fcda236fb" (UID: "27547e71-8f5b-4e31-90c7-491fcda236fb"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:08:55.684074 master-0 kubenswrapper[30278]: I0318 18:08:55.684022 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "27547e71-8f5b-4e31-90c7-491fcda236fb" (UID: "27547e71-8f5b-4e31-90c7-491fcda236fb"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:08:55.686828 master-0 kubenswrapper[30278]: I0318 18:08:55.686773 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27547e71-8f5b-4e31-90c7-491fcda236fb-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "27547e71-8f5b-4e31-90c7-491fcda236fb" (UID: "27547e71-8f5b-4e31-90c7-491fcda236fb"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:08:55.689392 master-0 kubenswrapper[30278]: I0318 18:08:55.689338 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27547e71-8f5b-4e31-90c7-491fcda236fb-kube-api-access-9hdx7" (OuterVolumeSpecName: "kube-api-access-9hdx7") pod "27547e71-8f5b-4e31-90c7-491fcda236fb" (UID: "27547e71-8f5b-4e31-90c7-491fcda236fb"). InnerVolumeSpecName "kube-api-access-9hdx7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:08:55.690509 master-0 kubenswrapper[30278]: I0318 18:08:55.690444 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27547e71-8f5b-4e31-90c7-491fcda236fb-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "27547e71-8f5b-4e31-90c7-491fcda236fb" (UID: "27547e71-8f5b-4e31-90c7-491fcda236fb"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:08:55.785039 master-0 kubenswrapper[30278]: I0318 18:08:55.784923 30278 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:08:55.785039 master-0 kubenswrapper[30278]: I0318 18:08:55.784982 30278 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 18:08:55.785039 master-0 kubenswrapper[30278]: I0318 18:08:55.784994 30278 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/27547e71-8f5b-4e31-90c7-491fcda236fb-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 18:08:55.785039 master-0 kubenswrapper[30278]: I0318 18:08:55.785005 30278 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/27547e71-8f5b-4e31-90c7-491fcda236fb-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:08:55.785039 master-0 kubenswrapper[30278]: I0318 18:08:55.785017 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hdx7\" (UniqueName: \"kubernetes.io/projected/27547e71-8f5b-4e31-90c7-491fcda236fb-kube-api-access-9hdx7\") on node \"master-0\" DevicePath \"\"" Mar 
18 18:08:55.785039 master-0 kubenswrapper[30278]: I0318 18:08:55.785026 30278 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/27547e71-8f5b-4e31-90c7-491fcda236fb-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:08:56.347803 master-0 kubenswrapper[30278]: I0318 18:08:56.347599 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-69cdb7b474-rkjr2_27547e71-8f5b-4e31-90c7-491fcda236fb/console/0.log" Mar 18 18:08:56.347803 master-0 kubenswrapper[30278]: I0318 18:08:56.347661 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69cdb7b474-rkjr2" event={"ID":"27547e71-8f5b-4e31-90c7-491fcda236fb","Type":"ContainerDied","Data":"822dba3b8dd921fac3775f19dbf91ebebbb56e3412c701a58e84efbf8440d6bf"} Mar 18 18:08:56.347803 master-0 kubenswrapper[30278]: I0318 18:08:56.347701 30278 scope.go:117] "RemoveContainer" containerID="4f6223e81be4e67b1f1e90f7dd170b1fc79d3b15bf2136de76be369b9a6f81e2" Mar 18 18:08:56.347803 master-0 kubenswrapper[30278]: I0318 18:08:56.347728 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69cdb7b474-rkjr2" Mar 18 18:08:56.386449 master-0 kubenswrapper[30278]: I0318 18:08:56.386391 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-69cdb7b474-rkjr2"] Mar 18 18:08:56.393129 master-0 kubenswrapper[30278]: I0318 18:08:56.393078 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-69cdb7b474-rkjr2"] Mar 18 18:08:57.065175 master-0 kubenswrapper[30278]: I0318 18:08:57.065107 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27547e71-8f5b-4e31-90c7-491fcda236fb" path="/var/lib/kubelet/pods/27547e71-8f5b-4e31-90c7-491fcda236fb/volumes" Mar 18 18:09:02.405442 master-0 kubenswrapper[30278]: I0318 18:09:02.405341 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" event={"ID":"d3cdc990-12c3-4d4e-b059-51f2fa10c969","Type":"ContainerStarted","Data":"f748a457cb8354b70e164255646680d1b2d69dee36af4494bb47b34c0b66abb5"} Mar 18 18:09:02.407895 master-0 kubenswrapper[30278]: I0318 18:09:02.407839 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-ltp6d" event={"ID":"2a4e8663-5d2d-42d8-9196-b39589a193ff","Type":"ContainerStarted","Data":"208665cd30385471ad0855e7752b1060af677a716f79e3fd67af05d74cf5ff89"} Mar 18 18:09:02.440325 master-0 kubenswrapper[30278]: I0318 18:09:02.440019 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" podStartSLOduration=3.601804487 podStartE2EDuration="11.439992008s" podCreationTimestamp="2026-03-18 18:08:51 +0000 UTC" firstStartedPulling="2026-03-18 18:08:54.19523908 +0000 UTC m=+503.362423705" lastFinishedPulling="2026-03-18 18:09:02.033426591 +0000 UTC m=+511.200611226" observedRunningTime="2026-03-18 18:09:02.438017134 +0000 UTC m=+511.605201769" watchObservedRunningTime="2026-03-18 
18:09:02.439992008 +0000 UTC m=+511.607176603" Mar 18 18:09:02.539308 master-0 kubenswrapper[30278]: I0318 18:09:02.537205 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-7c6b76c555-ltp6d" podStartSLOduration=2.832431135 podStartE2EDuration="10.537167694s" podCreationTimestamp="2026-03-18 18:08:52 +0000 UTC" firstStartedPulling="2026-03-18 18:08:54.254972168 +0000 UTC m=+503.422156753" lastFinishedPulling="2026-03-18 18:09:01.959708707 +0000 UTC m=+511.126893312" observedRunningTime="2026-03-18 18:09:02.470641123 +0000 UTC m=+511.637825738" watchObservedRunningTime="2026-03-18 18:09:02.537167694 +0000 UTC m=+511.704352299" Mar 18 18:09:03.694691 master-0 kubenswrapper[30278]: I0318 18:09:03.694543 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:09:03.694691 master-0 kubenswrapper[30278]: I0318 18:09:03.694654 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:09:03.709326 master-0 kubenswrapper[30278]: I0318 18:09:03.709252 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:09:04.429298 master-0 kubenswrapper[30278]: I0318 18:09:04.429160 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:09:07.182315 master-0 kubenswrapper[30278]: I0318 18:09:07.182209 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5d47bcf65d-2t257" podUID="2a3ec7d1-8b00-45e3-865d-f696ae42fec1" containerName="console" containerID="cri-o://e01e5f509e1fe351286d94a227cf13b2a0af2879ca90fc24f3460af23a2e4821" gracePeriod=15 Mar 18 18:09:07.465337 master-0 kubenswrapper[30278]: I0318 18:09:07.465256 30278 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-console_console-5d47bcf65d-2t257_2a3ec7d1-8b00-45e3-865d-f696ae42fec1/console/0.log" Mar 18 18:09:07.465567 master-0 kubenswrapper[30278]: I0318 18:09:07.465390 30278 generic.go:334] "Generic (PLEG): container finished" podID="2a3ec7d1-8b00-45e3-865d-f696ae42fec1" containerID="e01e5f509e1fe351286d94a227cf13b2a0af2879ca90fc24f3460af23a2e4821" exitCode=2 Mar 18 18:09:07.465567 master-0 kubenswrapper[30278]: I0318 18:09:07.465433 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d47bcf65d-2t257" event={"ID":"2a3ec7d1-8b00-45e3-865d-f696ae42fec1","Type":"ContainerDied","Data":"e01e5f509e1fe351286d94a227cf13b2a0af2879ca90fc24f3460af23a2e4821"} Mar 18 18:09:07.658448 master-0 kubenswrapper[30278]: I0318 18:09:07.658366 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d47bcf65d-2t257_2a3ec7d1-8b00-45e3-865d-f696ae42fec1/console/0.log" Mar 18 18:09:07.658448 master-0 kubenswrapper[30278]: I0318 18:09:07.658455 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d47bcf65d-2t257" Mar 18 18:09:07.814310 master-0 kubenswrapper[30278]: I0318 18:09:07.814215 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-oauth-serving-cert\") pod \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " Mar 18 18:09:07.814570 master-0 kubenswrapper[30278]: I0318 18:09:07.814554 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-config\") pod \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " Mar 18 18:09:07.814610 master-0 kubenswrapper[30278]: I0318 18:09:07.814598 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-oauth-config\") pod \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " Mar 18 18:09:07.814653 master-0 kubenswrapper[30278]: I0318 18:09:07.814645 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pjf7\" (UniqueName: \"kubernetes.io/projected/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-kube-api-access-9pjf7\") pod \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " Mar 18 18:09:07.814717 master-0 kubenswrapper[30278]: I0318 18:09:07.814692 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-serving-cert\") pod \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " Mar 18 18:09:07.814757 master-0 
kubenswrapper[30278]: I0318 18:09:07.814722 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-trusted-ca-bundle\") pod \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " Mar 18 18:09:07.814791 master-0 kubenswrapper[30278]: I0318 18:09:07.814771 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-service-ca\") pod \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\" (UID: \"2a3ec7d1-8b00-45e3-865d-f696ae42fec1\") " Mar 18 18:09:07.815354 master-0 kubenswrapper[30278]: I0318 18:09:07.815243 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2a3ec7d1-8b00-45e3-865d-f696ae42fec1" (UID: "2a3ec7d1-8b00-45e3-865d-f696ae42fec1"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:09:07.815414 master-0 kubenswrapper[30278]: I0318 18:09:07.815262 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-config" (OuterVolumeSpecName: "console-config") pod "2a3ec7d1-8b00-45e3-865d-f696ae42fec1" (UID: "2a3ec7d1-8b00-45e3-865d-f696ae42fec1"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:09:07.816177 master-0 kubenswrapper[30278]: I0318 18:09:07.816115 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2a3ec7d1-8b00-45e3-865d-f696ae42fec1" (UID: "2a3ec7d1-8b00-45e3-865d-f696ae42fec1"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:09:07.816432 master-0 kubenswrapper[30278]: I0318 18:09:07.816372 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-service-ca" (OuterVolumeSpecName: "service-ca") pod "2a3ec7d1-8b00-45e3-865d-f696ae42fec1" (UID: "2a3ec7d1-8b00-45e3-865d-f696ae42fec1"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:09:07.818515 master-0 kubenswrapper[30278]: I0318 18:09:07.818439 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-kube-api-access-9pjf7" (OuterVolumeSpecName: "kube-api-access-9pjf7") pod "2a3ec7d1-8b00-45e3-865d-f696ae42fec1" (UID: "2a3ec7d1-8b00-45e3-865d-f696ae42fec1"). InnerVolumeSpecName "kube-api-access-9pjf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:09:07.818691 master-0 kubenswrapper[30278]: I0318 18:09:07.818607 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2a3ec7d1-8b00-45e3-865d-f696ae42fec1" (UID: "2a3ec7d1-8b00-45e3-865d-f696ae42fec1"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:09:07.820830 master-0 kubenswrapper[30278]: I0318 18:09:07.820772 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2a3ec7d1-8b00-45e3-865d-f696ae42fec1" (UID: "2a3ec7d1-8b00-45e3-865d-f696ae42fec1"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:09:07.917900 master-0 kubenswrapper[30278]: I0318 18:09:07.917809 30278 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 18:09:07.917900 master-0 kubenswrapper[30278]: I0318 18:09:07.917890 30278 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:09:07.917900 master-0 kubenswrapper[30278]: I0318 18:09:07.917907 30278 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:09:07.917900 master-0 kubenswrapper[30278]: I0318 18:09:07.917921 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pjf7\" (UniqueName: \"kubernetes.io/projected/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-kube-api-access-9pjf7\") on node \"master-0\" DevicePath \"\"" Mar 18 18:09:07.917900 master-0 kubenswrapper[30278]: I0318 18:09:07.917936 30278 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 18:09:07.919221 master-0 kubenswrapper[30278]: I0318 18:09:07.917978 30278 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:09:07.919221 master-0 kubenswrapper[30278]: I0318 18:09:07.917995 30278 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/2a3ec7d1-8b00-45e3-865d-f696ae42fec1-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 18:09:08.479341 master-0 kubenswrapper[30278]: I0318 18:09:08.479231 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d47bcf65d-2t257_2a3ec7d1-8b00-45e3-865d-f696ae42fec1/console/0.log" Mar 18 18:09:08.480370 master-0 kubenswrapper[30278]: I0318 18:09:08.479387 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d47bcf65d-2t257" event={"ID":"2a3ec7d1-8b00-45e3-865d-f696ae42fec1","Type":"ContainerDied","Data":"7f1dd499463643bfbc9969399e7a018b955907370ccdcb3c04bbb7b854cd9c7c"} Mar 18 18:09:08.480370 master-0 kubenswrapper[30278]: I0318 18:09:08.479460 30278 scope.go:117] "RemoveContainer" containerID="e01e5f509e1fe351286d94a227cf13b2a0af2879ca90fc24f3460af23a2e4821" Mar 18 18:09:08.480370 master-0 kubenswrapper[30278]: I0318 18:09:08.479679 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d47bcf65d-2t257" Mar 18 18:09:08.559821 master-0 kubenswrapper[30278]: I0318 18:09:08.559700 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d47bcf65d-2t257"] Mar 18 18:09:08.568333 master-0 kubenswrapper[30278]: I0318 18:09:08.568151 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5d47bcf65d-2t257"] Mar 18 18:09:09.069962 master-0 kubenswrapper[30278]: I0318 18:09:09.069878 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a3ec7d1-8b00-45e3-865d-f696ae42fec1" path="/var/lib/kubelet/pods/2a3ec7d1-8b00-45e3-865d-f696ae42fec1/volumes" Mar 18 18:09:23.425843 master-0 kubenswrapper[30278]: I0318 18:09:23.425766 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-769bf5fc45-glg25"] Mar 18 18:09:23.426871 master-0 kubenswrapper[30278]: E0318 18:09:23.426373 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27547e71-8f5b-4e31-90c7-491fcda236fb" containerName="console" Mar 18 18:09:23.426871 master-0 kubenswrapper[30278]: I0318 18:09:23.426415 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="27547e71-8f5b-4e31-90c7-491fcda236fb" containerName="console" Mar 18 18:09:23.426871 master-0 kubenswrapper[30278]: E0318 18:09:23.426498 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a3ec7d1-8b00-45e3-865d-f696ae42fec1" containerName="console" Mar 18 18:09:23.426871 master-0 kubenswrapper[30278]: I0318 18:09:23.426522 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a3ec7d1-8b00-45e3-865d-f696ae42fec1" containerName="console" Mar 18 18:09:23.427061 master-0 kubenswrapper[30278]: I0318 18:09:23.426876 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a3ec7d1-8b00-45e3-865d-f696ae42fec1" containerName="console" Mar 18 18:09:23.427061 master-0 kubenswrapper[30278]: I0318 18:09:23.426968 30278 
memory_manager.go:354] "RemoveStaleState removing state" podUID="27547e71-8f5b-4e31-90c7-491fcda236fb" containerName="console" Mar 18 18:09:23.428563 master-0 kubenswrapper[30278]: I0318 18:09:23.428505 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-769bf5fc45-glg25" Mar 18 18:09:23.452199 master-0 kubenswrapper[30278]: I0318 18:09:23.452079 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-769bf5fc45-glg25"] Mar 18 18:09:23.531458 master-0 kubenswrapper[30278]: I0318 18:09:23.531353 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnvzq\" (UniqueName: \"kubernetes.io/projected/453b640e-c266-4ee8-96e5-a27fcdba9df4-kube-api-access-lnvzq\") pod \"nova-console-poller-769bf5fc45-glg25\" (UID: \"453b640e-c266-4ee8-96e5-a27fcdba9df4\") " pod="sushy-emulator/nova-console-poller-769bf5fc45-glg25" Mar 18 18:09:23.531458 master-0 kubenswrapper[30278]: I0318 18:09:23.531470 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/453b640e-c266-4ee8-96e5-a27fcdba9df4-os-client-config\") pod \"nova-console-poller-769bf5fc45-glg25\" (UID: \"453b640e-c266-4ee8-96e5-a27fcdba9df4\") " pod="sushy-emulator/nova-console-poller-769bf5fc45-glg25" Mar 18 18:09:23.633680 master-0 kubenswrapper[30278]: I0318 18:09:23.633596 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnvzq\" (UniqueName: \"kubernetes.io/projected/453b640e-c266-4ee8-96e5-a27fcdba9df4-kube-api-access-lnvzq\") pod \"nova-console-poller-769bf5fc45-glg25\" (UID: \"453b640e-c266-4ee8-96e5-a27fcdba9df4\") " pod="sushy-emulator/nova-console-poller-769bf5fc45-glg25" Mar 18 18:09:23.633680 master-0 kubenswrapper[30278]: I0318 18:09:23.633723 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/453b640e-c266-4ee8-96e5-a27fcdba9df4-os-client-config\") pod \"nova-console-poller-769bf5fc45-glg25\" (UID: \"453b640e-c266-4ee8-96e5-a27fcdba9df4\") " pod="sushy-emulator/nova-console-poller-769bf5fc45-glg25" Mar 18 18:09:23.639501 master-0 kubenswrapper[30278]: I0318 18:09:23.639459 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/453b640e-c266-4ee8-96e5-a27fcdba9df4-os-client-config\") pod \"nova-console-poller-769bf5fc45-glg25\" (UID: \"453b640e-c266-4ee8-96e5-a27fcdba9df4\") " pod="sushy-emulator/nova-console-poller-769bf5fc45-glg25" Mar 18 18:09:23.675783 master-0 kubenswrapper[30278]: I0318 18:09:23.675666 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnvzq\" (UniqueName: \"kubernetes.io/projected/453b640e-c266-4ee8-96e5-a27fcdba9df4-kube-api-access-lnvzq\") pod \"nova-console-poller-769bf5fc45-glg25\" (UID: \"453b640e-c266-4ee8-96e5-a27fcdba9df4\") " pod="sushy-emulator/nova-console-poller-769bf5fc45-glg25" Mar 18 18:09:23.752476 master-0 kubenswrapper[30278]: I0318 18:09:23.752237 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-769bf5fc45-glg25" Mar 18 18:09:24.057673 master-0 kubenswrapper[30278]: I0318 18:09:24.057307 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-769bf5fc45-glg25"] Mar 18 18:09:24.064471 master-0 kubenswrapper[30278]: W0318 18:09:24.064382 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod453b640e_c266_4ee8_96e5_a27fcdba9df4.slice/crio-d954da9deff4409911e47007b76f058676d7d18c5935736f65ac6f4cfe2b0efd WatchSource:0}: Error finding container d954da9deff4409911e47007b76f058676d7d18c5935736f65ac6f4cfe2b0efd: Status 404 returned error can't find the container with id d954da9deff4409911e47007b76f058676d7d18c5935736f65ac6f4cfe2b0efd Mar 18 18:09:24.651392 master-0 kubenswrapper[30278]: I0318 18:09:24.651187 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-769bf5fc45-glg25" event={"ID":"453b640e-c266-4ee8-96e5-a27fcdba9df4","Type":"ContainerStarted","Data":"d954da9deff4409911e47007b76f058676d7d18c5935736f65ac6f4cfe2b0efd"} Mar 18 18:09:29.707544 master-0 kubenswrapper[30278]: I0318 18:09:29.707228 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-769bf5fc45-glg25" event={"ID":"453b640e-c266-4ee8-96e5-a27fcdba9df4","Type":"ContainerStarted","Data":"2ed2346c1eb7a7e89276cb2db57bbdfa9d632b545062a1b38bf3ac0ede3e6b12"} Mar 18 18:09:30.720139 master-0 kubenswrapper[30278]: I0318 18:09:30.720057 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-769bf5fc45-glg25" event={"ID":"453b640e-c266-4ee8-96e5-a27fcdba9df4","Type":"ContainerStarted","Data":"3fc6426b8b214a19efa3f8215f01d35dcfcfeadef6ca398a950f8998113193de"} Mar 18 18:09:36.400420 master-0 kubenswrapper[30278]: I0318 18:09:36.400339 30278 scope.go:117] "RemoveContainer" 
containerID="af3223d37de441a43e2bb9840f2c7d68ed9137889a1d1026233d1692393573ca" Mar 18 18:09:51.171582 master-0 kubenswrapper[30278]: I0318 18:09:51.171449 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-poller-769bf5fc45-glg25" podStartSLOduration=22.233380381 podStartE2EDuration="28.171421341s" podCreationTimestamp="2026-03-18 18:09:23 +0000 UTC" firstStartedPulling="2026-03-18 18:09:24.067558836 +0000 UTC m=+533.234743431" lastFinishedPulling="2026-03-18 18:09:30.005599756 +0000 UTC m=+539.172784391" observedRunningTime="2026-03-18 18:09:30.76042728 +0000 UTC m=+539.927611905" watchObservedRunningTime="2026-03-18 18:09:51.171421341 +0000 UTC m=+560.338605966" Mar 18 18:09:51.173612 master-0 kubenswrapper[30278]: I0318 18:09:51.172910 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 18 18:09:51.174519 master-0 kubenswrapper[30278]: I0318 18:09:51.174461 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 18:09:51.179197 master-0 kubenswrapper[30278]: I0318 18:09:51.179084 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 18:09:51.180500 master-0 kubenswrapper[30278]: I0318 18:09:51.179690 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-cskqs" Mar 18 18:09:51.198834 master-0 kubenswrapper[30278]: I0318 18:09:51.198561 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 18 18:09:51.357365 master-0 kubenswrapper[30278]: I0318 18:09:51.357292 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-var-lock\") pod \"installer-4-master-0\" (UID: \"14ce9a69-2bf5-4809-90d9-b0b122aa11e5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 18:09:51.357740 master-0 kubenswrapper[30278]: I0318 18:09:51.357642 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"14ce9a69-2bf5-4809-90d9-b0b122aa11e5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 18:09:51.357802 master-0 kubenswrapper[30278]: I0318 18:09:51.357771 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"14ce9a69-2bf5-4809-90d9-b0b122aa11e5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 18:09:51.459907 master-0 
kubenswrapper[30278]: I0318 18:09:51.459678 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"14ce9a69-2bf5-4809-90d9-b0b122aa11e5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 18:09:51.459907 master-0 kubenswrapper[30278]: I0318 18:09:51.459779 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"14ce9a69-2bf5-4809-90d9-b0b122aa11e5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 18:09:51.459907 master-0 kubenswrapper[30278]: I0318 18:09:51.459856 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"14ce9a69-2bf5-4809-90d9-b0b122aa11e5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 18:09:51.460486 master-0 kubenswrapper[30278]: I0318 18:09:51.459916 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-var-lock\") pod \"installer-4-master-0\" (UID: \"14ce9a69-2bf5-4809-90d9-b0b122aa11e5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 18:09:51.460486 master-0 kubenswrapper[30278]: I0318 18:09:51.460214 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-var-lock\") pod \"installer-4-master-0\" (UID: \"14ce9a69-2bf5-4809-90d9-b0b122aa11e5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 18:09:51.498653 
master-0 kubenswrapper[30278]: I0318 18:09:51.498493 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"14ce9a69-2bf5-4809-90d9-b0b122aa11e5\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 18:09:51.514466 master-0 kubenswrapper[30278]: I0318 18:09:51.514401 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 18:09:51.970555 master-0 kubenswrapper[30278]: I0318 18:09:51.970480 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 18 18:09:51.975638 master-0 kubenswrapper[30278]: W0318 18:09:51.975402 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod14ce9a69_2bf5_4809_90d9_b0b122aa11e5.slice/crio-17d38e4c8b5eb3d2b77c2d3cd296a73447532f2d142906acdf081de620c2323c WatchSource:0}: Error finding container 17d38e4c8b5eb3d2b77c2d3cd296a73447532f2d142906acdf081de620c2323c: Status 404 returned error can't find the container with id 17d38e4c8b5eb3d2b77c2d3cd296a73447532f2d142906acdf081de620c2323c Mar 18 18:09:52.935730 master-0 kubenswrapper[30278]: I0318 18:09:52.935635 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"14ce9a69-2bf5-4809-90d9-b0b122aa11e5","Type":"ContainerStarted","Data":"22e1f6d1a4e16788e923e144a291ae6a910b7ae94879c2bbc84ab52e476aebd2"} Mar 18 18:09:52.935730 master-0 kubenswrapper[30278]: I0318 18:09:52.935725 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"14ce9a69-2bf5-4809-90d9-b0b122aa11e5","Type":"ContainerStarted","Data":"17d38e4c8b5eb3d2b77c2d3cd296a73447532f2d142906acdf081de620c2323c"} 
Mar 18 18:09:52.966649 master-0 kubenswrapper[30278]: I0318 18:09:52.966517 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=1.966486772 podStartE2EDuration="1.966486772s" podCreationTimestamp="2026-03-18 18:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:09:52.961713125 +0000 UTC m=+562.128897750" watchObservedRunningTime="2026-03-18 18:09:52.966486772 +0000 UTC m=+562.133671397" Mar 18 18:09:55.533845 master-0 kubenswrapper[30278]: I0318 18:09:55.533728 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-546f7fd845-mfrbg"] Mar 18 18:09:55.537729 master-0 kubenswrapper[30278]: I0318 18:09:55.535771 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-546f7fd845-mfrbg" Mar 18 18:09:55.555645 master-0 kubenswrapper[30278]: I0318 18:09:55.555580 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-546f7fd845-mfrbg"] Mar 18 18:09:55.641063 master-0 kubenswrapper[30278]: I0318 18:09:55.640897 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/8f3c7af9-c36e-46e6-871c-861154bd71ce-nova-console-recordings-pv\") pod \"nova-console-recorder-546f7fd845-mfrbg\" (UID: \"8f3c7af9-c36e-46e6-871c-861154bd71ce\") " pod="sushy-emulator/nova-console-recorder-546f7fd845-mfrbg" Mar 18 18:09:55.641063 master-0 kubenswrapper[30278]: I0318 18:09:55.641057 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/8f3c7af9-c36e-46e6-871c-861154bd71ce-os-client-config\") pod \"nova-console-recorder-546f7fd845-mfrbg\" 
(UID: \"8f3c7af9-c36e-46e6-871c-861154bd71ce\") " pod="sushy-emulator/nova-console-recorder-546f7fd845-mfrbg" Mar 18 18:09:55.641405 master-0 kubenswrapper[30278]: I0318 18:09:55.641121 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtp4h\" (UniqueName: \"kubernetes.io/projected/8f3c7af9-c36e-46e6-871c-861154bd71ce-kube-api-access-rtp4h\") pod \"nova-console-recorder-546f7fd845-mfrbg\" (UID: \"8f3c7af9-c36e-46e6-871c-861154bd71ce\") " pod="sushy-emulator/nova-console-recorder-546f7fd845-mfrbg" Mar 18 18:09:55.743375 master-0 kubenswrapper[30278]: I0318 18:09:55.743228 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/8f3c7af9-c36e-46e6-871c-861154bd71ce-nova-console-recordings-pv\") pod \"nova-console-recorder-546f7fd845-mfrbg\" (UID: \"8f3c7af9-c36e-46e6-871c-861154bd71ce\") " pod="sushy-emulator/nova-console-recorder-546f7fd845-mfrbg" Mar 18 18:09:55.743679 master-0 kubenswrapper[30278]: I0318 18:09:55.743518 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/8f3c7af9-c36e-46e6-871c-861154bd71ce-os-client-config\") pod \"nova-console-recorder-546f7fd845-mfrbg\" (UID: \"8f3c7af9-c36e-46e6-871c-861154bd71ce\") " pod="sushy-emulator/nova-console-recorder-546f7fd845-mfrbg" Mar 18 18:09:55.743679 master-0 kubenswrapper[30278]: I0318 18:09:55.743570 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtp4h\" (UniqueName: \"kubernetes.io/projected/8f3c7af9-c36e-46e6-871c-861154bd71ce-kube-api-access-rtp4h\") pod \"nova-console-recorder-546f7fd845-mfrbg\" (UID: \"8f3c7af9-c36e-46e6-871c-861154bd71ce\") " pod="sushy-emulator/nova-console-recorder-546f7fd845-mfrbg" Mar 18 18:09:55.752246 master-0 kubenswrapper[30278]: I0318 18:09:55.752115 30278 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/8f3c7af9-c36e-46e6-871c-861154bd71ce-os-client-config\") pod \"nova-console-recorder-546f7fd845-mfrbg\" (UID: \"8f3c7af9-c36e-46e6-871c-861154bd71ce\") " pod="sushy-emulator/nova-console-recorder-546f7fd845-mfrbg" Mar 18 18:09:55.771740 master-0 kubenswrapper[30278]: I0318 18:09:55.771621 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtp4h\" (UniqueName: \"kubernetes.io/projected/8f3c7af9-c36e-46e6-871c-861154bd71ce-kube-api-access-rtp4h\") pod \"nova-console-recorder-546f7fd845-mfrbg\" (UID: \"8f3c7af9-c36e-46e6-871c-861154bd71ce\") " pod="sushy-emulator/nova-console-recorder-546f7fd845-mfrbg" Mar 18 18:09:56.463514 master-0 kubenswrapper[30278]: I0318 18:09:56.463403 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/8f3c7af9-c36e-46e6-871c-861154bd71ce-nova-console-recordings-pv\") pod \"nova-console-recorder-546f7fd845-mfrbg\" (UID: \"8f3c7af9-c36e-46e6-871c-861154bd71ce\") " pod="sushy-emulator/nova-console-recorder-546f7fd845-mfrbg" Mar 18 18:09:56.761135 master-0 kubenswrapper[30278]: I0318 18:09:56.760835 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-recorder-546f7fd845-mfrbg" Mar 18 18:09:57.294620 master-0 kubenswrapper[30278]: I0318 18:09:57.294551 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-546f7fd845-mfrbg"] Mar 18 18:09:57.980621 master-0 kubenswrapper[30278]: I0318 18:09:57.980505 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-546f7fd845-mfrbg" event={"ID":"8f3c7af9-c36e-46e6-871c-861154bd71ce","Type":"ContainerStarted","Data":"9028b37fe1cc04a0c08ce3d828b09c83e213f151d3d6e570575727cc07e249cb"} Mar 18 18:10:07.074006 master-0 kubenswrapper[30278]: I0318 18:10:07.073906 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-546f7fd845-mfrbg" event={"ID":"8f3c7af9-c36e-46e6-871c-861154bd71ce","Type":"ContainerStarted","Data":"b7f887e0124a7f7cc0e178a453544836f0e30e5594e92048dc097b6d81d24d4c"} Mar 18 18:10:09.096695 master-0 kubenswrapper[30278]: I0318 18:10:09.096631 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-546f7fd845-mfrbg" event={"ID":"8f3c7af9-c36e-46e6-871c-861154bd71ce","Type":"ContainerStarted","Data":"2aaefe1fb6043b623f5180a40028503e954e8c84e5b023c64726b3eecaf4ec31"} Mar 18 18:10:09.130614 master-0 kubenswrapper[30278]: I0318 18:10:09.130518 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-546f7fd845-mfrbg" podStartSLOduration=3.292625219 podStartE2EDuration="14.130487685s" podCreationTimestamp="2026-03-18 18:09:55 +0000 UTC" firstStartedPulling="2026-03-18 18:09:57.30398772 +0000 UTC m=+566.471172355" lastFinishedPulling="2026-03-18 18:10:08.141850226 +0000 UTC m=+577.309034821" observedRunningTime="2026-03-18 18:10:09.121125983 +0000 UTC m=+578.288310608" watchObservedRunningTime="2026-03-18 18:10:09.130487685 +0000 UTC m=+578.297672320" Mar 18 18:10:25.443265 master-0 
kubenswrapper[30278]: I0318 18:10:25.443145 30278 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 18:10:25.444403 master-0 kubenswrapper[30278]: I0318 18:10:25.443687 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://b58573729d641d7e86f1ec2365e091375bd8cf625b0a9697be4ea6b82ebe135b" gracePeriod=30 Mar 18 18:10:25.444403 master-0 kubenswrapper[30278]: I0318 18:10:25.443756 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager" containerID="cri-o://c974ce9bca98caf206cacb3590d85f8cb970581a77ff4f55db1e8e82efb4ff2c" gracePeriod=30 Mar 18 18:10:25.444403 master-0 kubenswrapper[30278]: I0318 18:10:25.443896 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="efc76217af9e7119e39d2455d00c223f" containerName="cluster-policy-controller" containerID="cri-o://dec20dd282b8a1026853916cbbdbad7fcda801cf86223b20c47a3250f052fed3" gracePeriod=30 Mar 18 18:10:25.444403 master-0 kubenswrapper[30278]: I0318 18:10:25.443942 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://31da287ae2ee280ceb25c6d586c08cddceb6988bdd57a314f7a80a3ffba9a2ae" gracePeriod=30 Mar 18 18:10:25.447051 master-0 kubenswrapper[30278]: I0318 18:10:25.446190 30278 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 18:10:25.447051 master-0 kubenswrapper[30278]: E0318 18:10:25.446914 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc76217af9e7119e39d2455d00c223f" containerName="cluster-policy-controller" Mar 18 18:10:25.447051 master-0 kubenswrapper[30278]: I0318 18:10:25.446953 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc76217af9e7119e39d2455d00c223f" containerName="cluster-policy-controller" Mar 18 18:10:25.447051 master-0 kubenswrapper[30278]: E0318 18:10:25.447000 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager" Mar 18 18:10:25.447051 master-0 kubenswrapper[30278]: I0318 18:10:25.447024 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager" Mar 18 18:10:25.447051 master-0 kubenswrapper[30278]: E0318 18:10:25.447046 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager-cert-syncer" Mar 18 18:10:25.447051 master-0 kubenswrapper[30278]: I0318 18:10:25.447064 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager-cert-syncer" Mar 18 18:10:25.447051 master-0 kubenswrapper[30278]: E0318 18:10:25.447106 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager" Mar 18 18:10:25.447051 master-0 kubenswrapper[30278]: I0318 18:10:25.447123 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager" Mar 18 18:10:25.448772 master-0 kubenswrapper[30278]: E0318 18:10:25.447150 30278 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="efc76217af9e7119e39d2455d00c223f" containerName="cluster-policy-controller" Mar 18 18:10:25.448772 master-0 kubenswrapper[30278]: I0318 18:10:25.447166 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc76217af9e7119e39d2455d00c223f" containerName="cluster-policy-controller" Mar 18 18:10:25.448772 master-0 kubenswrapper[30278]: E0318 18:10:25.447210 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager-recovery-controller" Mar 18 18:10:25.448772 master-0 kubenswrapper[30278]: I0318 18:10:25.447229 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager-recovery-controller" Mar 18 18:10:25.448772 master-0 kubenswrapper[30278]: I0318 18:10:25.447585 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager" Mar 18 18:10:25.448772 master-0 kubenswrapper[30278]: I0318 18:10:25.447632 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager" Mar 18 18:10:25.448772 master-0 kubenswrapper[30278]: I0318 18:10:25.447660 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager-recovery-controller" Mar 18 18:10:25.448772 master-0 kubenswrapper[30278]: I0318 18:10:25.447687 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="efc76217af9e7119e39d2455d00c223f" containerName="kube-controller-manager-cert-syncer" Mar 18 18:10:25.448772 master-0 kubenswrapper[30278]: I0318 18:10:25.447716 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="efc76217af9e7119e39d2455d00c223f" containerName="cluster-policy-controller" Mar 18 18:10:25.448772 master-0 kubenswrapper[30278]: I0318 18:10:25.448436 30278 
memory_manager.go:354] "RemoveStaleState removing state" podUID="efc76217af9e7119e39d2455d00c223f" containerName="cluster-policy-controller" Mar 18 18:10:25.469760 master-0 kubenswrapper[30278]: I0318 18:10:25.469676 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/05e7d2c9a162447375c640c3bf90c6fd-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05e7d2c9a162447375c640c3bf90c6fd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:10:25.470254 master-0 kubenswrapper[30278]: I0318 18:10:25.470075 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/05e7d2c9a162447375c640c3bf90c6fd-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05e7d2c9a162447375c640c3bf90c6fd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:10:25.571259 master-0 kubenswrapper[30278]: I0318 18:10:25.571199 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/05e7d2c9a162447375c640c3bf90c6fd-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05e7d2c9a162447375c640c3bf90c6fd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:10:25.571397 master-0 kubenswrapper[30278]: I0318 18:10:25.571380 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/05e7d2c9a162447375c640c3bf90c6fd-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05e7d2c9a162447375c640c3bf90c6fd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:10:25.571496 master-0 kubenswrapper[30278]: I0318 18:10:25.571473 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/05e7d2c9a162447375c640c3bf90c6fd-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05e7d2c9a162447375c640c3bf90c6fd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:10:25.571566 master-0 kubenswrapper[30278]: I0318 18:10:25.571508 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/05e7d2c9a162447375c640c3bf90c6fd-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"05e7d2c9a162447375c640c3bf90c6fd\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:10:25.697888 master-0 kubenswrapper[30278]: I0318 18:10:25.697710 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_efc76217af9e7119e39d2455d00c223f/kube-controller-manager-cert-syncer/0.log" Mar 18 18:10:25.698430 master-0 kubenswrapper[30278]: I0318 18:10:25.698388 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_efc76217af9e7119e39d2455d00c223f/cluster-policy-controller/0.log" Mar 18 18:10:25.702882 master-0 kubenswrapper[30278]: I0318 18:10:25.702827 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_efc76217af9e7119e39d2455d00c223f/kube-controller-manager/0.log" Mar 18 18:10:25.703052 master-0 kubenswrapper[30278]: I0318 18:10:25.702942 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:10:25.709416 master-0 kubenswrapper[30278]: I0318 18:10:25.709360 30278 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="efc76217af9e7119e39d2455d00c223f" podUID="05e7d2c9a162447375c640c3bf90c6fd" Mar 18 18:10:25.774305 master-0 kubenswrapper[30278]: I0318 18:10:25.774192 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/efc76217af9e7119e39d2455d00c223f-resource-dir\") pod \"efc76217af9e7119e39d2455d00c223f\" (UID: \"efc76217af9e7119e39d2455d00c223f\") " Mar 18 18:10:25.774641 master-0 kubenswrapper[30278]: I0318 18:10:25.774368 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efc76217af9e7119e39d2455d00c223f-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "efc76217af9e7119e39d2455d00c223f" (UID: "efc76217af9e7119e39d2455d00c223f"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:10:25.774641 master-0 kubenswrapper[30278]: I0318 18:10:25.774267 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/efc76217af9e7119e39d2455d00c223f-cert-dir\") pod \"efc76217af9e7119e39d2455d00c223f\" (UID: \"efc76217af9e7119e39d2455d00c223f\") " Mar 18 18:10:25.774641 master-0 kubenswrapper[30278]: I0318 18:10:25.774442 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efc76217af9e7119e39d2455d00c223f-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "efc76217af9e7119e39d2455d00c223f" (UID: "efc76217af9e7119e39d2455d00c223f"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:10:25.775072 master-0 kubenswrapper[30278]: I0318 18:10:25.775018 30278 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/efc76217af9e7119e39d2455d00c223f-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:10:25.775072 master-0 kubenswrapper[30278]: I0318 18:10:25.775055 30278 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/efc76217af9e7119e39d2455d00c223f-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:10:26.276617 master-0 kubenswrapper[30278]: I0318 18:10:26.276498 30278 generic.go:334] "Generic (PLEG): container finished" podID="14ce9a69-2bf5-4809-90d9-b0b122aa11e5" containerID="22e1f6d1a4e16788e923e144a291ae6a910b7ae94879c2bbc84ab52e476aebd2" exitCode=0 Mar 18 18:10:26.276947 master-0 kubenswrapper[30278]: I0318 18:10:26.276627 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"14ce9a69-2bf5-4809-90d9-b0b122aa11e5","Type":"ContainerDied","Data":"22e1f6d1a4e16788e923e144a291ae6a910b7ae94879c2bbc84ab52e476aebd2"} Mar 18 18:10:26.283489 master-0 kubenswrapper[30278]: I0318 18:10:26.283436 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_efc76217af9e7119e39d2455d00c223f/kube-controller-manager-cert-syncer/0.log" Mar 18 18:10:26.287896 master-0 kubenswrapper[30278]: I0318 18:10:26.287837 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_efc76217af9e7119e39d2455d00c223f/cluster-policy-controller/0.log" Mar 18 18:10:26.289200 master-0 kubenswrapper[30278]: I0318 18:10:26.289146 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_efc76217af9e7119e39d2455d00c223f/kube-controller-manager/0.log" Mar 18 18:10:26.289304 master-0 kubenswrapper[30278]: I0318 18:10:26.289237 30278 generic.go:334] "Generic (PLEG): container finished" podID="efc76217af9e7119e39d2455d00c223f" containerID="c974ce9bca98caf206cacb3590d85f8cb970581a77ff4f55db1e8e82efb4ff2c" exitCode=0 Mar 18 18:10:26.289361 master-0 kubenswrapper[30278]: I0318 18:10:26.289311 30278 generic.go:334] "Generic (PLEG): container finished" podID="efc76217af9e7119e39d2455d00c223f" containerID="dec20dd282b8a1026853916cbbdbad7fcda801cf86223b20c47a3250f052fed3" exitCode=0 Mar 18 18:10:26.289361 master-0 kubenswrapper[30278]: I0318 18:10:26.289344 30278 generic.go:334] "Generic (PLEG): container finished" podID="efc76217af9e7119e39d2455d00c223f" containerID="31da287ae2ee280ceb25c6d586c08cddceb6988bdd57a314f7a80a3ffba9a2ae" exitCode=0 Mar 18 18:10:26.289470 master-0 kubenswrapper[30278]: I0318 18:10:26.289369 30278 generic.go:334] "Generic (PLEG): container finished" podID="efc76217af9e7119e39d2455d00c223f" containerID="b58573729d641d7e86f1ec2365e091375bd8cf625b0a9697be4ea6b82ebe135b" exitCode=2 Mar 18 18:10:26.289470 master-0 kubenswrapper[30278]: I0318 18:10:26.289439 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf25ed1be4c3abef2ee86d44fadd6095dc54deb721dd3c3546ed28b136e56926" Mar 18 18:10:26.289559 master-0 kubenswrapper[30278]: I0318 18:10:26.289475 30278 scope.go:117] "RemoveContainer" containerID="498a5c57b90053a76dc039b2bff8526c3d09fbb3c0193932a4070bb49e9eec20" Mar 18 18:10:26.289559 master-0 kubenswrapper[30278]: I0318 18:10:26.289519 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:10:26.324244 master-0 kubenswrapper[30278]: I0318 18:10:26.321716 30278 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="efc76217af9e7119e39d2455d00c223f" podUID="05e7d2c9a162447375c640c3bf90c6fd" Mar 18 18:10:26.330478 master-0 kubenswrapper[30278]: I0318 18:10:26.330399 30278 scope.go:117] "RemoveContainer" containerID="346470c7e231870f2c02c668d780fdbc24cd909efb0248742f57a63237119f4a" Mar 18 18:10:26.336253 master-0 kubenswrapper[30278]: I0318 18:10:26.336190 30278 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="efc76217af9e7119e39d2455d00c223f" podUID="05e7d2c9a162447375c640c3bf90c6fd" Mar 18 18:10:27.070656 master-0 kubenswrapper[30278]: I0318 18:10:27.070566 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efc76217af9e7119e39d2455d00c223f" path="/var/lib/kubelet/pods/efc76217af9e7119e39d2455d00c223f/volumes" Mar 18 18:10:27.303625 master-0 kubenswrapper[30278]: I0318 18:10:27.303544 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_efc76217af9e7119e39d2455d00c223f/kube-controller-manager-cert-syncer/0.log" Mar 18 18:10:27.720563 master-0 kubenswrapper[30278]: I0318 18:10:27.720496 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 18:10:27.817427 master-0 kubenswrapper[30278]: I0318 18:10:27.817344 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-kube-api-access\") pod \"14ce9a69-2bf5-4809-90d9-b0b122aa11e5\" (UID: \"14ce9a69-2bf5-4809-90d9-b0b122aa11e5\") " Mar 18 18:10:27.817775 master-0 kubenswrapper[30278]: I0318 18:10:27.817514 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-var-lock\") pod \"14ce9a69-2bf5-4809-90d9-b0b122aa11e5\" (UID: \"14ce9a69-2bf5-4809-90d9-b0b122aa11e5\") " Mar 18 18:10:27.817775 master-0 kubenswrapper[30278]: I0318 18:10:27.817537 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-kubelet-dir\") pod \"14ce9a69-2bf5-4809-90d9-b0b122aa11e5\" (UID: \"14ce9a69-2bf5-4809-90d9-b0b122aa11e5\") " Mar 18 18:10:27.817775 master-0 kubenswrapper[30278]: I0318 18:10:27.817713 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-var-lock" (OuterVolumeSpecName: "var-lock") pod "14ce9a69-2bf5-4809-90d9-b0b122aa11e5" (UID: "14ce9a69-2bf5-4809-90d9-b0b122aa11e5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:10:27.818150 master-0 kubenswrapper[30278]: I0318 18:10:27.817769 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "14ce9a69-2bf5-4809-90d9-b0b122aa11e5" (UID: "14ce9a69-2bf5-4809-90d9-b0b122aa11e5"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:10:27.818624 master-0 kubenswrapper[30278]: I0318 18:10:27.818552 30278 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 18:10:27.818624 master-0 kubenswrapper[30278]: I0318 18:10:27.818618 30278 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:10:27.822757 master-0 kubenswrapper[30278]: I0318 18:10:27.822701 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "14ce9a69-2bf5-4809-90d9-b0b122aa11e5" (UID: "14ce9a69-2bf5-4809-90d9-b0b122aa11e5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:10:27.921064 master-0 kubenswrapper[30278]: I0318 18:10:27.920973 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14ce9a69-2bf5-4809-90d9-b0b122aa11e5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 18:10:28.315647 master-0 kubenswrapper[30278]: I0318 18:10:28.315546 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"14ce9a69-2bf5-4809-90d9-b0b122aa11e5","Type":"ContainerDied","Data":"17d38e4c8b5eb3d2b77c2d3cd296a73447532f2d142906acdf081de620c2323c"} Mar 18 18:10:28.315647 master-0 kubenswrapper[30278]: I0318 18:10:28.315615 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17d38e4c8b5eb3d2b77c2d3cd296a73447532f2d142906acdf081de620c2323c" Mar 18 18:10:28.316801 master-0 kubenswrapper[30278]: I0318 18:10:28.315712 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 18 18:10:40.054594 master-0 kubenswrapper[30278]: I0318 18:10:40.054530 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:10:41.215443 master-0 kubenswrapper[30278]: I0318 18:10:41.215402 30278 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="54b31b17-786c-4ae6-904b-806c57d8aa55" Mar 18 18:10:41.215904 master-0 kubenswrapper[30278]: I0318 18:10:41.215886 30278 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="54b31b17-786c-4ae6-904b-806c57d8aa55" Mar 18 18:10:41.247855 master-0 kubenswrapper[30278]: I0318 18:10:41.247815 30278 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:10:41.257513 master-0 kubenswrapper[30278]: I0318 18:10:41.257478 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 18:10:41.270640 master-0 kubenswrapper[30278]: I0318 18:10:41.270556 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 18:10:41.271734 master-0 kubenswrapper[30278]: I0318 18:10:41.271700 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:10:41.276859 master-0 kubenswrapper[30278]: I0318 18:10:41.276821 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 18:10:41.303451 master-0 kubenswrapper[30278]: W0318 18:10:41.303399 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05e7d2c9a162447375c640c3bf90c6fd.slice/crio-92d5185c096547877eb5c9b2a88eaa105c97005210d707e28dcf0b893e023b8f WatchSource:0}: Error finding container 92d5185c096547877eb5c9b2a88eaa105c97005210d707e28dcf0b893e023b8f: Status 404 returned error can't find the container with id 92d5185c096547877eb5c9b2a88eaa105c97005210d707e28dcf0b893e023b8f Mar 18 18:10:41.439340 master-0 kubenswrapper[30278]: I0318 18:10:41.439258 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05e7d2c9a162447375c640c3bf90c6fd","Type":"ContainerStarted","Data":"92d5185c096547877eb5c9b2a88eaa105c97005210d707e28dcf0b893e023b8f"} Mar 18 18:10:43.462870 master-0 kubenswrapper[30278]: I0318 18:10:43.462655 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05e7d2c9a162447375c640c3bf90c6fd","Type":"ContainerStarted","Data":"9d4a2150397e3165cfc5b4c40070920b8de1a7ad7c2c36463e3c48645662801a"} Mar 18 18:10:46.487931 master-0 kubenswrapper[30278]: I0318 18:10:46.487620 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05e7d2c9a162447375c640c3bf90c6fd","Type":"ContainerStarted","Data":"fa9a54edac3be328e46f51cd09b0d6d30236ed93d60471d9d652bded5e3c4ade"} Mar 18 18:10:49.509705 master-0 kubenswrapper[30278]: I0318 18:10:49.509630 30278 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05e7d2c9a162447375c640c3bf90c6fd","Type":"ContainerStarted","Data":"80f2f76081d774bd2bafc0f4772434335d76832e6d5bb41b14dac73658ec1690"} Mar 18 18:10:51.540946 master-0 kubenswrapper[30278]: I0318 18:10:51.538322 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05e7d2c9a162447375c640c3bf90c6fd","Type":"ContainerStarted","Data":"99bcda4337e56966076f0c041cdfd809c53a84d4d242bb1d332bfcbd4e31ff4e"} Mar 18 18:10:51.567977 master-0 kubenswrapper[30278]: I0318 18:10:51.567885 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=10.567855541 podStartE2EDuration="10.567855541s" podCreationTimestamp="2026-03-18 18:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:10:51.562311802 +0000 UTC m=+620.729496397" watchObservedRunningTime="2026-03-18 18:10:51.567855541 +0000 UTC m=+620.735040136" Mar 18 18:11:01.273568 master-0 kubenswrapper[30278]: I0318 18:11:01.273478 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:11:01.273568 master-0 kubenswrapper[30278]: I0318 18:11:01.273561 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:11:01.273568 master-0 kubenswrapper[30278]: I0318 18:11:01.273585 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:11:01.274985 master-0 kubenswrapper[30278]: I0318 18:11:01.273603 30278 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:11:01.274985 master-0 kubenswrapper[30278]: I0318 18:11:01.273909 30278 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 18:11:01.274985 master-0 kubenswrapper[30278]: I0318 18:11:01.273974 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="05e7d2c9a162447375c640c3bf90c6fd" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 18:11:01.281550 master-0 kubenswrapper[30278]: I0318 18:11:01.281507 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:11:01.624096 master-0 kubenswrapper[30278]: I0318 18:11:01.624021 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:11:11.273389 master-0 kubenswrapper[30278]: I0318 18:11:11.273232 30278 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 18:11:11.273389 master-0 kubenswrapper[30278]: I0318 18:11:11.273376 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
podUID="05e7d2c9a162447375c640c3bf90c6fd" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 18:11:21.273847 master-0 kubenswrapper[30278]: I0318 18:11:21.273762 30278 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 18:11:21.274579 master-0 kubenswrapper[30278]: I0318 18:11:21.273870 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="05e7d2c9a162447375c640c3bf90c6fd" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 18:11:21.274579 master-0 kubenswrapper[30278]: I0318 18:11:21.273957 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:11:21.275022 master-0 kubenswrapper[30278]: I0318 18:11:21.274926 30278 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"9d4a2150397e3165cfc5b4c40070920b8de1a7ad7c2c36463e3c48645662801a"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 18 18:11:21.275355 master-0 kubenswrapper[30278]: I0318 18:11:21.275222 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="05e7d2c9a162447375c640c3bf90c6fd" 
containerName="kube-controller-manager" containerID="cri-o://9d4a2150397e3165cfc5b4c40070920b8de1a7ad7c2c36463e3c48645662801a" gracePeriod=30 Mar 18 18:11:52.073225 master-0 kubenswrapper[30278]: I0318 18:11:52.073165 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_05e7d2c9a162447375c640c3bf90c6fd/kube-controller-manager/0.log" Mar 18 18:11:52.073777 master-0 kubenswrapper[30278]: I0318 18:11:52.073244 30278 generic.go:334] "Generic (PLEG): container finished" podID="05e7d2c9a162447375c640c3bf90c6fd" containerID="9d4a2150397e3165cfc5b4c40070920b8de1a7ad7c2c36463e3c48645662801a" exitCode=137 Mar 18 18:11:52.073777 master-0 kubenswrapper[30278]: I0318 18:11:52.073304 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05e7d2c9a162447375c640c3bf90c6fd","Type":"ContainerDied","Data":"9d4a2150397e3165cfc5b4c40070920b8de1a7ad7c2c36463e3c48645662801a"} Mar 18 18:11:53.088155 master-0 kubenswrapper[30278]: I0318 18:11:53.088038 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_05e7d2c9a162447375c640c3bf90c6fd/kube-controller-manager/0.log" Mar 18 18:11:53.088155 master-0 kubenswrapper[30278]: I0318 18:11:53.088098 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"05e7d2c9a162447375c640c3bf90c6fd","Type":"ContainerStarted","Data":"42e0851427813814c9617d5a2cccf910940953a58633c34dd66d84c186a6a095"} Mar 18 18:12:01.272806 master-0 kubenswrapper[30278]: I0318 18:12:01.272681 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:12:01.273934 master-0 kubenswrapper[30278]: I0318 18:12:01.273477 30278 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:12:01.278207 master-0 kubenswrapper[30278]: I0318 18:12:01.278139 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:12:02.180722 master-0 kubenswrapper[30278]: I0318 18:12:02.180611 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 18:12:10.966096 master-0 kubenswrapper[30278]: I0318 18:12:10.965910 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf"] Mar 18 18:12:10.966869 master-0 kubenswrapper[30278]: E0318 18:12:10.966244 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14ce9a69-2bf5-4809-90d9-b0b122aa11e5" containerName="installer" Mar 18 18:12:10.966869 master-0 kubenswrapper[30278]: I0318 18:12:10.966258 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="14ce9a69-2bf5-4809-90d9-b0b122aa11e5" containerName="installer" Mar 18 18:12:10.966869 master-0 kubenswrapper[30278]: I0318 18:12:10.966438 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ce9a69-2bf5-4809-90d9-b0b122aa11e5" containerName="installer" Mar 18 18:12:10.968937 master-0 kubenswrapper[30278]: I0318 18:12:10.968678 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" Mar 18 18:12:10.982225 master-0 kubenswrapper[30278]: I0318 18:12:10.982165 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf"] Mar 18 18:12:11.062595 master-0 kubenswrapper[30278]: I0318 18:12:11.062068 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f25853ca-a2c8-4bdc-9237-10df91ac04f7-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf\" (UID: \"f25853ca-a2c8-4bdc-9237-10df91ac04f7\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" Mar 18 18:12:11.062595 master-0 kubenswrapper[30278]: I0318 18:12:11.062532 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62dbk\" (UniqueName: \"kubernetes.io/projected/f25853ca-a2c8-4bdc-9237-10df91ac04f7-kube-api-access-62dbk\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf\" (UID: \"f25853ca-a2c8-4bdc-9237-10df91ac04f7\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" Mar 18 18:12:11.062595 master-0 kubenswrapper[30278]: I0318 18:12:11.062562 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f25853ca-a2c8-4bdc-9237-10df91ac04f7-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf\" (UID: \"f25853ca-a2c8-4bdc-9237-10df91ac04f7\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" Mar 18 18:12:11.163220 master-0 kubenswrapper[30278]: I0318 18:12:11.163126 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f25853ca-a2c8-4bdc-9237-10df91ac04f7-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf\" (UID: \"f25853ca-a2c8-4bdc-9237-10df91ac04f7\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" Mar 18 18:12:11.163220 master-0 kubenswrapper[30278]: I0318 18:12:11.163224 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62dbk\" (UniqueName: \"kubernetes.io/projected/f25853ca-a2c8-4bdc-9237-10df91ac04f7-kube-api-access-62dbk\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf\" (UID: \"f25853ca-a2c8-4bdc-9237-10df91ac04f7\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" Mar 18 18:12:11.163635 master-0 kubenswrapper[30278]: I0318 18:12:11.163595 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f25853ca-a2c8-4bdc-9237-10df91ac04f7-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf\" (UID: \"f25853ca-a2c8-4bdc-9237-10df91ac04f7\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" Mar 18 18:12:11.163827 master-0 kubenswrapper[30278]: I0318 18:12:11.163778 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f25853ca-a2c8-4bdc-9237-10df91ac04f7-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf\" (UID: \"f25853ca-a2c8-4bdc-9237-10df91ac04f7\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" Mar 18 18:12:11.164062 master-0 kubenswrapper[30278]: I0318 18:12:11.164027 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f25853ca-a2c8-4bdc-9237-10df91ac04f7-util\") pod 
\"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf\" (UID: \"f25853ca-a2c8-4bdc-9237-10df91ac04f7\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" Mar 18 18:12:11.179743 master-0 kubenswrapper[30278]: I0318 18:12:11.179684 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62dbk\" (UniqueName: \"kubernetes.io/projected/f25853ca-a2c8-4bdc-9237-10df91ac04f7-kube-api-access-62dbk\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf\" (UID: \"f25853ca-a2c8-4bdc-9237-10df91ac04f7\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" Mar 18 18:12:11.288681 master-0 kubenswrapper[30278]: I0318 18:12:11.288527 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" Mar 18 18:12:11.767616 master-0 kubenswrapper[30278]: I0318 18:12:11.767560 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf"] Mar 18 18:12:11.780844 master-0 kubenswrapper[30278]: W0318 18:12:11.780794 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf25853ca_a2c8_4bdc_9237_10df91ac04f7.slice/crio-2e10c72df08f6373f54643e04080b225fa34fb7c356b8974e9ecec1159350447 WatchSource:0}: Error finding container 2e10c72df08f6373f54643e04080b225fa34fb7c356b8974e9ecec1159350447: Status 404 returned error can't find the container with id 2e10c72df08f6373f54643e04080b225fa34fb7c356b8974e9ecec1159350447 Mar 18 18:12:12.272734 master-0 kubenswrapper[30278]: I0318 18:12:12.272594 30278 generic.go:334] "Generic (PLEG): container finished" podID="f25853ca-a2c8-4bdc-9237-10df91ac04f7" containerID="0180cfa0b5bb5742b888cf9b4a29585fa478c0ebea758d431e4896c7bc812507" exitCode=0 Mar 
18 18:12:12.272734 master-0 kubenswrapper[30278]: I0318 18:12:12.272662 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" event={"ID":"f25853ca-a2c8-4bdc-9237-10df91ac04f7","Type":"ContainerDied","Data":"0180cfa0b5bb5742b888cf9b4a29585fa478c0ebea758d431e4896c7bc812507"} Mar 18 18:12:12.272734 master-0 kubenswrapper[30278]: I0318 18:12:12.272697 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" event={"ID":"f25853ca-a2c8-4bdc-9237-10df91ac04f7","Type":"ContainerStarted","Data":"2e10c72df08f6373f54643e04080b225fa34fb7c356b8974e9ecec1159350447"} Mar 18 18:12:16.309254 master-0 kubenswrapper[30278]: I0318 18:12:16.309202 30278 generic.go:334] "Generic (PLEG): container finished" podID="f25853ca-a2c8-4bdc-9237-10df91ac04f7" containerID="3532faa729139023383d12f2b9fcbc9b1fea886e0258aa9ae21bcad03d45dacd" exitCode=0 Mar 18 18:12:16.310085 master-0 kubenswrapper[30278]: I0318 18:12:16.309295 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" event={"ID":"f25853ca-a2c8-4bdc-9237-10df91ac04f7","Type":"ContainerDied","Data":"3532faa729139023383d12f2b9fcbc9b1fea886e0258aa9ae21bcad03d45dacd"} Mar 18 18:12:17.323912 master-0 kubenswrapper[30278]: I0318 18:12:17.323840 30278 generic.go:334] "Generic (PLEG): container finished" podID="f25853ca-a2c8-4bdc-9237-10df91ac04f7" containerID="7ee81f9784c4aeae6781f6f8fa9629e2feac8b7f56d701989abde0c31d818e57" exitCode=0 Mar 18 18:12:17.324521 master-0 kubenswrapper[30278]: I0318 18:12:17.323917 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" 
event={"ID":"f25853ca-a2c8-4bdc-9237-10df91ac04f7","Type":"ContainerDied","Data":"7ee81f9784c4aeae6781f6f8fa9629e2feac8b7f56d701989abde0c31d818e57"} Mar 18 18:12:18.647988 master-0 kubenswrapper[30278]: I0318 18:12:18.647935 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" Mar 18 18:12:18.703586 master-0 kubenswrapper[30278]: I0318 18:12:18.703494 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62dbk\" (UniqueName: \"kubernetes.io/projected/f25853ca-a2c8-4bdc-9237-10df91ac04f7-kube-api-access-62dbk\") pod \"f25853ca-a2c8-4bdc-9237-10df91ac04f7\" (UID: \"f25853ca-a2c8-4bdc-9237-10df91ac04f7\") " Mar 18 18:12:18.703586 master-0 kubenswrapper[30278]: I0318 18:12:18.703609 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f25853ca-a2c8-4bdc-9237-10df91ac04f7-bundle\") pod \"f25853ca-a2c8-4bdc-9237-10df91ac04f7\" (UID: \"f25853ca-a2c8-4bdc-9237-10df91ac04f7\") " Mar 18 18:12:18.704119 master-0 kubenswrapper[30278]: I0318 18:12:18.703734 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f25853ca-a2c8-4bdc-9237-10df91ac04f7-util\") pod \"f25853ca-a2c8-4bdc-9237-10df91ac04f7\" (UID: \"f25853ca-a2c8-4bdc-9237-10df91ac04f7\") " Mar 18 18:12:18.705038 master-0 kubenswrapper[30278]: I0318 18:12:18.704928 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f25853ca-a2c8-4bdc-9237-10df91ac04f7-bundle" (OuterVolumeSpecName: "bundle") pod "f25853ca-a2c8-4bdc-9237-10df91ac04f7" (UID: "f25853ca-a2c8-4bdc-9237-10df91ac04f7"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:12:18.708053 master-0 kubenswrapper[30278]: I0318 18:12:18.707968 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f25853ca-a2c8-4bdc-9237-10df91ac04f7-kube-api-access-62dbk" (OuterVolumeSpecName: "kube-api-access-62dbk") pod "f25853ca-a2c8-4bdc-9237-10df91ac04f7" (UID: "f25853ca-a2c8-4bdc-9237-10df91ac04f7"). InnerVolumeSpecName "kube-api-access-62dbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:12:18.718162 master-0 kubenswrapper[30278]: I0318 18:12:18.716832 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f25853ca-a2c8-4bdc-9237-10df91ac04f7-util" (OuterVolumeSpecName: "util") pod "f25853ca-a2c8-4bdc-9237-10df91ac04f7" (UID: "f25853ca-a2c8-4bdc-9237-10df91ac04f7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:12:18.805625 master-0 kubenswrapper[30278]: I0318 18:12:18.805522 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62dbk\" (UniqueName: \"kubernetes.io/projected/f25853ca-a2c8-4bdc-9237-10df91ac04f7-kube-api-access-62dbk\") on node \"master-0\" DevicePath \"\"" Mar 18 18:12:18.805625 master-0 kubenswrapper[30278]: I0318 18:12:18.805594 30278 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f25853ca-a2c8-4bdc-9237-10df91ac04f7-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:12:18.805625 master-0 kubenswrapper[30278]: I0318 18:12:18.805609 30278 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f25853ca-a2c8-4bdc-9237-10df91ac04f7-util\") on node \"master-0\" DevicePath \"\"" Mar 18 18:12:19.346573 master-0 kubenswrapper[30278]: I0318 18:12:19.346484 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" event={"ID":"f25853ca-a2c8-4bdc-9237-10df91ac04f7","Type":"ContainerDied","Data":"2e10c72df08f6373f54643e04080b225fa34fb7c356b8974e9ecec1159350447"} Mar 18 18:12:19.346573 master-0 kubenswrapper[30278]: I0318 18:12:19.346569 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e10c72df08f6373f54643e04080b225fa34fb7c356b8974e9ecec1159350447" Mar 18 18:12:19.346891 master-0 kubenswrapper[30278]: I0318 18:12:19.346594 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4gd5rf" Mar 18 18:12:28.611602 master-0 kubenswrapper[30278]: I0318 18:12:28.611529 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-fb9bb8dcb-p7wgg"] Mar 18 18:12:28.612370 master-0 kubenswrapper[30278]: E0318 18:12:28.611865 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25853ca-a2c8-4bdc-9237-10df91ac04f7" containerName="util" Mar 18 18:12:28.612370 master-0 kubenswrapper[30278]: I0318 18:12:28.611876 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25853ca-a2c8-4bdc-9237-10df91ac04f7" containerName="util" Mar 18 18:12:28.612370 master-0 kubenswrapper[30278]: E0318 18:12:28.611890 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25853ca-a2c8-4bdc-9237-10df91ac04f7" containerName="extract" Mar 18 18:12:28.612370 master-0 kubenswrapper[30278]: I0318 18:12:28.611896 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25853ca-a2c8-4bdc-9237-10df91ac04f7" containerName="extract" Mar 18 18:12:28.612370 master-0 kubenswrapper[30278]: E0318 18:12:28.611908 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25853ca-a2c8-4bdc-9237-10df91ac04f7" containerName="pull" Mar 18 18:12:28.612370 master-0 kubenswrapper[30278]: I0318 
18:12:28.611914 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25853ca-a2c8-4bdc-9237-10df91ac04f7" containerName="pull" Mar 18 18:12:28.612370 master-0 kubenswrapper[30278]: I0318 18:12:28.612071 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="f25853ca-a2c8-4bdc-9237-10df91ac04f7" containerName="extract" Mar 18 18:12:28.612668 master-0 kubenswrapper[30278]: I0318 18:12:28.612607 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg" Mar 18 18:12:28.615219 master-0 kubenswrapper[30278]: I0318 18:12:28.615171 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Mar 18 18:12:28.615356 master-0 kubenswrapper[30278]: I0318 18:12:28.615213 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Mar 18 18:12:28.615397 master-0 kubenswrapper[30278]: I0318 18:12:28.615171 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Mar 18 18:12:28.615655 master-0 kubenswrapper[30278]: I0318 18:12:28.615624 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Mar 18 18:12:28.616244 master-0 kubenswrapper[30278]: I0318 18:12:28.616214 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Mar 18 18:12:28.634573 master-0 kubenswrapper[30278]: I0318 18:12:28.634488 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-fb9bb8dcb-p7wgg"] Mar 18 18:12:28.689189 master-0 kubenswrapper[30278]: I0318 18:12:28.689111 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5-webhook-cert\") pod 
\"lvms-operator-fb9bb8dcb-p7wgg\" (UID: \"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5\") " pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg" Mar 18 18:12:28.689189 master-0 kubenswrapper[30278]: I0318 18:12:28.689195 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5-metrics-cert\") pod \"lvms-operator-fb9bb8dcb-p7wgg\" (UID: \"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5\") " pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg" Mar 18 18:12:28.689521 master-0 kubenswrapper[30278]: I0318 18:12:28.689231 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5-socket-dir\") pod \"lvms-operator-fb9bb8dcb-p7wgg\" (UID: \"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5\") " pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg" Mar 18 18:12:28.689521 master-0 kubenswrapper[30278]: I0318 18:12:28.689350 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ntnf\" (UniqueName: \"kubernetes.io/projected/d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5-kube-api-access-4ntnf\") pod \"lvms-operator-fb9bb8dcb-p7wgg\" (UID: \"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5\") " pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg" Mar 18 18:12:28.689521 master-0 kubenswrapper[30278]: I0318 18:12:28.689415 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5-apiservice-cert\") pod \"lvms-operator-fb9bb8dcb-p7wgg\" (UID: \"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5\") " pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg" Mar 18 18:12:28.790991 master-0 kubenswrapper[30278]: I0318 18:12:28.790923 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4ntnf\" (UniqueName: \"kubernetes.io/projected/d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5-kube-api-access-4ntnf\") pod \"lvms-operator-fb9bb8dcb-p7wgg\" (UID: \"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5\") " pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg" Mar 18 18:12:28.791232 master-0 kubenswrapper[30278]: I0318 18:12:28.791012 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5-apiservice-cert\") pod \"lvms-operator-fb9bb8dcb-p7wgg\" (UID: \"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5\") " pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg" Mar 18 18:12:28.791357 master-0 kubenswrapper[30278]: I0318 18:12:28.791285 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5-webhook-cert\") pod \"lvms-operator-fb9bb8dcb-p7wgg\" (UID: \"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5\") " pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg" Mar 18 18:12:28.791421 master-0 kubenswrapper[30278]: I0318 18:12:28.791402 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5-metrics-cert\") pod \"lvms-operator-fb9bb8dcb-p7wgg\" (UID: \"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5\") " pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg" Mar 18 18:12:28.791648 master-0 kubenswrapper[30278]: I0318 18:12:28.791617 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5-socket-dir\") pod \"lvms-operator-fb9bb8dcb-p7wgg\" (UID: \"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5\") " pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg" Mar 18 18:12:28.792295 master-0 
kubenswrapper[30278]: I0318 18:12:28.792233 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5-socket-dir\") pod \"lvms-operator-fb9bb8dcb-p7wgg\" (UID: \"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5\") " pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg"
Mar 18 18:12:28.794788 master-0 kubenswrapper[30278]: I0318 18:12:28.794759 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5-apiservice-cert\") pod \"lvms-operator-fb9bb8dcb-p7wgg\" (UID: \"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5\") " pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg"
Mar 18 18:12:28.795259 master-0 kubenswrapper[30278]: I0318 18:12:28.795003 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5-webhook-cert\") pod \"lvms-operator-fb9bb8dcb-p7wgg\" (UID: \"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5\") " pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg"
Mar 18 18:12:28.796420 master-0 kubenswrapper[30278]: I0318 18:12:28.796365 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5-metrics-cert\") pod \"lvms-operator-fb9bb8dcb-p7wgg\" (UID: \"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5\") " pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg"
Mar 18 18:12:28.808206 master-0 kubenswrapper[30278]: I0318 18:12:28.808158 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ntnf\" (UniqueName: \"kubernetes.io/projected/d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5-kube-api-access-4ntnf\") pod \"lvms-operator-fb9bb8dcb-p7wgg\" (UID: \"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5\") " pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg"
Mar 18 18:12:28.929895 master-0 kubenswrapper[30278]: I0318 18:12:28.929733 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg"
Mar 18 18:12:29.414780 master-0 kubenswrapper[30278]: W0318 18:12:29.414727 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd19a7b63_0e81_4f5a_a09f_cb10fcabf9f5.slice/crio-ee6fb0d0f05e43ecbcd0a5f48b569c3792d97dd9770438dbafa9f4326c6f49ef WatchSource:0}: Error finding container ee6fb0d0f05e43ecbcd0a5f48b569c3792d97dd9770438dbafa9f4326c6f49ef: Status 404 returned error can't find the container with id ee6fb0d0f05e43ecbcd0a5f48b569c3792d97dd9770438dbafa9f4326c6f49ef
Mar 18 18:12:29.416974 master-0 kubenswrapper[30278]: I0318 18:12:29.416899 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-fb9bb8dcb-p7wgg"]
Mar 18 18:12:29.418552 master-0 kubenswrapper[30278]: I0318 18:12:29.418515 30278 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 18:12:29.430377 master-0 kubenswrapper[30278]: I0318 18:12:29.429753 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg" event={"ID":"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5","Type":"ContainerStarted","Data":"ee6fb0d0f05e43ecbcd0a5f48b569c3792d97dd9770438dbafa9f4326c6f49ef"}
Mar 18 18:12:35.491369 master-0 kubenswrapper[30278]: I0318 18:12:35.491184 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg" event={"ID":"d19a7b63-0e81-4f5a-a09f-cb10fcabf9f5","Type":"ContainerStarted","Data":"eae600b6717f154b93058737ba90d36b93033feb23981bc629ce0487ec4be6e6"}
Mar 18 18:12:35.492125 master-0 kubenswrapper[30278]: I0318 18:12:35.491396 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg"
Mar 18 18:12:35.529407 master-0 kubenswrapper[30278]: I0318 18:12:35.527788 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg" podStartSLOduration=1.84181871 podStartE2EDuration="7.527736193s" podCreationTimestamp="2026-03-18 18:12:28 +0000 UTC" firstStartedPulling="2026-03-18 18:12:29.418421701 +0000 UTC m=+718.585606316" lastFinishedPulling="2026-03-18 18:12:35.104339204 +0000 UTC m=+724.271523799" observedRunningTime="2026-03-18 18:12:35.517858877 +0000 UTC m=+724.685043472" watchObservedRunningTime="2026-03-18 18:12:35.527736193 +0000 UTC m=+724.694920808"
Mar 18 18:12:36.506020 master-0 kubenswrapper[30278]: I0318 18:12:36.505959 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-fb9bb8dcb-p7wgg"
Mar 18 18:12:36.538985 master-0 kubenswrapper[30278]: I0318 18:12:36.538875 30278 scope.go:117] "RemoveContainer" containerID="31da287ae2ee280ceb25c6d586c08cddceb6988bdd57a314f7a80a3ffba9a2ae"
Mar 18 18:12:36.581222 master-0 kubenswrapper[30278]: I0318 18:12:36.581162 30278 scope.go:117] "RemoveContainer" containerID="c974ce9bca98caf206cacb3590d85f8cb970581a77ff4f55db1e8e82efb4ff2c"
Mar 18 18:12:36.609231 master-0 kubenswrapper[30278]: I0318 18:12:36.609174 30278 scope.go:117] "RemoveContainer" containerID="dec20dd282b8a1026853916cbbdbad7fcda801cf86223b20c47a3250f052fed3"
Mar 18 18:12:36.640506 master-0 kubenswrapper[30278]: I0318 18:12:36.639052 30278 scope.go:117] "RemoveContainer" containerID="b58573729d641d7e86f1ec2365e091375bd8cf625b0a9697be4ea6b82ebe135b"
Mar 18 18:12:39.810225 master-0 kubenswrapper[30278]: I0318 18:12:39.810144 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4"]
Mar 18 18:12:39.811962 master-0 kubenswrapper[30278]: I0318 18:12:39.811928 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4"
Mar 18 18:12:39.831428 master-0 kubenswrapper[30278]: I0318 18:12:39.831371 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4"]
Mar 18 18:12:39.924628 master-0 kubenswrapper[30278]: I0318 18:12:39.924515 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4\" (UID: \"5053c4bd-4ae3-4092-ba2a-35fd700acb8c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4"
Mar 18 18:12:39.924942 master-0 kubenswrapper[30278]: I0318 18:12:39.924659 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-522rc\" (UniqueName: \"kubernetes.io/projected/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-kube-api-access-522rc\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4\" (UID: \"5053c4bd-4ae3-4092-ba2a-35fd700acb8c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4"
Mar 18 18:12:39.924942 master-0 kubenswrapper[30278]: I0318 18:12:39.924736 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4\" (UID: \"5053c4bd-4ae3-4092-ba2a-35fd700acb8c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4"
Mar 18 18:12:40.026861 master-0 kubenswrapper[30278]: I0318 18:12:40.026779 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4\" (UID: \"5053c4bd-4ae3-4092-ba2a-35fd700acb8c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4"
Mar 18 18:12:40.027114 master-0 kubenswrapper[30278]: I0318 18:12:40.026882 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-522rc\" (UniqueName: \"kubernetes.io/projected/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-kube-api-access-522rc\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4\" (UID: \"5053c4bd-4ae3-4092-ba2a-35fd700acb8c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4"
Mar 18 18:12:40.027114 master-0 kubenswrapper[30278]: I0318 18:12:40.027053 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4\" (UID: \"5053c4bd-4ae3-4092-ba2a-35fd700acb8c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4"
Mar 18 18:12:40.027696 master-0 kubenswrapper[30278]: I0318 18:12:40.027642 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4\" (UID: \"5053c4bd-4ae3-4092-ba2a-35fd700acb8c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4"
Mar 18 18:12:40.027915 master-0 kubenswrapper[30278]: I0318 18:12:40.027878 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4\" (UID: \"5053c4bd-4ae3-4092-ba2a-35fd700acb8c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4"
Mar 18 18:12:40.048249 master-0 kubenswrapper[30278]: I0318 18:12:40.048167 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-522rc\" (UniqueName: \"kubernetes.io/projected/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-kube-api-access-522rc\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4\" (UID: \"5053c4bd-4ae3-4092-ba2a-35fd700acb8c\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4"
Mar 18 18:12:40.130921 master-0 kubenswrapper[30278]: I0318 18:12:40.130773 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4"
Mar 18 18:12:40.605573 master-0 kubenswrapper[30278]: I0318 18:12:40.603958 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4"]
Mar 18 18:12:40.606934 master-0 kubenswrapper[30278]: W0318 18:12:40.606872 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5053c4bd_4ae3_4092_ba2a_35fd700acb8c.slice/crio-e4b9360d9420618027696b129dcec8b06e60eff2f1ecc58f4d49f6a02e233bc0 WatchSource:0}: Error finding container e4b9360d9420618027696b129dcec8b06e60eff2f1ecc58f4d49f6a02e233bc0: Status 404 returned error can't find the container with id e4b9360d9420618027696b129dcec8b06e60eff2f1ecc58f4d49f6a02e233bc0
Mar 18 18:12:40.619049 master-0 kubenswrapper[30278]: I0318 18:12:40.618979 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"]
Mar 18 18:12:40.624154 master-0 kubenswrapper[30278]: I0318 18:12:40.624066 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"
Mar 18 18:12:40.636778 master-0 kubenswrapper[30278]: I0318 18:12:40.636711 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"]
Mar 18 18:12:40.643761 master-0 kubenswrapper[30278]: I0318 18:12:40.643682 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc\" (UID: \"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"
Mar 18 18:12:40.644386 master-0 kubenswrapper[30278]: I0318 18:12:40.644068 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8db4n\" (UniqueName: \"kubernetes.io/projected/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-kube-api-access-8db4n\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc\" (UID: \"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"
Mar 18 18:12:40.644386 master-0 kubenswrapper[30278]: I0318 18:12:40.644147 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc\" (UID: \"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"
Mar 18 18:12:40.747052 master-0 kubenswrapper[30278]: I0318 18:12:40.746961 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc\" (UID: \"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"
Mar 18 18:12:40.747492 master-0 kubenswrapper[30278]: I0318 18:12:40.747271 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8db4n\" (UniqueName: \"kubernetes.io/projected/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-kube-api-access-8db4n\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc\" (UID: \"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"
Mar 18 18:12:40.747492 master-0 kubenswrapper[30278]: I0318 18:12:40.747325 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc\" (UID: \"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"
Mar 18 18:12:40.748082 master-0 kubenswrapper[30278]: I0318 18:12:40.748042 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc\" (UID: \"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"
Mar 18 18:12:40.748367 master-0 kubenswrapper[30278]: I0318 18:12:40.748298 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc\" (UID: \"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"
Mar 18 18:12:40.771811 master-0 kubenswrapper[30278]: I0318 18:12:40.771762 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8db4n\" (UniqueName: \"kubernetes.io/projected/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-kube-api-access-8db4n\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc\" (UID: \"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"
Mar 18 18:12:40.963490 master-0 kubenswrapper[30278]: I0318 18:12:40.963260 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"
Mar 18 18:12:41.463972 master-0 kubenswrapper[30278]: I0318 18:12:41.463891 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"]
Mar 18 18:12:41.466668 master-0 kubenswrapper[30278]: W0318 18:12:41.466593 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0f1105a_8c32_44a8_a9b9_d0b7a4d97646.slice/crio-358d2882f0bd307fb8bd0fe64c1bb3e662f8430521caf5aa6097864407c64c33 WatchSource:0}: Error finding container 358d2882f0bd307fb8bd0fe64c1bb3e662f8430521caf5aa6097864407c64c33: Status 404 returned error can't find the container with id 358d2882f0bd307fb8bd0fe64c1bb3e662f8430521caf5aa6097864407c64c33
Mar 18 18:12:41.544942 master-0 kubenswrapper[30278]: I0318 18:12:41.544875 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc" event={"ID":"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646","Type":"ContainerStarted","Data":"358d2882f0bd307fb8bd0fe64c1bb3e662f8430521caf5aa6097864407c64c33"}
Mar 18 18:12:41.547897 master-0 kubenswrapper[30278]: I0318 18:12:41.547869 30278 generic.go:334] "Generic (PLEG): container finished" podID="5053c4bd-4ae3-4092-ba2a-35fd700acb8c" containerID="7022a646a2afe1ef64316ee5a0964191a608cbce13a0e2e47e75942493eb0ed1" exitCode=0
Mar 18 18:12:41.548227 master-0 kubenswrapper[30278]: I0318 18:12:41.548212 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4" event={"ID":"5053c4bd-4ae3-4092-ba2a-35fd700acb8c","Type":"ContainerDied","Data":"7022a646a2afe1ef64316ee5a0964191a608cbce13a0e2e47e75942493eb0ed1"}
Mar 18 18:12:41.548372 master-0 kubenswrapper[30278]: I0318 18:12:41.548319 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4" event={"ID":"5053c4bd-4ae3-4092-ba2a-35fd700acb8c","Type":"ContainerStarted","Data":"e4b9360d9420618027696b129dcec8b06e60eff2f1ecc58f4d49f6a02e233bc0"}
Mar 18 18:12:41.818751 master-0 kubenswrapper[30278]: I0318 18:12:41.818668 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx"]
Mar 18 18:12:41.820761 master-0 kubenswrapper[30278]: I0318 18:12:41.820710 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx"
Mar 18 18:12:41.828848 master-0 kubenswrapper[30278]: I0318 18:12:41.828763 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx"]
Mar 18 18:12:41.871021 master-0 kubenswrapper[30278]: I0318 18:12:41.870472 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5669f96-ae3d-49f7-8230-a510fec85d74-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx\" (UID: \"d5669f96-ae3d-49f7-8230-a510fec85d74\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx"
Mar 18 18:12:41.872030 master-0 kubenswrapper[30278]: I0318 18:12:41.870692 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s66v4\" (UniqueName: \"kubernetes.io/projected/d5669f96-ae3d-49f7-8230-a510fec85d74-kube-api-access-s66v4\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx\" (UID: \"d5669f96-ae3d-49f7-8230-a510fec85d74\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx"
Mar 18 18:12:41.872030 master-0 kubenswrapper[30278]: I0318 18:12:41.871899 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5669f96-ae3d-49f7-8230-a510fec85d74-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx\" (UID: \"d5669f96-ae3d-49f7-8230-a510fec85d74\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx"
Mar 18 18:12:41.974863 master-0 kubenswrapper[30278]: I0318 18:12:41.974784 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5669f96-ae3d-49f7-8230-a510fec85d74-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx\" (UID: \"d5669f96-ae3d-49f7-8230-a510fec85d74\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx"
Mar 18 18:12:41.975939 master-0 kubenswrapper[30278]: I0318 18:12:41.975509 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5669f96-ae3d-49f7-8230-a510fec85d74-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx\" (UID: \"d5669f96-ae3d-49f7-8230-a510fec85d74\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx"
Mar 18 18:12:41.975939 master-0 kubenswrapper[30278]: I0318 18:12:41.975750 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s66v4\" (UniqueName: \"kubernetes.io/projected/d5669f96-ae3d-49f7-8230-a510fec85d74-kube-api-access-s66v4\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx\" (UID: \"d5669f96-ae3d-49f7-8230-a510fec85d74\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx"
Mar 18 18:12:41.975939 master-0 kubenswrapper[30278]: I0318 18:12:41.975890 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5669f96-ae3d-49f7-8230-a510fec85d74-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx\" (UID: \"d5669f96-ae3d-49f7-8230-a510fec85d74\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx"
Mar 18 18:12:41.976682 master-0 kubenswrapper[30278]: I0318 18:12:41.976630 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5669f96-ae3d-49f7-8230-a510fec85d74-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx\" (UID: \"d5669f96-ae3d-49f7-8230-a510fec85d74\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx"
Mar 18 18:12:41.995628 master-0 kubenswrapper[30278]: I0318 18:12:41.995572 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s66v4\" (UniqueName: \"kubernetes.io/projected/d5669f96-ae3d-49f7-8230-a510fec85d74-kube-api-access-s66v4\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx\" (UID: \"d5669f96-ae3d-49f7-8230-a510fec85d74\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx"
Mar 18 18:12:42.140762 master-0 kubenswrapper[30278]: I0318 18:12:42.140568 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx"
Mar 18 18:12:42.562548 master-0 kubenswrapper[30278]: I0318 18:12:42.562475 30278 generic.go:334] "Generic (PLEG): container finished" podID="e0f1105a-8c32-44a8-a9b9-d0b7a4d97646" containerID="04be864a783bed44297d39515178d36344c0dbd5e25d61a2185b15cb26abb04c" exitCode=0
Mar 18 18:12:42.562944 master-0 kubenswrapper[30278]: I0318 18:12:42.562565 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc" event={"ID":"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646","Type":"ContainerDied","Data":"04be864a783bed44297d39515178d36344c0dbd5e25d61a2185b15cb26abb04c"}
Mar 18 18:12:42.616101 master-0 kubenswrapper[30278]: I0318 18:12:42.615862 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx"]
Mar 18 18:12:42.626142 master-0 kubenswrapper[30278]: W0318 18:12:42.626088 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5669f96_ae3d_49f7_8230_a510fec85d74.slice/crio-2421c4c812e10de9b7896e7aba578cc3aefb2d94f1dad7ad833fccbc26d7eb50 WatchSource:0}: Error finding container 2421c4c812e10de9b7896e7aba578cc3aefb2d94f1dad7ad833fccbc26d7eb50: Status 404 returned error can't find the container with id 2421c4c812e10de9b7896e7aba578cc3aefb2d94f1dad7ad833fccbc26d7eb50
Mar 18 18:12:43.576407 master-0 kubenswrapper[30278]: I0318 18:12:43.575761 30278 generic.go:334] "Generic (PLEG): container finished" podID="d5669f96-ae3d-49f7-8230-a510fec85d74" containerID="d0bf1cf978f0667d646e988c1b6dcca9820c99226948f6bc2180ef5b4a719f42" exitCode=0
Mar 18 18:12:43.576407 master-0 kubenswrapper[30278]: I0318 18:12:43.575857 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx" event={"ID":"d5669f96-ae3d-49f7-8230-a510fec85d74","Type":"ContainerDied","Data":"d0bf1cf978f0667d646e988c1b6dcca9820c99226948f6bc2180ef5b4a719f42"}
Mar 18 18:12:43.576407 master-0 kubenswrapper[30278]: I0318 18:12:43.575914 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx" event={"ID":"d5669f96-ae3d-49f7-8230-a510fec85d74","Type":"ContainerStarted","Data":"2421c4c812e10de9b7896e7aba578cc3aefb2d94f1dad7ad833fccbc26d7eb50"}
Mar 18 18:12:44.594342 master-0 kubenswrapper[30278]: I0318 18:12:44.594232 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc" event={"ID":"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646","Type":"ContainerStarted","Data":"d19fbe9c4ff2a793947eb9be0b035a7a95a960e8932ff5fe9f3cab52cfee7f71"}
Mar 18 18:12:45.605782 master-0 kubenswrapper[30278]: I0318 18:12:45.605721 30278 generic.go:334] "Generic (PLEG): container finished" podID="e0f1105a-8c32-44a8-a9b9-d0b7a4d97646" containerID="d19fbe9c4ff2a793947eb9be0b035a7a95a960e8932ff5fe9f3cab52cfee7f71" exitCode=0
Mar 18 18:12:45.607442 master-0 kubenswrapper[30278]: I0318 18:12:45.605774 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc" event={"ID":"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646","Type":"ContainerDied","Data":"d19fbe9c4ff2a793947eb9be0b035a7a95a960e8932ff5fe9f3cab52cfee7f71"}
Mar 18 18:12:46.636315 master-0 kubenswrapper[30278]: I0318 18:12:46.636204 30278 generic.go:334] "Generic (PLEG): container finished" podID="e0f1105a-8c32-44a8-a9b9-d0b7a4d97646" containerID="939980f3578b4c022e796138e67eececdaed1da322acb2be58f420615004b170" exitCode=0
Mar 18 18:12:46.636315 master-0 kubenswrapper[30278]: I0318 18:12:46.636268 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc" event={"ID":"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646","Type":"ContainerDied","Data":"939980f3578b4c022e796138e67eececdaed1da322acb2be58f420615004b170"}
Mar 18 18:12:47.435925 master-0 kubenswrapper[30278]: I0318 18:12:47.429696 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8"]
Mar 18 18:12:47.435925 master-0 kubenswrapper[30278]: I0318 18:12:47.431253 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8"
Mar 18 18:12:47.440082 master-0 kubenswrapper[30278]: I0318 18:12:47.440021 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8"]
Mar 18 18:12:47.510365 master-0 kubenswrapper[30278]: I0318 18:12:47.510309 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhnch\" (UniqueName: \"kubernetes.io/projected/f282f37d-0392-49e7-89e6-21b4664587c4-kube-api-access-bhnch\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8\" (UID: \"f282f37d-0392-49e7-89e6-21b4664587c4\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8"
Mar 18 18:12:47.510618 master-0 kubenswrapper[30278]: I0318 18:12:47.510447 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f282f37d-0392-49e7-89e6-21b4664587c4-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8\" (UID: \"f282f37d-0392-49e7-89e6-21b4664587c4\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8"
Mar 18 18:12:47.510806 master-0 kubenswrapper[30278]: I0318 18:12:47.510739 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f282f37d-0392-49e7-89e6-21b4664587c4-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8\" (UID: \"f282f37d-0392-49e7-89e6-21b4664587c4\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8"
Mar 18 18:12:47.613128 master-0 kubenswrapper[30278]: I0318 18:12:47.613012 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhnch\" (UniqueName: \"kubernetes.io/projected/f282f37d-0392-49e7-89e6-21b4664587c4-kube-api-access-bhnch\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8\" (UID: \"f282f37d-0392-49e7-89e6-21b4664587c4\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8"
Mar 18 18:12:47.613443 master-0 kubenswrapper[30278]: I0318 18:12:47.613203 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f282f37d-0392-49e7-89e6-21b4664587c4-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8\" (UID: \"f282f37d-0392-49e7-89e6-21b4664587c4\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8"
Mar 18 18:12:47.613443 master-0 kubenswrapper[30278]: I0318 18:12:47.613307 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f282f37d-0392-49e7-89e6-21b4664587c4-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8\" (UID: \"f282f37d-0392-49e7-89e6-21b4664587c4\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8"
Mar 18 18:12:47.614008 master-0 kubenswrapper[30278]: I0318 18:12:47.613946 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f282f37d-0392-49e7-89e6-21b4664587c4-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8\" (UID: \"f282f37d-0392-49e7-89e6-21b4664587c4\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8"
Mar 18 18:12:47.614162 master-0 kubenswrapper[30278]: I0318 18:12:47.614096 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f282f37d-0392-49e7-89e6-21b4664587c4-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8\" (UID: \"f282f37d-0392-49e7-89e6-21b4664587c4\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8"
Mar 18 18:12:47.644473 master-0 kubenswrapper[30278]: I0318 18:12:47.644404 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhnch\" (UniqueName: \"kubernetes.io/projected/f282f37d-0392-49e7-89e6-21b4664587c4-kube-api-access-bhnch\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8\" (UID: \"f282f37d-0392-49e7-89e6-21b4664587c4\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8"
Mar 18 18:12:47.757297 master-0 kubenswrapper[30278]: I0318 18:12:47.757198 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8"
Mar 18 18:12:48.184098 master-0 kubenswrapper[30278]: I0318 18:12:48.184035 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"
Mar 18 18:12:48.223753 master-0 kubenswrapper[30278]: I0318 18:12:48.223697 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-util\") pod \"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646\" (UID: \"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646\") "
Mar 18 18:12:48.223964 master-0 kubenswrapper[30278]: I0318 18:12:48.223940 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8db4n\" (UniqueName: \"kubernetes.io/projected/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-kube-api-access-8db4n\") pod \"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646\" (UID: \"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646\") "
Mar 18 18:12:48.224027 master-0 kubenswrapper[30278]: I0318 18:12:48.224008 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-bundle\") pod \"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646\" (UID: \"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646\") "
Mar 18 18:12:48.226069 master-0 kubenswrapper[30278]: I0318 18:12:48.225990 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-bundle" (OuterVolumeSpecName: "bundle") pod "e0f1105a-8c32-44a8-a9b9-d0b7a4d97646" (UID: "e0f1105a-8c32-44a8-a9b9-d0b7a4d97646"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 18:12:48.228809 master-0 kubenswrapper[30278]: I0318 18:12:48.228748 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-kube-api-access-8db4n" (OuterVolumeSpecName: "kube-api-access-8db4n") pod "e0f1105a-8c32-44a8-a9b9-d0b7a4d97646" (UID: "e0f1105a-8c32-44a8-a9b9-d0b7a4d97646"). InnerVolumeSpecName "kube-api-access-8db4n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:12:48.233367 master-0 kubenswrapper[30278]: I0318 18:12:48.233288 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-util" (OuterVolumeSpecName: "util") pod "e0f1105a-8c32-44a8-a9b9-d0b7a4d97646" (UID: "e0f1105a-8c32-44a8-a9b9-d0b7a4d97646"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 18:12:48.332210 master-0 kubenswrapper[30278]: I0318 18:12:48.332156 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8db4n\" (UniqueName: \"kubernetes.io/projected/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-kube-api-access-8db4n\") on node \"master-0\" DevicePath \"\""
Mar 18 18:12:48.332210 master-0 kubenswrapper[30278]: I0318 18:12:48.332204 30278 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:12:48.333136 master-0 kubenswrapper[30278]: I0318 18:12:48.332217 30278 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e0f1105a-8c32-44a8-a9b9-d0b7a4d97646-util\") on node \"master-0\" DevicePath \"\""
Mar 18 18:12:48.663615 master-0 kubenswrapper[30278]: I0318 18:12:48.663473 30278 generic.go:334] "Generic (PLEG): container finished" podID="d5669f96-ae3d-49f7-8230-a510fec85d74" containerID="e8a7ab8bcbbe0f0949b61369d21475fdccad4bef085d3b5c7253c6c860535f40" exitCode=0
Mar 18 18:12:48.663615 master-0 kubenswrapper[30278]: I0318 18:12:48.663589 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx" event={"ID":"d5669f96-ae3d-49f7-8230-a510fec85d74","Type":"ContainerDied","Data":"e8a7ab8bcbbe0f0949b61369d21475fdccad4bef085d3b5c7253c6c860535f40"}
Mar 18 18:12:48.668397 master-0 kubenswrapper[30278]: I0318 18:12:48.667991 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc" event={"ID":"e0f1105a-8c32-44a8-a9b9-d0b7a4d97646","Type":"ContainerDied","Data":"358d2882f0bd307fb8bd0fe64c1bb3e662f8430521caf5aa6097864407c64c33"}
Mar 18 18:12:48.668397 master-0 kubenswrapper[30278]: I0318 18:12:48.668022 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="358d2882f0bd307fb8bd0fe64c1bb3e662f8430521caf5aa6097864407c64c33"
Mar 18 18:12:48.668397 master-0 kubenswrapper[30278]: I0318 18:12:48.668084 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tp5cc"
Mar 18 18:12:48.673739 master-0 kubenswrapper[30278]: I0318 18:12:48.673679 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4" event={"ID":"5053c4bd-4ae3-4092-ba2a-35fd700acb8c","Type":"ContainerStarted","Data":"78f6f687fb8ff46ed41cb80ab01fd1d3658b2d08fdbd863614c1159b84ac7456"}
Mar 18 18:12:48.807335 master-0 kubenswrapper[30278]: I0318 18:12:48.807225 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8"]
Mar 18 18:12:48.808721 master-0 kubenswrapper[30278]: W0318 18:12:48.808629 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf282f37d_0392_49e7_89e6_21b4664587c4.slice/crio-26f4be64f62a0c34a82dbfdc80e6c6ac47479699d09784e834badcf7d180c42b WatchSource:0}: Error finding container 26f4be64f62a0c34a82dbfdc80e6c6ac47479699d09784e834badcf7d180c42b: Status 404 returned error can't find the container with id 26f4be64f62a0c34a82dbfdc80e6c6ac47479699d09784e834badcf7d180c42b
Mar 18 18:12:49.684582 master-0 kubenswrapper[30278]: I0318 18:12:49.684530 30278 generic.go:334] "Generic (PLEG): container finished" podID="5053c4bd-4ae3-4092-ba2a-35fd700acb8c" containerID="78f6f687fb8ff46ed41cb80ab01fd1d3658b2d08fdbd863614c1159b84ac7456" exitCode=0
Mar 18 18:12:49.684972 master-0 kubenswrapper[30278]: I0318 18:12:49.684629 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4" event={"ID":"5053c4bd-4ae3-4092-ba2a-35fd700acb8c","Type":"ContainerDied","Data":"78f6f687fb8ff46ed41cb80ab01fd1d3658b2d08fdbd863614c1159b84ac7456"}
Mar 18 18:12:49.693378 master-0 kubenswrapper[30278]: I0318 18:12:49.693252 30278
generic.go:334] "Generic (PLEG): container finished" podID="d5669f96-ae3d-49f7-8230-a510fec85d74" containerID="4f2cc91a21f589503ca947d237d9484041b577b3f308ef31a1ddaaaa4a5f3778" exitCode=0 Mar 18 18:12:49.693559 master-0 kubenswrapper[30278]: I0318 18:12:49.693335 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx" event={"ID":"d5669f96-ae3d-49f7-8230-a510fec85d74","Type":"ContainerDied","Data":"4f2cc91a21f589503ca947d237d9484041b577b3f308ef31a1ddaaaa4a5f3778"} Mar 18 18:12:49.698700 master-0 kubenswrapper[30278]: I0318 18:12:49.698554 30278 generic.go:334] "Generic (PLEG): container finished" podID="f282f37d-0392-49e7-89e6-21b4664587c4" containerID="61df4a304cced2ca0ad3628c7d450e4faffa512aee9d980d8cbf45a3c3e48ee9" exitCode=0 Mar 18 18:12:49.698783 master-0 kubenswrapper[30278]: I0318 18:12:49.698686 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8" event={"ID":"f282f37d-0392-49e7-89e6-21b4664587c4","Type":"ContainerDied","Data":"61df4a304cced2ca0ad3628c7d450e4faffa512aee9d980d8cbf45a3c3e48ee9"} Mar 18 18:12:49.698783 master-0 kubenswrapper[30278]: I0318 18:12:49.698769 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8" event={"ID":"f282f37d-0392-49e7-89e6-21b4664587c4","Type":"ContainerStarted","Data":"26f4be64f62a0c34a82dbfdc80e6c6ac47479699d09784e834badcf7d180c42b"} Mar 18 18:12:50.716025 master-0 kubenswrapper[30278]: I0318 18:12:50.715844 30278 generic.go:334] "Generic (PLEG): container finished" podID="5053c4bd-4ae3-4092-ba2a-35fd700acb8c" containerID="52ceda3ee10c786ac733d7ec3e8b0f7ae4732249069087db0af6648ca321eba8" exitCode=0 Mar 18 18:12:50.717055 master-0 kubenswrapper[30278]: I0318 18:12:50.715964 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4" event={"ID":"5053c4bd-4ae3-4092-ba2a-35fd700acb8c","Type":"ContainerDied","Data":"52ceda3ee10c786ac733d7ec3e8b0f7ae4732249069087db0af6648ca321eba8"} Mar 18 18:12:51.161647 master-0 kubenswrapper[30278]: I0318 18:12:51.161595 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx" Mar 18 18:12:51.243105 master-0 kubenswrapper[30278]: I0318 18:12:51.243014 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5669f96-ae3d-49f7-8230-a510fec85d74-bundle\") pod \"d5669f96-ae3d-49f7-8230-a510fec85d74\" (UID: \"d5669f96-ae3d-49f7-8230-a510fec85d74\") " Mar 18 18:12:51.243382 master-0 kubenswrapper[30278]: I0318 18:12:51.243157 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s66v4\" (UniqueName: \"kubernetes.io/projected/d5669f96-ae3d-49f7-8230-a510fec85d74-kube-api-access-s66v4\") pod \"d5669f96-ae3d-49f7-8230-a510fec85d74\" (UID: \"d5669f96-ae3d-49f7-8230-a510fec85d74\") " Mar 18 18:12:51.243382 master-0 kubenswrapper[30278]: I0318 18:12:51.243183 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5669f96-ae3d-49f7-8230-a510fec85d74-util\") pod \"d5669f96-ae3d-49f7-8230-a510fec85d74\" (UID: \"d5669f96-ae3d-49f7-8230-a510fec85d74\") " Mar 18 18:12:51.247547 master-0 kubenswrapper[30278]: I0318 18:12:51.247325 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5669f96-ae3d-49f7-8230-a510fec85d74-bundle" (OuterVolumeSpecName: "bundle") pod "d5669f96-ae3d-49f7-8230-a510fec85d74" (UID: "d5669f96-ae3d-49f7-8230-a510fec85d74"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:12:51.251428 master-0 kubenswrapper[30278]: I0318 18:12:51.251374 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5669f96-ae3d-49f7-8230-a510fec85d74-kube-api-access-s66v4" (OuterVolumeSpecName: "kube-api-access-s66v4") pod "d5669f96-ae3d-49f7-8230-a510fec85d74" (UID: "d5669f96-ae3d-49f7-8230-a510fec85d74"). InnerVolumeSpecName "kube-api-access-s66v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:12:51.258092 master-0 kubenswrapper[30278]: I0318 18:12:51.257971 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5669f96-ae3d-49f7-8230-a510fec85d74-util" (OuterVolumeSpecName: "util") pod "d5669f96-ae3d-49f7-8230-a510fec85d74" (UID: "d5669f96-ae3d-49f7-8230-a510fec85d74"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:12:51.354401 master-0 kubenswrapper[30278]: I0318 18:12:51.354258 30278 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5669f96-ae3d-49f7-8230-a510fec85d74-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:12:51.354401 master-0 kubenswrapper[30278]: I0318 18:12:51.354356 30278 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5669f96-ae3d-49f7-8230-a510fec85d74-util\") on node \"master-0\" DevicePath \"\"" Mar 18 18:12:51.354401 master-0 kubenswrapper[30278]: I0318 18:12:51.354379 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s66v4\" (UniqueName: \"kubernetes.io/projected/d5669f96-ae3d-49f7-8230-a510fec85d74-kube-api-access-s66v4\") on node \"master-0\" DevicePath \"\"" Mar 18 18:12:51.900212 master-0 kubenswrapper[30278]: I0318 18:12:51.900109 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx" event={"ID":"d5669f96-ae3d-49f7-8230-a510fec85d74","Type":"ContainerDied","Data":"2421c4c812e10de9b7896e7aba578cc3aefb2d94f1dad7ad833fccbc26d7eb50"} Mar 18 18:12:51.900212 master-0 kubenswrapper[30278]: I0318 18:12:51.900195 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2421c4c812e10de9b7896e7aba578cc3aefb2d94f1dad7ad833fccbc26d7eb50" Mar 18 18:12:51.901055 master-0 kubenswrapper[30278]: I0318 18:12:51.900359 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874v28xx" Mar 18 18:12:51.906895 master-0 kubenswrapper[30278]: I0318 18:12:51.906302 30278 generic.go:334] "Generic (PLEG): container finished" podID="f282f37d-0392-49e7-89e6-21b4664587c4" containerID="b251a57bcc775096591285266a1a83c95747cc87cf8c5768510e117a431583ce" exitCode=0 Mar 18 18:12:51.907637 master-0 kubenswrapper[30278]: I0318 18:12:51.907594 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8" event={"ID":"f282f37d-0392-49e7-89e6-21b4664587c4","Type":"ContainerDied","Data":"b251a57bcc775096591285266a1a83c95747cc87cf8c5768510e117a431583ce"} Mar 18 18:12:52.401858 master-0 kubenswrapper[30278]: I0318 18:12:52.401804 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4" Mar 18 18:12:52.519414 master-0 kubenswrapper[30278]: I0318 18:12:52.518483 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-bundle\") pod \"5053c4bd-4ae3-4092-ba2a-35fd700acb8c\" (UID: \"5053c4bd-4ae3-4092-ba2a-35fd700acb8c\") " Mar 18 18:12:52.519414 master-0 kubenswrapper[30278]: I0318 18:12:52.518584 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-522rc\" (UniqueName: \"kubernetes.io/projected/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-kube-api-access-522rc\") pod \"5053c4bd-4ae3-4092-ba2a-35fd700acb8c\" (UID: \"5053c4bd-4ae3-4092-ba2a-35fd700acb8c\") " Mar 18 18:12:52.519414 master-0 kubenswrapper[30278]: I0318 18:12:52.518816 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-util\") pod \"5053c4bd-4ae3-4092-ba2a-35fd700acb8c\" (UID: \"5053c4bd-4ae3-4092-ba2a-35fd700acb8c\") " Mar 18 18:12:52.520104 master-0 kubenswrapper[30278]: I0318 18:12:52.520044 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-bundle" (OuterVolumeSpecName: "bundle") pod "5053c4bd-4ae3-4092-ba2a-35fd700acb8c" (UID: "5053c4bd-4ae3-4092-ba2a-35fd700acb8c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:12:52.528675 master-0 kubenswrapper[30278]: I0318 18:12:52.528611 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-kube-api-access-522rc" (OuterVolumeSpecName: "kube-api-access-522rc") pod "5053c4bd-4ae3-4092-ba2a-35fd700acb8c" (UID: "5053c4bd-4ae3-4092-ba2a-35fd700acb8c"). InnerVolumeSpecName "kube-api-access-522rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:12:52.540939 master-0 kubenswrapper[30278]: I0318 18:12:52.540822 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-util" (OuterVolumeSpecName: "util") pod "5053c4bd-4ae3-4092-ba2a-35fd700acb8c" (UID: "5053c4bd-4ae3-4092-ba2a-35fd700acb8c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:12:52.622118 master-0 kubenswrapper[30278]: I0318 18:12:52.622047 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-522rc\" (UniqueName: \"kubernetes.io/projected/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-kube-api-access-522rc\") on node \"master-0\" DevicePath \"\"" Mar 18 18:12:52.622118 master-0 kubenswrapper[30278]: I0318 18:12:52.622111 30278 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-util\") on node \"master-0\" DevicePath \"\"" Mar 18 18:12:52.622258 master-0 kubenswrapper[30278]: I0318 18:12:52.622126 30278 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5053c4bd-4ae3-4092-ba2a-35fd700acb8c-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:12:52.941883 master-0 kubenswrapper[30278]: I0318 18:12:52.941834 30278 generic.go:334] "Generic (PLEG): container finished" podID="f282f37d-0392-49e7-89e6-21b4664587c4" 
containerID="a501696ac0606eb96145ae4fffb7fd7d1035c38ebf938becbc0ee5ec3f9ac367" exitCode=0 Mar 18 18:12:52.942669 master-0 kubenswrapper[30278]: I0318 18:12:52.941958 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8" event={"ID":"f282f37d-0392-49e7-89e6-21b4664587c4","Type":"ContainerDied","Data":"a501696ac0606eb96145ae4fffb7fd7d1035c38ebf938becbc0ee5ec3f9ac367"} Mar 18 18:12:52.960881 master-0 kubenswrapper[30278]: I0318 18:12:52.960808 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4" event={"ID":"5053c4bd-4ae3-4092-ba2a-35fd700acb8c","Type":"ContainerDied","Data":"e4b9360d9420618027696b129dcec8b06e60eff2f1ecc58f4d49f6a02e233bc0"} Mar 18 18:12:52.960881 master-0 kubenswrapper[30278]: I0318 18:12:52.960886 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4b9360d9420618027696b129dcec8b06e60eff2f1ecc58f4d49f6a02e233bc0" Mar 18 18:12:52.961183 master-0 kubenswrapper[30278]: I0318 18:12:52.961157 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5k4dp4" Mar 18 18:12:54.333630 master-0 kubenswrapper[30278]: I0318 18:12:54.333586 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8" Mar 18 18:12:54.465807 master-0 kubenswrapper[30278]: I0318 18:12:54.465732 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f282f37d-0392-49e7-89e6-21b4664587c4-bundle\") pod \"f282f37d-0392-49e7-89e6-21b4664587c4\" (UID: \"f282f37d-0392-49e7-89e6-21b4664587c4\") " Mar 18 18:12:54.466119 master-0 kubenswrapper[30278]: I0318 18:12:54.465949 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f282f37d-0392-49e7-89e6-21b4664587c4-util\") pod \"f282f37d-0392-49e7-89e6-21b4664587c4\" (UID: \"f282f37d-0392-49e7-89e6-21b4664587c4\") " Mar 18 18:12:54.466119 master-0 kubenswrapper[30278]: I0318 18:12:54.466090 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhnch\" (UniqueName: \"kubernetes.io/projected/f282f37d-0392-49e7-89e6-21b4664587c4-kube-api-access-bhnch\") pod \"f282f37d-0392-49e7-89e6-21b4664587c4\" (UID: \"f282f37d-0392-49e7-89e6-21b4664587c4\") " Mar 18 18:12:54.471309 master-0 kubenswrapper[30278]: I0318 18:12:54.469387 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f282f37d-0392-49e7-89e6-21b4664587c4-kube-api-access-bhnch" (OuterVolumeSpecName: "kube-api-access-bhnch") pod "f282f37d-0392-49e7-89e6-21b4664587c4" (UID: "f282f37d-0392-49e7-89e6-21b4664587c4"). InnerVolumeSpecName "kube-api-access-bhnch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:12:54.471309 master-0 kubenswrapper[30278]: I0318 18:12:54.470728 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f282f37d-0392-49e7-89e6-21b4664587c4-bundle" (OuterVolumeSpecName: "bundle") pod "f282f37d-0392-49e7-89e6-21b4664587c4" (UID: "f282f37d-0392-49e7-89e6-21b4664587c4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:12:54.562968 master-0 kubenswrapper[30278]: I0318 18:12:54.562860 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f282f37d-0392-49e7-89e6-21b4664587c4-util" (OuterVolumeSpecName: "util") pod "f282f37d-0392-49e7-89e6-21b4664587c4" (UID: "f282f37d-0392-49e7-89e6-21b4664587c4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:12:54.572989 master-0 kubenswrapper[30278]: I0318 18:12:54.572917 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhnch\" (UniqueName: \"kubernetes.io/projected/f282f37d-0392-49e7-89e6-21b4664587c4-kube-api-access-bhnch\") on node \"master-0\" DevicePath \"\"" Mar 18 18:12:54.572989 master-0 kubenswrapper[30278]: I0318 18:12:54.572966 30278 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f282f37d-0392-49e7-89e6-21b4664587c4-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:12:54.572989 master-0 kubenswrapper[30278]: I0318 18:12:54.572980 30278 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f282f37d-0392-49e7-89e6-21b4664587c4-util\") on node \"master-0\" DevicePath \"\"" Mar 18 18:12:54.980733 master-0 kubenswrapper[30278]: I0318 18:12:54.980514 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8" 
event={"ID":"f282f37d-0392-49e7-89e6-21b4664587c4","Type":"ContainerDied","Data":"26f4be64f62a0c34a82dbfdc80e6c6ac47479699d09784e834badcf7d180c42b"} Mar 18 18:12:54.980733 master-0 kubenswrapper[30278]: I0318 18:12:54.980591 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726mjnp8" Mar 18 18:12:54.980733 master-0 kubenswrapper[30278]: I0318 18:12:54.980608 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26f4be64f62a0c34a82dbfdc80e6c6ac47479699d09784e834badcf7d180c42b" Mar 18 18:12:59.529262 master-0 kubenswrapper[30278]: I0318 18:12:59.529184 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-gvw4g"] Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: E0318 18:12:59.529601 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5053c4bd-4ae3-4092-ba2a-35fd700acb8c" containerName="extract" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: I0318 18:12:59.529621 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5053c4bd-4ae3-4092-ba2a-35fd700acb8c" containerName="extract" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: E0318 18:12:59.529635 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f282f37d-0392-49e7-89e6-21b4664587c4" containerName="extract" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: I0318 18:12:59.529645 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="f282f37d-0392-49e7-89e6-21b4664587c4" containerName="extract" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: E0318 18:12:59.529662 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f1105a-8c32-44a8-a9b9-d0b7a4d97646" containerName="util" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: I0318 18:12:59.529670 30278 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e0f1105a-8c32-44a8-a9b9-d0b7a4d97646" containerName="util" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: E0318 18:12:59.529680 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f282f37d-0392-49e7-89e6-21b4664587c4" containerName="pull" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: I0318 18:12:59.529687 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="f282f37d-0392-49e7-89e6-21b4664587c4" containerName="pull" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: E0318 18:12:59.529701 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5669f96-ae3d-49f7-8230-a510fec85d74" containerName="pull" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: I0318 18:12:59.529708 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5669f96-ae3d-49f7-8230-a510fec85d74" containerName="pull" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: E0318 18:12:59.529726 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5053c4bd-4ae3-4092-ba2a-35fd700acb8c" containerName="util" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: I0318 18:12:59.529734 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5053c4bd-4ae3-4092-ba2a-35fd700acb8c" containerName="util" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: E0318 18:12:59.529745 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f282f37d-0392-49e7-89e6-21b4664587c4" containerName="util" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: I0318 18:12:59.529752 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="f282f37d-0392-49e7-89e6-21b4664587c4" containerName="util" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: E0318 18:12:59.529773 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f1105a-8c32-44a8-a9b9-d0b7a4d97646" containerName="pull" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: I0318 18:12:59.529781 30278 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="e0f1105a-8c32-44a8-a9b9-d0b7a4d97646" containerName="pull" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: E0318 18:12:59.529790 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5669f96-ae3d-49f7-8230-a510fec85d74" containerName="extract" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: I0318 18:12:59.529798 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5669f96-ae3d-49f7-8230-a510fec85d74" containerName="extract" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: E0318 18:12:59.529811 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f1105a-8c32-44a8-a9b9-d0b7a4d97646" containerName="extract" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: I0318 18:12:59.529819 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f1105a-8c32-44a8-a9b9-d0b7a4d97646" containerName="extract" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: E0318 18:12:59.529837 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5053c4bd-4ae3-4092-ba2a-35fd700acb8c" containerName="pull" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: I0318 18:12:59.529845 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5053c4bd-4ae3-4092-ba2a-35fd700acb8c" containerName="pull" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: E0318 18:12:59.529858 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5669f96-ae3d-49f7-8230-a510fec85d74" containerName="util" Mar 18 18:12:59.529933 master-0 kubenswrapper[30278]: I0318 18:12:59.529867 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5669f96-ae3d-49f7-8230-a510fec85d74" containerName="util" Mar 18 18:12:59.530931 master-0 kubenswrapper[30278]: I0318 18:12:59.530033 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5669f96-ae3d-49f7-8230-a510fec85d74" containerName="extract" Mar 18 18:12:59.530931 master-0 kubenswrapper[30278]: I0318 18:12:59.530053 
30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="f282f37d-0392-49e7-89e6-21b4664587c4" containerName="extract" Mar 18 18:12:59.530931 master-0 kubenswrapper[30278]: I0318 18:12:59.530077 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f1105a-8c32-44a8-a9b9-d0b7a4d97646" containerName="extract" Mar 18 18:12:59.530931 master-0 kubenswrapper[30278]: I0318 18:12:59.530098 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="5053c4bd-4ae3-4092-ba2a-35fd700acb8c" containerName="extract" Mar 18 18:12:59.530931 master-0 kubenswrapper[30278]: I0318 18:12:59.530753 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-gvw4g" Mar 18 18:12:59.533239 master-0 kubenswrapper[30278]: I0318 18:12:59.533188 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 18 18:12:59.536305 master-0 kubenswrapper[30278]: I0318 18:12:59.536240 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 18 18:12:59.546767 master-0 kubenswrapper[30278]: I0318 18:12:59.546685 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-gvw4g"] Mar 18 18:12:59.564697 master-0 kubenswrapper[30278]: I0318 18:12:59.564634 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sc5z\" (UniqueName: \"kubernetes.io/projected/ace8aac5-f45b-4819-b121-bf9db0c63e4f-kube-api-access-6sc5z\") pod \"nmstate-operator-796d4cfff4-gvw4g\" (UID: \"ace8aac5-f45b-4819-b121-bf9db0c63e4f\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-gvw4g" Mar 18 18:12:59.666960 master-0 kubenswrapper[30278]: I0318 18:12:59.666864 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sc5z\" (UniqueName: 
\"kubernetes.io/projected/ace8aac5-f45b-4819-b121-bf9db0c63e4f-kube-api-access-6sc5z\") pod \"nmstate-operator-796d4cfff4-gvw4g\" (UID: \"ace8aac5-f45b-4819-b121-bf9db0c63e4f\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-gvw4g" Mar 18 18:12:59.693013 master-0 kubenswrapper[30278]: I0318 18:12:59.692947 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sc5z\" (UniqueName: \"kubernetes.io/projected/ace8aac5-f45b-4819-b121-bf9db0c63e4f-kube-api-access-6sc5z\") pod \"nmstate-operator-796d4cfff4-gvw4g\" (UID: \"ace8aac5-f45b-4819-b121-bf9db0c63e4f\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-gvw4g" Mar 18 18:12:59.848685 master-0 kubenswrapper[30278]: I0318 18:12:59.848606 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-gvw4g" Mar 18 18:13:00.388245 master-0 kubenswrapper[30278]: I0318 18:13:00.388190 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-gvw4g"] Mar 18 18:13:00.393015 master-0 kubenswrapper[30278]: W0318 18:13:00.392976 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podace8aac5_f45b_4819_b121_bf9db0c63e4f.slice/crio-6e45148b0326e9ae7d974eb3177d52071f437e3121751dda125a7d039cc083c3 WatchSource:0}: Error finding container 6e45148b0326e9ae7d974eb3177d52071f437e3121751dda125a7d039cc083c3: Status 404 returned error can't find the container with id 6e45148b0326e9ae7d974eb3177d52071f437e3121751dda125a7d039cc083c3 Mar 18 18:13:01.068301 master-0 kubenswrapper[30278]: I0318 18:13:01.064949 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-gvw4g" event={"ID":"ace8aac5-f45b-4819-b121-bf9db0c63e4f","Type":"ContainerStarted","Data":"6e45148b0326e9ae7d974eb3177d52071f437e3121751dda125a7d039cc083c3"} Mar 18 18:13:04.095149 master-0 
kubenswrapper[30278]: I0318 18:13:04.095063 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-gvw4g" event={"ID":"ace8aac5-f45b-4819-b121-bf9db0c63e4f","Type":"ContainerStarted","Data":"2db7c838dd51eadc251e2c7c1f58283ed30c832746e3502703239ce2f96f269a"} Mar 18 18:13:04.135297 master-0 kubenswrapper[30278]: I0318 18:13:04.135157 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-gvw4g" podStartSLOduration=2.163485666 podStartE2EDuration="5.135127077s" podCreationTimestamp="2026-03-18 18:12:59 +0000 UTC" firstStartedPulling="2026-03-18 18:13:00.395818616 +0000 UTC m=+749.563003211" lastFinishedPulling="2026-03-18 18:13:03.367460007 +0000 UTC m=+752.534644622" observedRunningTime="2026-03-18 18:13:04.115612749 +0000 UTC m=+753.282797384" watchObservedRunningTime="2026-03-18 18:13:04.135127077 +0000 UTC m=+753.302311682" Mar 18 18:13:07.699327 master-0 kubenswrapper[30278]: I0318 18:13:07.699094 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-848f479545-kv7v2"] Mar 18 18:13:07.700328 master-0 kubenswrapper[30278]: I0318 18:13:07.700064 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" Mar 18 18:13:07.703906 master-0 kubenswrapper[30278]: I0318 18:13:07.701878 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 18 18:13:07.704383 master-0 kubenswrapper[30278]: I0318 18:13:07.704360 30278 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 18 18:13:07.704698 master-0 kubenswrapper[30278]: I0318 18:13:07.704684 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 18 18:13:07.704897 master-0 kubenswrapper[30278]: I0318 18:13:07.704884 30278 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 18 18:13:07.733328 master-0 kubenswrapper[30278]: I0318 18:13:07.732671 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-848f479545-kv7v2"] Mar 18 18:13:07.844646 master-0 kubenswrapper[30278]: I0318 18:13:07.844551 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw69g\" (UniqueName: \"kubernetes.io/projected/79b7d491-7665-41af-95d6-f17d8ce48257-kube-api-access-lw69g\") pod \"metallb-operator-controller-manager-848f479545-kv7v2\" (UID: \"79b7d491-7665-41af-95d6-f17d8ce48257\") " pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" Mar 18 18:13:07.844883 master-0 kubenswrapper[30278]: I0318 18:13:07.844706 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79b7d491-7665-41af-95d6-f17d8ce48257-apiservice-cert\") pod \"metallb-operator-controller-manager-848f479545-kv7v2\" (UID: \"79b7d491-7665-41af-95d6-f17d8ce48257\") " 
pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" Mar 18 18:13:07.844883 master-0 kubenswrapper[30278]: I0318 18:13:07.844773 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79b7d491-7665-41af-95d6-f17d8ce48257-webhook-cert\") pod \"metallb-operator-controller-manager-848f479545-kv7v2\" (UID: \"79b7d491-7665-41af-95d6-f17d8ce48257\") " pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" Mar 18 18:13:07.946895 master-0 kubenswrapper[30278]: I0318 18:13:07.946823 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79b7d491-7665-41af-95d6-f17d8ce48257-apiservice-cert\") pod \"metallb-operator-controller-manager-848f479545-kv7v2\" (UID: \"79b7d491-7665-41af-95d6-f17d8ce48257\") " pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" Mar 18 18:13:07.946895 master-0 kubenswrapper[30278]: I0318 18:13:07.946903 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79b7d491-7665-41af-95d6-f17d8ce48257-webhook-cert\") pod \"metallb-operator-controller-manager-848f479545-kv7v2\" (UID: \"79b7d491-7665-41af-95d6-f17d8ce48257\") " pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" Mar 18 18:13:07.947181 master-0 kubenswrapper[30278]: I0318 18:13:07.947016 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw69g\" (UniqueName: \"kubernetes.io/projected/79b7d491-7665-41af-95d6-f17d8ce48257-kube-api-access-lw69g\") pod \"metallb-operator-controller-manager-848f479545-kv7v2\" (UID: \"79b7d491-7665-41af-95d6-f17d8ce48257\") " pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" Mar 18 18:13:07.955337 master-0 kubenswrapper[30278]: I0318 18:13:07.950146 30278 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/79b7d491-7665-41af-95d6-f17d8ce48257-apiservice-cert\") pod \"metallb-operator-controller-manager-848f479545-kv7v2\" (UID: \"79b7d491-7665-41af-95d6-f17d8ce48257\") " pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" Mar 18 18:13:07.955337 master-0 kubenswrapper[30278]: I0318 18:13:07.950330 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/79b7d491-7665-41af-95d6-f17d8ce48257-webhook-cert\") pod \"metallb-operator-controller-manager-848f479545-kv7v2\" (UID: \"79b7d491-7665-41af-95d6-f17d8ce48257\") " pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" Mar 18 18:13:07.974295 master-0 kubenswrapper[30278]: I0318 18:13:07.974217 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw69g\" (UniqueName: \"kubernetes.io/projected/79b7d491-7665-41af-95d6-f17d8ce48257-kube-api-access-lw69g\") pod \"metallb-operator-controller-manager-848f479545-kv7v2\" (UID: \"79b7d491-7665-41af-95d6-f17d8ce48257\") " pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" Mar 18 18:13:08.027823 master-0 kubenswrapper[30278]: I0318 18:13:08.027764 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" Mar 18 18:13:08.036405 master-0 kubenswrapper[30278]: I0318 18:13:08.036336 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm"] Mar 18 18:13:08.037434 master-0 kubenswrapper[30278]: I0318 18:13:08.037406 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" Mar 18 18:13:08.045026 master-0 kubenswrapper[30278]: I0318 18:13:08.044886 30278 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 18 18:13:08.045026 master-0 kubenswrapper[30278]: I0318 18:13:08.044924 30278 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 18 18:13:08.080604 master-0 kubenswrapper[30278]: I0318 18:13:08.079183 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm"] Mar 18 18:13:08.151806 master-0 kubenswrapper[30278]: I0318 18:13:08.151736 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/65e5c2ef-6493-4705-b8e2-36ee0cae8c27-webhook-cert\") pod \"metallb-operator-webhook-server-7f9bdbf4b-qndmm\" (UID: \"65e5c2ef-6493-4705-b8e2-36ee0cae8c27\") " pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" Mar 18 18:13:08.151806 master-0 kubenswrapper[30278]: I0318 18:13:08.151786 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqw4q\" (UniqueName: \"kubernetes.io/projected/65e5c2ef-6493-4705-b8e2-36ee0cae8c27-kube-api-access-cqw4q\") pod \"metallb-operator-webhook-server-7f9bdbf4b-qndmm\" (UID: \"65e5c2ef-6493-4705-b8e2-36ee0cae8c27\") " pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" Mar 18 18:13:08.152099 master-0 kubenswrapper[30278]: I0318 18:13:08.151837 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/65e5c2ef-6493-4705-b8e2-36ee0cae8c27-apiservice-cert\") pod \"metallb-operator-webhook-server-7f9bdbf4b-qndmm\" (UID: 
\"65e5c2ef-6493-4705-b8e2-36ee0cae8c27\") " pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" Mar 18 18:13:08.258647 master-0 kubenswrapper[30278]: I0318 18:13:08.258576 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/65e5c2ef-6493-4705-b8e2-36ee0cae8c27-webhook-cert\") pod \"metallb-operator-webhook-server-7f9bdbf4b-qndmm\" (UID: \"65e5c2ef-6493-4705-b8e2-36ee0cae8c27\") " pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" Mar 18 18:13:08.258647 master-0 kubenswrapper[30278]: I0318 18:13:08.258628 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqw4q\" (UniqueName: \"kubernetes.io/projected/65e5c2ef-6493-4705-b8e2-36ee0cae8c27-kube-api-access-cqw4q\") pod \"metallb-operator-webhook-server-7f9bdbf4b-qndmm\" (UID: \"65e5c2ef-6493-4705-b8e2-36ee0cae8c27\") " pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" Mar 18 18:13:08.258961 master-0 kubenswrapper[30278]: I0318 18:13:08.258672 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/65e5c2ef-6493-4705-b8e2-36ee0cae8c27-apiservice-cert\") pod \"metallb-operator-webhook-server-7f9bdbf4b-qndmm\" (UID: \"65e5c2ef-6493-4705-b8e2-36ee0cae8c27\") " pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" Mar 18 18:13:08.262934 master-0 kubenswrapper[30278]: I0318 18:13:08.262649 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/65e5c2ef-6493-4705-b8e2-36ee0cae8c27-apiservice-cert\") pod \"metallb-operator-webhook-server-7f9bdbf4b-qndmm\" (UID: \"65e5c2ef-6493-4705-b8e2-36ee0cae8c27\") " pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" Mar 18 18:13:08.262934 master-0 kubenswrapper[30278]: I0318 18:13:08.262872 30278 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/65e5c2ef-6493-4705-b8e2-36ee0cae8c27-webhook-cert\") pod \"metallb-operator-webhook-server-7f9bdbf4b-qndmm\" (UID: \"65e5c2ef-6493-4705-b8e2-36ee0cae8c27\") " pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" Mar 18 18:13:08.294346 master-0 kubenswrapper[30278]: I0318 18:13:08.293508 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqw4q\" (UniqueName: \"kubernetes.io/projected/65e5c2ef-6493-4705-b8e2-36ee0cae8c27-kube-api-access-cqw4q\") pod \"metallb-operator-webhook-server-7f9bdbf4b-qndmm\" (UID: \"65e5c2ef-6493-4705-b8e2-36ee0cae8c27\") " pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" Mar 18 18:13:08.427037 master-0 kubenswrapper[30278]: I0318 18:13:08.426967 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" Mar 18 18:13:08.565817 master-0 kubenswrapper[30278]: I0318 18:13:08.565323 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-848f479545-kv7v2"] Mar 18 18:13:08.586449 master-0 kubenswrapper[30278]: W0318 18:13:08.586384 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79b7d491_7665_41af_95d6_f17d8ce48257.slice/crio-a80706d2daf31b5165a1abebcfe6ea5bdade27771f47c11b0c6b858099c0f7cf WatchSource:0}: Error finding container a80706d2daf31b5165a1abebcfe6ea5bdade27771f47c11b0c6b858099c0f7cf: Status 404 returned error can't find the container with id a80706d2daf31b5165a1abebcfe6ea5bdade27771f47c11b0c6b858099c0f7cf Mar 18 18:13:08.915716 master-0 kubenswrapper[30278]: I0318 18:13:08.915632 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm"] Mar 18 
18:13:09.185874 master-0 kubenswrapper[30278]: I0318 18:13:09.185724 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" event={"ID":"65e5c2ef-6493-4705-b8e2-36ee0cae8c27","Type":"ContainerStarted","Data":"b78602493be82e3ae0ff7e1a28df2f1d31453e36a83cc56f6389c8db43d3eeda"} Mar 18 18:13:09.187783 master-0 kubenswrapper[30278]: I0318 18:13:09.187712 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" event={"ID":"79b7d491-7665-41af-95d6-f17d8ce48257","Type":"ContainerStarted","Data":"a80706d2daf31b5165a1abebcfe6ea5bdade27771f47c11b0c6b858099c0f7cf"} Mar 18 18:13:11.824479 master-0 kubenswrapper[30278]: I0318 18:13:11.824378 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-54c5p"] Mar 18 18:13:11.825517 master-0 kubenswrapper[30278]: I0318 18:13:11.825484 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-54c5p" Mar 18 18:13:11.837729 master-0 kubenswrapper[30278]: I0318 18:13:11.837586 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 18 18:13:11.838008 master-0 kubenswrapper[30278]: I0318 18:13:11.837825 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 18 18:13:11.851559 master-0 kubenswrapper[30278]: I0318 18:13:11.849790 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-54c5p"] Mar 18 18:13:11.957622 master-0 kubenswrapper[30278]: I0318 18:13:11.957513 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4r2c\" (UniqueName: \"kubernetes.io/projected/4b4bf311-a7be-4e56-aa2d-7a0eae65a4a6-kube-api-access-n4r2c\") pod \"cert-manager-operator-controller-manager-66c8bdd694-54c5p\" (UID: \"4b4bf311-a7be-4e56-aa2d-7a0eae65a4a6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-54c5p" Mar 18 18:13:11.957854 master-0 kubenswrapper[30278]: I0318 18:13:11.957641 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4b4bf311-a7be-4e56-aa2d-7a0eae65a4a6-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-54c5p\" (UID: \"4b4bf311-a7be-4e56-aa2d-7a0eae65a4a6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-54c5p" Mar 18 18:13:12.060511 master-0 kubenswrapper[30278]: I0318 18:13:12.060447 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4r2c\" (UniqueName: \"kubernetes.io/projected/4b4bf311-a7be-4e56-aa2d-7a0eae65a4a6-kube-api-access-n4r2c\") pod 
\"cert-manager-operator-controller-manager-66c8bdd694-54c5p\" (UID: \"4b4bf311-a7be-4e56-aa2d-7a0eae65a4a6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-54c5p" Mar 18 18:13:12.060511 master-0 kubenswrapper[30278]: I0318 18:13:12.060518 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4b4bf311-a7be-4e56-aa2d-7a0eae65a4a6-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-54c5p\" (UID: \"4b4bf311-a7be-4e56-aa2d-7a0eae65a4a6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-54c5p" Mar 18 18:13:12.062064 master-0 kubenswrapper[30278]: I0318 18:13:12.061060 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4b4bf311-a7be-4e56-aa2d-7a0eae65a4a6-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-54c5p\" (UID: \"4b4bf311-a7be-4e56-aa2d-7a0eae65a4a6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-54c5p" Mar 18 18:13:12.087963 master-0 kubenswrapper[30278]: I0318 18:13:12.086647 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4r2c\" (UniqueName: \"kubernetes.io/projected/4b4bf311-a7be-4e56-aa2d-7a0eae65a4a6-kube-api-access-n4r2c\") pod \"cert-manager-operator-controller-manager-66c8bdd694-54c5p\" (UID: \"4b4bf311-a7be-4e56-aa2d-7a0eae65a4a6\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-54c5p" Mar 18 18:13:12.171771 master-0 kubenswrapper[30278]: I0318 18:13:12.171695 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-54c5p" Mar 18 18:13:15.791724 master-0 kubenswrapper[30278]: I0318 18:13:15.791532 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-54c5p"] Mar 18 18:13:16.288508 master-0 kubenswrapper[30278]: I0318 18:13:16.288404 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-54c5p" event={"ID":"4b4bf311-a7be-4e56-aa2d-7a0eae65a4a6","Type":"ContainerStarted","Data":"ea810366d32d53d1225cba1845e329ebc25c57087607da1c1f27f2e53d3e16fb"} Mar 18 18:13:16.305554 master-0 kubenswrapper[30278]: I0318 18:13:16.305456 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" event={"ID":"65e5c2ef-6493-4705-b8e2-36ee0cae8c27","Type":"ContainerStarted","Data":"9ccb763afb2c9bddd577709fa15deb93d1d20fef9fd8e1406ec88b6f8e5ca2d9"} Mar 18 18:13:16.305554 master-0 kubenswrapper[30278]: I0318 18:13:16.305571 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" Mar 18 18:13:16.315746 master-0 kubenswrapper[30278]: I0318 18:13:16.315655 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" event={"ID":"79b7d491-7665-41af-95d6-f17d8ce48257","Type":"ContainerStarted","Data":"5254dc3d872b40ae3bd4d4b57191d492c17075dbd02d25f09ca8ab94c41d32a8"} Mar 18 18:13:16.319304 master-0 kubenswrapper[30278]: I0318 18:13:16.316898 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" Mar 18 18:13:16.370309 master-0 kubenswrapper[30278]: I0318 18:13:16.369371 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" podStartSLOduration=1.8384567299999999 podStartE2EDuration="8.369349344s" podCreationTimestamp="2026-03-18 18:13:08 +0000 UTC" firstStartedPulling="2026-03-18 18:13:08.925583314 +0000 UTC m=+758.092767909" lastFinishedPulling="2026-03-18 18:13:15.456475928 +0000 UTC m=+764.623660523" observedRunningTime="2026-03-18 18:13:16.35364709 +0000 UTC m=+765.520831685" watchObservedRunningTime="2026-03-18 18:13:16.369349344 +0000 UTC m=+765.536533949" Mar 18 18:13:16.419689 master-0 kubenswrapper[30278]: I0318 18:13:16.419590 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" podStartSLOduration=2.758301934 podStartE2EDuration="9.419562662s" podCreationTimestamp="2026-03-18 18:13:07 +0000 UTC" firstStartedPulling="2026-03-18 18:13:08.590549685 +0000 UTC m=+757.757734280" lastFinishedPulling="2026-03-18 18:13:15.251810413 +0000 UTC m=+764.418995008" observedRunningTime="2026-03-18 18:13:16.401434723 +0000 UTC m=+765.568619318" watchObservedRunningTime="2026-03-18 18:13:16.419562662 +0000 UTC m=+765.586747257" Mar 18 18:13:23.415898 master-0 kubenswrapper[30278]: I0318 18:13:23.415807 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-54c5p" event={"ID":"4b4bf311-a7be-4e56-aa2d-7a0eae65a4a6","Type":"ContainerStarted","Data":"b2f5b937aae8146a20445688d50ff2881eb65e660e4253205260373be5208c0a"} Mar 18 18:13:23.438809 master-0 kubenswrapper[30278]: I0318 18:13:23.438717 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-54c5p" podStartSLOduration=5.433654275 podStartE2EDuration="12.438692319s" podCreationTimestamp="2026-03-18 18:13:11 +0000 UTC" firstStartedPulling="2026-03-18 18:13:15.790916863 +0000 UTC m=+764.958101458" 
lastFinishedPulling="2026-03-18 18:13:22.795954907 +0000 UTC m=+771.963139502" observedRunningTime="2026-03-18 18:13:23.435464232 +0000 UTC m=+772.602648837" watchObservedRunningTime="2026-03-18 18:13:23.438692319 +0000 UTC m=+772.605876914" Mar 18 18:13:25.849465 master-0 kubenswrapper[30278]: I0318 18:13:25.849339 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8sskx"] Mar 18 18:13:25.850929 master-0 kubenswrapper[30278]: I0318 18:13:25.850861 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-8sskx" Mar 18 18:13:25.853085 master-0 kubenswrapper[30278]: I0318 18:13:25.853030 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 18 18:13:25.853338 master-0 kubenswrapper[30278]: I0318 18:13:25.853246 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 18 18:13:25.874421 master-0 kubenswrapper[30278]: I0318 18:13:25.874329 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8sskx"] Mar 18 18:13:25.936389 master-0 kubenswrapper[30278]: I0318 18:13:25.936257 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfz8n\" (UniqueName: \"kubernetes.io/projected/ab08745b-f333-419d-87e1-00c073463a8a-kube-api-access-xfz8n\") pod \"cert-manager-webhook-6888856db4-8sskx\" (UID: \"ab08745b-f333-419d-87e1-00c073463a8a\") " pod="cert-manager/cert-manager-webhook-6888856db4-8sskx" Mar 18 18:13:25.936389 master-0 kubenswrapper[30278]: I0318 18:13:25.936380 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ab08745b-f333-419d-87e1-00c073463a8a-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8sskx\" (UID: 
\"ab08745b-f333-419d-87e1-00c073463a8a\") " pod="cert-manager/cert-manager-webhook-6888856db4-8sskx" Mar 18 18:13:26.037844 master-0 kubenswrapper[30278]: I0318 18:13:26.037800 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfz8n\" (UniqueName: \"kubernetes.io/projected/ab08745b-f333-419d-87e1-00c073463a8a-kube-api-access-xfz8n\") pod \"cert-manager-webhook-6888856db4-8sskx\" (UID: \"ab08745b-f333-419d-87e1-00c073463a8a\") " pod="cert-manager/cert-manager-webhook-6888856db4-8sskx" Mar 18 18:13:26.038188 master-0 kubenswrapper[30278]: I0318 18:13:26.038167 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ab08745b-f333-419d-87e1-00c073463a8a-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8sskx\" (UID: \"ab08745b-f333-419d-87e1-00c073463a8a\") " pod="cert-manager/cert-manager-webhook-6888856db4-8sskx" Mar 18 18:13:26.070304 master-0 kubenswrapper[30278]: I0318 18:13:26.069321 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfz8n\" (UniqueName: \"kubernetes.io/projected/ab08745b-f333-419d-87e1-00c073463a8a-kube-api-access-xfz8n\") pod \"cert-manager-webhook-6888856db4-8sskx\" (UID: \"ab08745b-f333-419d-87e1-00c073463a8a\") " pod="cert-manager/cert-manager-webhook-6888856db4-8sskx" Mar 18 18:13:26.082304 master-0 kubenswrapper[30278]: I0318 18:13:26.082199 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ab08745b-f333-419d-87e1-00c073463a8a-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-8sskx\" (UID: \"ab08745b-f333-419d-87e1-00c073463a8a\") " pod="cert-manager/cert-manager-webhook-6888856db4-8sskx" Mar 18 18:13:26.171061 master-0 kubenswrapper[30278]: I0318 18:13:26.170896 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-8sskx" Mar 18 18:13:26.653056 master-0 kubenswrapper[30278]: W0318 18:13:26.652973 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab08745b_f333_419d_87e1_00c073463a8a.slice/crio-041e8e459f897c77890979be65c67910c94021bf904fc06ab7e3efa2225105b9 WatchSource:0}: Error finding container 041e8e459f897c77890979be65c67910c94021bf904fc06ab7e3efa2225105b9: Status 404 returned error can't find the container with id 041e8e459f897c77890979be65c67910c94021bf904fc06ab7e3efa2225105b9 Mar 18 18:13:26.654738 master-0 kubenswrapper[30278]: I0318 18:13:26.654684 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-8sskx"] Mar 18 18:13:27.462710 master-0 kubenswrapper[30278]: I0318 18:13:27.462652 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-8sskx" event={"ID":"ab08745b-f333-419d-87e1-00c073463a8a","Type":"ContainerStarted","Data":"041e8e459f897c77890979be65c67910c94021bf904fc06ab7e3efa2225105b9"} Mar 18 18:13:28.004844 master-0 kubenswrapper[30278]: I0318 18:13:28.004758 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-67lqt"] Mar 18 18:13:28.005999 master-0 kubenswrapper[30278]: I0318 18:13:28.005953 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-67lqt" Mar 18 18:13:28.017412 master-0 kubenswrapper[30278]: I0318 18:13:28.017263 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-67lqt"] Mar 18 18:13:28.077470 master-0 kubenswrapper[30278]: I0318 18:13:28.077392 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2927\" (UniqueName: \"kubernetes.io/projected/fbb33307-9b20-4372-a6ca-60473053b4e7-kube-api-access-k2927\") pod \"cert-manager-cainjector-5545bd876-67lqt\" (UID: \"fbb33307-9b20-4372-a6ca-60473053b4e7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-67lqt" Mar 18 18:13:28.077744 master-0 kubenswrapper[30278]: I0318 18:13:28.077488 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fbb33307-9b20-4372-a6ca-60473053b4e7-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-67lqt\" (UID: \"fbb33307-9b20-4372-a6ca-60473053b4e7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-67lqt" Mar 18 18:13:28.180300 master-0 kubenswrapper[30278]: I0318 18:13:28.179772 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2927\" (UniqueName: \"kubernetes.io/projected/fbb33307-9b20-4372-a6ca-60473053b4e7-kube-api-access-k2927\") pod \"cert-manager-cainjector-5545bd876-67lqt\" (UID: \"fbb33307-9b20-4372-a6ca-60473053b4e7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-67lqt" Mar 18 18:13:28.180300 master-0 kubenswrapper[30278]: I0318 18:13:28.179876 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fbb33307-9b20-4372-a6ca-60473053b4e7-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-67lqt\" (UID: \"fbb33307-9b20-4372-a6ca-60473053b4e7\") " 
pod="cert-manager/cert-manager-cainjector-5545bd876-67lqt" Mar 18 18:13:28.203297 master-0 kubenswrapper[30278]: I0318 18:13:28.201245 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2927\" (UniqueName: \"kubernetes.io/projected/fbb33307-9b20-4372-a6ca-60473053b4e7-kube-api-access-k2927\") pod \"cert-manager-cainjector-5545bd876-67lqt\" (UID: \"fbb33307-9b20-4372-a6ca-60473053b4e7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-67lqt" Mar 18 18:13:28.217305 master-0 kubenswrapper[30278]: I0318 18:13:28.214040 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fbb33307-9b20-4372-a6ca-60473053b4e7-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-67lqt\" (UID: \"fbb33307-9b20-4372-a6ca-60473053b4e7\") " pod="cert-manager/cert-manager-cainjector-5545bd876-67lqt" Mar 18 18:13:28.342627 master-0 kubenswrapper[30278]: I0318 18:13:28.330786 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-67lqt" Mar 18 18:13:28.437709 master-0 kubenswrapper[30278]: I0318 18:13:28.437653 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7f9bdbf4b-qndmm" Mar 18 18:13:28.710401 master-0 kubenswrapper[30278]: I0318 18:13:28.710301 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-r8248"] Mar 18 18:13:28.731798 master-0 kubenswrapper[30278]: I0318 18:13:28.728568 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-r8248" Mar 18 18:13:28.731798 master-0 kubenswrapper[30278]: I0318 18:13:28.728866 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-r8248"] Mar 18 18:13:28.733825 master-0 kubenswrapper[30278]: I0318 18:13:28.733789 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 18 18:13:28.733978 master-0 kubenswrapper[30278]: I0318 18:13:28.733950 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 18 18:13:28.799091 master-0 kubenswrapper[30278]: I0318 18:13:28.798981 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz57j\" (UniqueName: \"kubernetes.io/projected/0a3577af-da41-4598-b0e7-0ea2f10f4d00-kube-api-access-gz57j\") pod \"obo-prometheus-operator-8ff7d675-r8248\" (UID: \"0a3577af-da41-4598-b0e7-0ea2f10f4d00\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-r8248" Mar 18 18:13:28.901388 master-0 kubenswrapper[30278]: I0318 18:13:28.900440 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz57j\" (UniqueName: \"kubernetes.io/projected/0a3577af-da41-4598-b0e7-0ea2f10f4d00-kube-api-access-gz57j\") pod \"obo-prometheus-operator-8ff7d675-r8248\" (UID: \"0a3577af-da41-4598-b0e7-0ea2f10f4d00\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-r8248" Mar 18 18:13:28.921468 master-0 kubenswrapper[30278]: I0318 18:13:28.921424 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz57j\" (UniqueName: \"kubernetes.io/projected/0a3577af-da41-4598-b0e7-0ea2f10f4d00-kube-api-access-gz57j\") pod \"obo-prometheus-operator-8ff7d675-r8248\" (UID: \"0a3577af-da41-4598-b0e7-0ea2f10f4d00\") " 
pod="openshift-operators/obo-prometheus-operator-8ff7d675-r8248" Mar 18 18:13:28.963620 master-0 kubenswrapper[30278]: I0318 18:13:28.963571 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-67lqt"] Mar 18 18:13:29.056608 master-0 kubenswrapper[30278]: I0318 18:13:29.056512 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5"] Mar 18 18:13:29.058263 master-0 kubenswrapper[30278]: I0318 18:13:29.058228 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5" Mar 18 18:13:29.063733 master-0 kubenswrapper[30278]: I0318 18:13:29.063673 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 18 18:13:29.079002 master-0 kubenswrapper[30278]: I0318 18:13:29.078944 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl"] Mar 18 18:13:29.083292 master-0 kubenswrapper[30278]: I0318 18:13:29.080085 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl" Mar 18 18:13:29.086161 master-0 kubenswrapper[30278]: I0318 18:13:29.084383 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-r8248" Mar 18 18:13:29.086161 master-0 kubenswrapper[30278]: I0318 18:13:29.084981 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5"] Mar 18 18:13:29.115879 master-0 kubenswrapper[30278]: I0318 18:13:29.115564 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl"] Mar 18 18:13:29.206574 master-0 kubenswrapper[30278]: I0318 18:13:29.206399 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d79d0eb2-fcc6-4cbd-b98f-add9bf5bc666-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5\" (UID: \"d79d0eb2-fcc6-4cbd-b98f-add9bf5bc666\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5" Mar 18 18:13:29.206574 master-0 kubenswrapper[30278]: I0318 18:13:29.206503 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d79d0eb2-fcc6-4cbd-b98f-add9bf5bc666-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5\" (UID: \"d79d0eb2-fcc6-4cbd-b98f-add9bf5bc666\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5" Mar 18 18:13:29.206574 master-0 kubenswrapper[30278]: I0318 18:13:29.206573 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0c50322e-7236-4d94-a13b-098b28afbe97-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl\" (UID: \"0c50322e-7236-4d94-a13b-098b28afbe97\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl" Mar 18 
18:13:29.206972 master-0 kubenswrapper[30278]: I0318 18:13:29.206628 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c50322e-7236-4d94-a13b-098b28afbe97-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl\" (UID: \"0c50322e-7236-4d94-a13b-098b28afbe97\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl" Mar 18 18:13:29.311590 master-0 kubenswrapper[30278]: I0318 18:13:29.311470 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d79d0eb2-fcc6-4cbd-b98f-add9bf5bc666-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5\" (UID: \"d79d0eb2-fcc6-4cbd-b98f-add9bf5bc666\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5" Mar 18 18:13:29.313534 master-0 kubenswrapper[30278]: I0318 18:13:29.313438 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0c50322e-7236-4d94-a13b-098b28afbe97-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl\" (UID: \"0c50322e-7236-4d94-a13b-098b28afbe97\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl" Mar 18 18:13:29.313720 master-0 kubenswrapper[30278]: I0318 18:13:29.313682 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c50322e-7236-4d94-a13b-098b28afbe97-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl\" (UID: \"0c50322e-7236-4d94-a13b-098b28afbe97\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl" Mar 18 18:13:29.313939 master-0 kubenswrapper[30278]: I0318 18:13:29.313895 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d79d0eb2-fcc6-4cbd-b98f-add9bf5bc666-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5\" (UID: \"d79d0eb2-fcc6-4cbd-b98f-add9bf5bc666\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5" Mar 18 18:13:29.316513 master-0 kubenswrapper[30278]: I0318 18:13:29.316479 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c50322e-7236-4d94-a13b-098b28afbe97-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl\" (UID: \"0c50322e-7236-4d94-a13b-098b28afbe97\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl" Mar 18 18:13:29.319970 master-0 kubenswrapper[30278]: I0318 18:13:29.316988 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0c50322e-7236-4d94-a13b-098b28afbe97-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl\" (UID: \"0c50322e-7236-4d94-a13b-098b28afbe97\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl" Mar 18 18:13:29.319970 master-0 kubenswrapper[30278]: I0318 18:13:29.317229 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d79d0eb2-fcc6-4cbd-b98f-add9bf5bc666-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5\" (UID: \"d79d0eb2-fcc6-4cbd-b98f-add9bf5bc666\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5" Mar 18 18:13:29.353611 master-0 kubenswrapper[30278]: I0318 18:13:29.353543 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d79d0eb2-fcc6-4cbd-b98f-add9bf5bc666-webhook-cert\") 
pod \"obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5\" (UID: \"d79d0eb2-fcc6-4cbd-b98f-add9bf5bc666\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5" Mar 18 18:13:29.411299 master-0 kubenswrapper[30278]: I0318 18:13:29.411094 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5" Mar 18 18:13:29.440769 master-0 kubenswrapper[30278]: I0318 18:13:29.435806 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl" Mar 18 18:13:29.608570 master-0 kubenswrapper[30278]: I0318 18:13:29.608420 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-67lqt" event={"ID":"fbb33307-9b20-4372-a6ca-60473053b4e7","Type":"ContainerStarted","Data":"d39fef24b4ff63d76caee137ed944ca2e1b53927dc44e2bad4058b1ec67caffc"} Mar 18 18:13:29.801860 master-0 kubenswrapper[30278]: I0318 18:13:29.800879 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-r8248"] Mar 18 18:13:29.838128 master-0 kubenswrapper[30278]: I0318 18:13:29.833771 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-85vsw"] Mar 18 18:13:29.838128 master-0 kubenswrapper[30278]: I0318 18:13:29.834713 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-85vsw" Mar 18 18:13:29.838128 master-0 kubenswrapper[30278]: I0318 18:13:29.837653 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 18 18:13:29.885614 master-0 kubenswrapper[30278]: I0318 18:13:29.885561 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-85vsw"] Mar 18 18:13:29.956374 master-0 kubenswrapper[30278]: I0318 18:13:29.956295 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cstlk\" (UniqueName: \"kubernetes.io/projected/114649d6-2e5d-4cfd-b5b7-94d92d0991ae-kube-api-access-cstlk\") pod \"observability-operator-6dd7dd855f-85vsw\" (UID: \"114649d6-2e5d-4cfd-b5b7-94d92d0991ae\") " pod="openshift-operators/observability-operator-6dd7dd855f-85vsw" Mar 18 18:13:29.956583 master-0 kubenswrapper[30278]: I0318 18:13:29.956522 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/114649d6-2e5d-4cfd-b5b7-94d92d0991ae-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-85vsw\" (UID: \"114649d6-2e5d-4cfd-b5b7-94d92d0991ae\") " pod="openshift-operators/observability-operator-6dd7dd855f-85vsw" Mar 18 18:13:30.060192 master-0 kubenswrapper[30278]: I0318 18:13:30.059105 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/114649d6-2e5d-4cfd-b5b7-94d92d0991ae-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-85vsw\" (UID: \"114649d6-2e5d-4cfd-b5b7-94d92d0991ae\") " pod="openshift-operators/observability-operator-6dd7dd855f-85vsw" Mar 18 18:13:30.060192 master-0 kubenswrapper[30278]: I0318 18:13:30.059200 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cstlk\" (UniqueName: \"kubernetes.io/projected/114649d6-2e5d-4cfd-b5b7-94d92d0991ae-kube-api-access-cstlk\") pod \"observability-operator-6dd7dd855f-85vsw\" (UID: \"114649d6-2e5d-4cfd-b5b7-94d92d0991ae\") " pod="openshift-operators/observability-operator-6dd7dd855f-85vsw" Mar 18 18:13:30.068323 master-0 kubenswrapper[30278]: I0318 18:13:30.066494 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/114649d6-2e5d-4cfd-b5b7-94d92d0991ae-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-85vsw\" (UID: \"114649d6-2e5d-4cfd-b5b7-94d92d0991ae\") " pod="openshift-operators/observability-operator-6dd7dd855f-85vsw" Mar 18 18:13:30.106859 master-0 kubenswrapper[30278]: I0318 18:13:30.097571 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cstlk\" (UniqueName: \"kubernetes.io/projected/114649d6-2e5d-4cfd-b5b7-94d92d0991ae-kube-api-access-cstlk\") pod \"observability-operator-6dd7dd855f-85vsw\" (UID: \"114649d6-2e5d-4cfd-b5b7-94d92d0991ae\") " pod="openshift-operators/observability-operator-6dd7dd855f-85vsw" Mar 18 18:13:30.118312 master-0 kubenswrapper[30278]: I0318 18:13:30.116831 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5"] Mar 18 18:13:30.186450 master-0 kubenswrapper[30278]: I0318 18:13:30.186371 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-85vsw" Mar 18 18:13:30.249676 master-0 kubenswrapper[30278]: I0318 18:13:30.246770 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-fbcfc585b-zpr69"] Mar 18 18:13:30.249676 master-0 kubenswrapper[30278]: I0318 18:13:30.248487 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:30.252657 master-0 kubenswrapper[30278]: I0318 18:13:30.252615 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-service-cert" Mar 18 18:13:30.260863 master-0 kubenswrapper[30278]: I0318 18:13:30.260396 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-fbcfc585b-zpr69"] Mar 18 18:13:30.301434 master-0 kubenswrapper[30278]: I0318 18:13:30.301202 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl"] Mar 18 18:13:30.384401 master-0 kubenswrapper[30278]: I0318 18:13:30.383267 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2bcbbd66-54ea-45ed-bee1-49ff8fc4c132-openshift-service-ca\") pod \"perses-operator-fbcfc585b-zpr69\" (UID: \"2bcbbd66-54ea-45ed-bee1-49ff8fc4c132\") " pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:30.384401 master-0 kubenswrapper[30278]: I0318 18:13:30.383419 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmnhb\" (UniqueName: \"kubernetes.io/projected/2bcbbd66-54ea-45ed-bee1-49ff8fc4c132-kube-api-access-qmnhb\") pod \"perses-operator-fbcfc585b-zpr69\" (UID: \"2bcbbd66-54ea-45ed-bee1-49ff8fc4c132\") " pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:30.384401 master-0 kubenswrapper[30278]: I0318 18:13:30.383467 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2bcbbd66-54ea-45ed-bee1-49ff8fc4c132-webhook-cert\") pod \"perses-operator-fbcfc585b-zpr69\" (UID: \"2bcbbd66-54ea-45ed-bee1-49ff8fc4c132\") " 
pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:30.384401 master-0 kubenswrapper[30278]: I0318 18:13:30.383506 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2bcbbd66-54ea-45ed-bee1-49ff8fc4c132-apiservice-cert\") pod \"perses-operator-fbcfc585b-zpr69\" (UID: \"2bcbbd66-54ea-45ed-bee1-49ff8fc4c132\") " pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:30.485661 master-0 kubenswrapper[30278]: I0318 18:13:30.485540 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmnhb\" (UniqueName: \"kubernetes.io/projected/2bcbbd66-54ea-45ed-bee1-49ff8fc4c132-kube-api-access-qmnhb\") pod \"perses-operator-fbcfc585b-zpr69\" (UID: \"2bcbbd66-54ea-45ed-bee1-49ff8fc4c132\") " pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:30.485992 master-0 kubenswrapper[30278]: I0318 18:13:30.485704 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2bcbbd66-54ea-45ed-bee1-49ff8fc4c132-webhook-cert\") pod \"perses-operator-fbcfc585b-zpr69\" (UID: \"2bcbbd66-54ea-45ed-bee1-49ff8fc4c132\") " pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:30.485992 master-0 kubenswrapper[30278]: I0318 18:13:30.485734 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2bcbbd66-54ea-45ed-bee1-49ff8fc4c132-apiservice-cert\") pod \"perses-operator-fbcfc585b-zpr69\" (UID: \"2bcbbd66-54ea-45ed-bee1-49ff8fc4c132\") " pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:30.485992 master-0 kubenswrapper[30278]: I0318 18:13:30.485784 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/2bcbbd66-54ea-45ed-bee1-49ff8fc4c132-openshift-service-ca\") pod \"perses-operator-fbcfc585b-zpr69\" (UID: \"2bcbbd66-54ea-45ed-bee1-49ff8fc4c132\") " pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:30.488486 master-0 kubenswrapper[30278]: I0318 18:13:30.488446 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2bcbbd66-54ea-45ed-bee1-49ff8fc4c132-openshift-service-ca\") pod \"perses-operator-fbcfc585b-zpr69\" (UID: \"2bcbbd66-54ea-45ed-bee1-49ff8fc4c132\") " pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:30.494872 master-0 kubenswrapper[30278]: I0318 18:13:30.493998 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2bcbbd66-54ea-45ed-bee1-49ff8fc4c132-webhook-cert\") pod \"perses-operator-fbcfc585b-zpr69\" (UID: \"2bcbbd66-54ea-45ed-bee1-49ff8fc4c132\") " pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:30.496023 master-0 kubenswrapper[30278]: I0318 18:13:30.495976 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2bcbbd66-54ea-45ed-bee1-49ff8fc4c132-apiservice-cert\") pod \"perses-operator-fbcfc585b-zpr69\" (UID: \"2bcbbd66-54ea-45ed-bee1-49ff8fc4c132\") " pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:30.508754 master-0 kubenswrapper[30278]: I0318 18:13:30.508678 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmnhb\" (UniqueName: \"kubernetes.io/projected/2bcbbd66-54ea-45ed-bee1-49ff8fc4c132-kube-api-access-qmnhb\") pod \"perses-operator-fbcfc585b-zpr69\" (UID: \"2bcbbd66-54ea-45ed-bee1-49ff8fc4c132\") " pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:30.595544 master-0 kubenswrapper[30278]: I0318 18:13:30.595365 30278 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:30.625925 master-0 kubenswrapper[30278]: I0318 18:13:30.625695 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl" event={"ID":"0c50322e-7236-4d94-a13b-098b28afbe97","Type":"ContainerStarted","Data":"30cc6a6867352b1ee9489fe104a0136c5facd751a9e67c33d64cac12f4772788"} Mar 18 18:13:30.633232 master-0 kubenswrapper[30278]: I0318 18:13:30.633175 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5" event={"ID":"d79d0eb2-fcc6-4cbd-b98f-add9bf5bc666","Type":"ContainerStarted","Data":"8c593e98fe7eb5441054faa4c701af277d4c3dc7f4694d505a7f0c2420a5b17e"} Mar 18 18:13:30.634602 master-0 kubenswrapper[30278]: I0318 18:13:30.634562 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-r8248" event={"ID":"0a3577af-da41-4598-b0e7-0ea2f10f4d00","Type":"ContainerStarted","Data":"1d39eeee2d7cf3f0cf685d4b1562cdc4255116e9f209b12be7b1c8ac441c9b13"} Mar 18 18:13:30.740216 master-0 kubenswrapper[30278]: I0318 18:13:30.739748 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-85vsw"] Mar 18 18:13:31.104671 master-0 kubenswrapper[30278]: I0318 18:13:31.100131 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-fbcfc585b-zpr69"] Mar 18 18:13:31.112763 master-0 kubenswrapper[30278]: W0318 18:13:31.112656 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bcbbd66_54ea_45ed_bee1_49ff8fc4c132.slice/crio-53e8f791940b1a43cd592de77faae882467769fd44dd8390a93993831bc61650 WatchSource:0}: Error finding container 
53e8f791940b1a43cd592de77faae882467769fd44dd8390a93993831bc61650: Status 404 returned error can't find the container with id 53e8f791940b1a43cd592de77faae882467769fd44dd8390a93993831bc61650 Mar 18 18:13:31.646931 master-0 kubenswrapper[30278]: I0318 18:13:31.646866 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-85vsw" event={"ID":"114649d6-2e5d-4cfd-b5b7-94d92d0991ae","Type":"ContainerStarted","Data":"08076c7ab8be12367fb4f079a1710e56536d0d26ce2eacad2d81ac115b594dad"} Mar 18 18:13:31.649465 master-0 kubenswrapper[30278]: I0318 18:13:31.649438 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-fbcfc585b-zpr69" event={"ID":"2bcbbd66-54ea-45ed-bee1-49ff8fc4c132","Type":"ContainerStarted","Data":"53e8f791940b1a43cd592de77faae882467769fd44dd8390a93993831bc61650"} Mar 18 18:13:39.849580 master-0 kubenswrapper[30278]: I0318 18:13:39.848503 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-x7qmw"] Mar 18 18:13:39.850196 master-0 kubenswrapper[30278]: I0318 18:13:39.850165 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-x7qmw" Mar 18 18:13:39.862314 master-0 kubenswrapper[30278]: I0318 18:13:39.860996 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-x7qmw"] Mar 18 18:13:39.949784 master-0 kubenswrapper[30278]: I0318 18:13:39.949692 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lfgh\" (UniqueName: \"kubernetes.io/projected/dfbdf43d-c5a2-4d91-8c01-c7f229864550-kube-api-access-7lfgh\") pod \"cert-manager-545d4d4674-x7qmw\" (UID: \"dfbdf43d-c5a2-4d91-8c01-c7f229864550\") " pod="cert-manager/cert-manager-545d4d4674-x7qmw" Mar 18 18:13:39.950247 master-0 kubenswrapper[30278]: I0318 18:13:39.950187 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfbdf43d-c5a2-4d91-8c01-c7f229864550-bound-sa-token\") pod \"cert-manager-545d4d4674-x7qmw\" (UID: \"dfbdf43d-c5a2-4d91-8c01-c7f229864550\") " pod="cert-manager/cert-manager-545d4d4674-x7qmw" Mar 18 18:13:40.053200 master-0 kubenswrapper[30278]: I0318 18:13:40.052342 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfbdf43d-c5a2-4d91-8c01-c7f229864550-bound-sa-token\") pod \"cert-manager-545d4d4674-x7qmw\" (UID: \"dfbdf43d-c5a2-4d91-8c01-c7f229864550\") " pod="cert-manager/cert-manager-545d4d4674-x7qmw" Mar 18 18:13:40.057129 master-0 kubenswrapper[30278]: I0318 18:13:40.053266 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lfgh\" (UniqueName: \"kubernetes.io/projected/dfbdf43d-c5a2-4d91-8c01-c7f229864550-kube-api-access-7lfgh\") pod \"cert-manager-545d4d4674-x7qmw\" (UID: \"dfbdf43d-c5a2-4d91-8c01-c7f229864550\") " pod="cert-manager/cert-manager-545d4d4674-x7qmw" Mar 18 18:13:40.071035 master-0 
kubenswrapper[30278]: I0318 18:13:40.070979 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lfgh\" (UniqueName: \"kubernetes.io/projected/dfbdf43d-c5a2-4d91-8c01-c7f229864550-kube-api-access-7lfgh\") pod \"cert-manager-545d4d4674-x7qmw\" (UID: \"dfbdf43d-c5a2-4d91-8c01-c7f229864550\") " pod="cert-manager/cert-manager-545d4d4674-x7qmw" Mar 18 18:13:40.073325 master-0 kubenswrapper[30278]: I0318 18:13:40.073261 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfbdf43d-c5a2-4d91-8c01-c7f229864550-bound-sa-token\") pod \"cert-manager-545d4d4674-x7qmw\" (UID: \"dfbdf43d-c5a2-4d91-8c01-c7f229864550\") " pod="cert-manager/cert-manager-545d4d4674-x7qmw" Mar 18 18:13:40.187855 master-0 kubenswrapper[30278]: I0318 18:13:40.187699 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-x7qmw" Mar 18 18:13:41.750493 master-0 kubenswrapper[30278]: I0318 18:13:41.748507 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-x7qmw"] Mar 18 18:13:41.803851 master-0 kubenswrapper[30278]: I0318 18:13:41.803754 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-r8248" event={"ID":"0a3577af-da41-4598-b0e7-0ea2f10f4d00","Type":"ContainerStarted","Data":"39436e0f8cdeb011b1c81964be5f9dfe4f6f2e1cf6da1bfe36fbab0c39fc5a16"} Mar 18 18:13:41.807462 master-0 kubenswrapper[30278]: I0318 18:13:41.807189 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-8sskx" event={"ID":"ab08745b-f333-419d-87e1-00c073463a8a","Type":"ContainerStarted","Data":"d28debca1a021736f128ec29ecdc1d4b875c3086eafeac35909efedb239f9752"} Mar 18 18:13:41.807462 master-0 kubenswrapper[30278]: I0318 18:13:41.807408 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="cert-manager/cert-manager-webhook-6888856db4-8sskx" Mar 18 18:13:41.813530 master-0 kubenswrapper[30278]: I0318 18:13:41.813399 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-x7qmw" event={"ID":"dfbdf43d-c5a2-4d91-8c01-c7f229864550","Type":"ContainerStarted","Data":"17739d1af51305d912e28917416b36a349f8112fd7c564edcb1867c5135a30a0"} Mar 18 18:13:41.821911 master-0 kubenswrapper[30278]: I0318 18:13:41.820780 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-fbcfc585b-zpr69" event={"ID":"2bcbbd66-54ea-45ed-bee1-49ff8fc4c132","Type":"ContainerStarted","Data":"51160bbc31819cb408f85ad9fa35ff0d61ece55de771cc45d51522499cf6ed47"} Mar 18 18:13:41.821911 master-0 kubenswrapper[30278]: I0318 18:13:41.821114 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:41.830362 master-0 kubenswrapper[30278]: I0318 18:13:41.823519 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-67lqt" event={"ID":"fbb33307-9b20-4372-a6ca-60473053b4e7","Type":"ContainerStarted","Data":"b00f0b01445d56fa6fbe27129c6d49724a6c375d5b882d485c4d02fa707a382f"} Mar 18 18:13:41.832386 master-0 kubenswrapper[30278]: I0318 18:13:41.832317 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl" event={"ID":"0c50322e-7236-4d94-a13b-098b28afbe97","Type":"ContainerStarted","Data":"f959ac64286baa67633d39582b206b5f76d8d7c8949f4dac6d9b5687ecb52dc5"} Mar 18 18:13:41.832518 master-0 kubenswrapper[30278]: I0318 18:13:41.832439 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5" podStartSLOduration=1.64911813 podStartE2EDuration="12.832416837s" podCreationTimestamp="2026-03-18 
18:13:29 +0000 UTC" firstStartedPulling="2026-03-18 18:13:30.103782802 +0000 UTC m=+779.270967397" lastFinishedPulling="2026-03-18 18:13:41.287081509 +0000 UTC m=+790.454266104" observedRunningTime="2026-03-18 18:13:41.830811593 +0000 UTC m=+790.997996198" watchObservedRunningTime="2026-03-18 18:13:41.832416837 +0000 UTC m=+790.999601432" Mar 18 18:13:41.897329 master-0 kubenswrapper[30278]: I0318 18:13:41.895405 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-85vsw" event={"ID":"114649d6-2e5d-4cfd-b5b7-94d92d0991ae","Type":"ContainerStarted","Data":"49bce670206019f1a952e278223b0d357c86c4c86cf81771e4b6ff0f988d26fb"} Mar 18 18:13:41.897329 master-0 kubenswrapper[30278]: I0318 18:13:41.896218 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-85vsw" Mar 18 18:13:41.897329 master-0 kubenswrapper[30278]: I0318 18:13:41.897053 30278 patch_prober.go:28] interesting pod/observability-operator-6dd7dd855f-85vsw container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.128.0.133:8081/healthz\": dial tcp 10.128.0.133:8081: connect: connection refused" start-of-body= Mar 18 18:13:41.897329 master-0 kubenswrapper[30278]: I0318 18:13:41.897114 30278 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-6dd7dd855f-85vsw" podUID="114649d6-2e5d-4cfd-b5b7-94d92d0991ae" containerName="operator" probeResult="failure" output="Get \"http://10.128.0.133:8081/healthz\": dial tcp 10.128.0.133:8081: connect: connection refused" Mar 18 18:13:41.912560 master-0 kubenswrapper[30278]: I0318 18:13:41.911226 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-8ff7d675-r8248" podStartSLOduration=2.440937499 podStartE2EDuration="13.911204337s" podCreationTimestamp="2026-03-18 18:13:28 +0000 UTC" 
firstStartedPulling="2026-03-18 18:13:29.798675231 +0000 UTC m=+778.965859826" lastFinishedPulling="2026-03-18 18:13:41.268942069 +0000 UTC m=+790.436126664" observedRunningTime="2026-03-18 18:13:41.878644487 +0000 UTC m=+791.045829082" watchObservedRunningTime="2026-03-18 18:13:41.911204337 +0000 UTC m=+791.078388932" Mar 18 18:13:41.965307 master-0 kubenswrapper[30278]: I0318 18:13:41.957783 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-8sskx" podStartSLOduration=2.3292707200000002 podStartE2EDuration="16.957744585s" podCreationTimestamp="2026-03-18 18:13:25 +0000 UTC" firstStartedPulling="2026-03-18 18:13:26.658191513 +0000 UTC m=+775.825376108" lastFinishedPulling="2026-03-18 18:13:41.286665378 +0000 UTC m=+790.453849973" observedRunningTime="2026-03-18 18:13:41.927619262 +0000 UTC m=+791.094803857" watchObservedRunningTime="2026-03-18 18:13:41.957744585 +0000 UTC m=+791.124929180" Mar 18 18:13:41.976114 master-0 kubenswrapper[30278]: I0318 18:13:41.972610 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-67lqt" podStartSLOduration=2.702083809 podStartE2EDuration="14.972584757s" podCreationTimestamp="2026-03-18 18:13:27 +0000 UTC" firstStartedPulling="2026-03-18 18:13:28.968761938 +0000 UTC m=+778.135946533" lastFinishedPulling="2026-03-18 18:13:41.239262886 +0000 UTC m=+790.406447481" observedRunningTime="2026-03-18 18:13:41.950177592 +0000 UTC m=+791.117362197" watchObservedRunningTime="2026-03-18 18:13:41.972584757 +0000 UTC m=+791.139769342" Mar 18 18:13:41.997991 master-0 kubenswrapper[30278]: I0318 18:13:41.997897 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-fbcfc585b-zpr69" podStartSLOduration=1.825928404 podStartE2EDuration="11.997873741s" podCreationTimestamp="2026-03-18 18:13:30 +0000 UTC" firstStartedPulling="2026-03-18 
18:13:31.118512403 +0000 UTC m=+780.285696998" lastFinishedPulling="2026-03-18 18:13:41.29045774 +0000 UTC m=+790.457642335" observedRunningTime="2026-03-18 18:13:41.996514204 +0000 UTC m=+791.163698799" watchObservedRunningTime="2026-03-18 18:13:41.997873741 +0000 UTC m=+791.165058326" Mar 18 18:13:42.056298 master-0 kubenswrapper[30278]: I0318 18:13:42.055087 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-4p2zl" podStartSLOduration=2.0774763529999998 podStartE2EDuration="13.055053867s" podCreationTimestamp="2026-03-18 18:13:29 +0000 UTC" firstStartedPulling="2026-03-18 18:13:30.308759575 +0000 UTC m=+779.475944170" lastFinishedPulling="2026-03-18 18:13:41.286337089 +0000 UTC m=+790.453521684" observedRunningTime="2026-03-18 18:13:42.031802579 +0000 UTC m=+791.198987164" watchObservedRunningTime="2026-03-18 18:13:42.055053867 +0000 UTC m=+791.222238452" Mar 18 18:13:42.097431 master-0 kubenswrapper[30278]: I0318 18:13:42.095205 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-6dd7dd855f-85vsw" podStartSLOduration=2.511443849 podStartE2EDuration="13.095031799s" podCreationTimestamp="2026-03-18 18:13:29 +0000 UTC" firstStartedPulling="2026-03-18 18:13:30.74605067 +0000 UTC m=+779.913235265" lastFinishedPulling="2026-03-18 18:13:41.32963862 +0000 UTC m=+790.496823215" observedRunningTime="2026-03-18 18:13:42.080087584 +0000 UTC m=+791.247272199" watchObservedRunningTime="2026-03-18 18:13:42.095031799 +0000 UTC m=+791.262216394" Mar 18 18:13:42.907212 master-0 kubenswrapper[30278]: I0318 18:13:42.907086 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-x7qmw" event={"ID":"dfbdf43d-c5a2-4d91-8c01-c7f229864550","Type":"ContainerStarted","Data":"9886c4a2ac99c5deefc0194226a02fa022ebf54016ebaaaed849be4283b2cea9"} Mar 18 18:13:42.910053 master-0 
kubenswrapper[30278]: I0318 18:13:42.909976 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7c74b8df45-g5np5" event={"ID":"d79d0eb2-fcc6-4cbd-b98f-add9bf5bc666","Type":"ContainerStarted","Data":"7db13628cf29b91564e8e8ce9d50a7ad59a0e080193b6953584473dcb887a1c1"} Mar 18 18:13:42.921300 master-0 kubenswrapper[30278]: I0318 18:13:42.919692 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-85vsw" Mar 18 18:13:42.953300 master-0 kubenswrapper[30278]: I0318 18:13:42.950630 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-x7qmw" podStartSLOduration=3.950597655 podStartE2EDuration="3.950597655s" podCreationTimestamp="2026-03-18 18:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:13:42.937645745 +0000 UTC m=+792.104830340" watchObservedRunningTime="2026-03-18 18:13:42.950597655 +0000 UTC m=+792.117782250" Mar 18 18:13:46.176171 master-0 kubenswrapper[30278]: I0318 18:13:46.176082 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-8sskx" Mar 18 18:13:48.031890 master-0 kubenswrapper[30278]: I0318 18:13:48.031816 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-848f479545-kv7v2" Mar 18 18:13:50.597789 master-0 kubenswrapper[30278]: I0318 18:13:50.597729 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-fbcfc585b-zpr69" Mar 18 18:13:56.417966 master-0 kubenswrapper[30278]: I0318 18:13:56.417028 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479"] Mar 18 18:13:56.423838 
master-0 kubenswrapper[30278]: I0318 18:13:56.420745 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479" Mar 18 18:13:56.436140 master-0 kubenswrapper[30278]: I0318 18:13:56.428635 30278 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 18 18:13:56.436140 master-0 kubenswrapper[30278]: I0318 18:13:56.434679 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479"] Mar 18 18:13:56.466516 master-0 kubenswrapper[30278]: I0318 18:13:56.466477 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-ztqqc"] Mar 18 18:13:56.470520 master-0 kubenswrapper[30278]: I0318 18:13:56.470478 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.477294 master-0 kubenswrapper[30278]: I0318 18:13:56.475700 30278 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 18 18:13:56.477294 master-0 kubenswrapper[30278]: I0318 18:13:56.476680 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 18 18:13:56.556434 master-0 kubenswrapper[30278]: I0318 18:13:56.556140 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c5c65977-8004-4434-8d99-7624d08d9b3a-metrics\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.556434 master-0 kubenswrapper[30278]: I0318 18:13:56.556229 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c5c65977-8004-4434-8d99-7624d08d9b3a-frr-conf\") pod \"frr-k8s-ztqqc\" (UID: 
\"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.556434 master-0 kubenswrapper[30278]: I0318 18:13:56.556262 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c5c65977-8004-4434-8d99-7624d08d9b3a-reloader\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.556434 master-0 kubenswrapper[30278]: I0318 18:13:56.556294 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c5c65977-8004-4434-8d99-7624d08d9b3a-frr-sockets\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.556434 master-0 kubenswrapper[30278]: I0318 18:13:56.556311 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c65977-8004-4434-8d99-7624d08d9b3a-metrics-certs\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.556791 master-0 kubenswrapper[30278]: I0318 18:13:56.556519 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pszn\" (UniqueName: \"kubernetes.io/projected/c5c65977-8004-4434-8d99-7624d08d9b3a-kube-api-access-6pszn\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.556791 master-0 kubenswrapper[30278]: I0318 18:13:56.556594 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9f2t\" (UniqueName: \"kubernetes.io/projected/efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9-kube-api-access-k9f2t\") pod 
\"frr-k8s-webhook-server-bcc4b6f68-g4479\" (UID: \"efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479" Mar 18 18:13:56.556791 master-0 kubenswrapper[30278]: I0318 18:13:56.556662 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c5c65977-8004-4434-8d99-7624d08d9b3a-frr-startup\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.556791 master-0 kubenswrapper[30278]: I0318 18:13:56.556706 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-g4479\" (UID: \"efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479" Mar 18 18:13:56.574895 master-0 kubenswrapper[30278]: I0318 18:13:56.574802 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-m67cm"] Mar 18 18:13:56.576167 master-0 kubenswrapper[30278]: I0318 18:13:56.576133 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-m67cm" Mar 18 18:13:56.584249 master-0 kubenswrapper[30278]: I0318 18:13:56.584196 30278 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 18 18:13:56.584469 master-0 kubenswrapper[30278]: I0318 18:13:56.584446 30278 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 18 18:13:56.584617 master-0 kubenswrapper[30278]: I0318 18:13:56.584594 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 18 18:13:56.626608 master-0 kubenswrapper[30278]: I0318 18:13:56.623613 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-skcb4"] Mar 18 18:13:56.628385 master-0 kubenswrapper[30278]: I0318 18:13:56.628322 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-skcb4" Mar 18 18:13:56.636086 master-0 kubenswrapper[30278]: I0318 18:13:56.636018 30278 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 18 18:13:56.649919 master-0 kubenswrapper[30278]: I0318 18:13:56.649821 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-skcb4"] Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: I0318 18:13:56.665669 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-metallb-excludel2\") pod \"speaker-m67cm\" (UID: \"8f8a9e5f-b9b7-4366-a778-1bf7177693c5\") " pod="metallb-system/speaker-m67cm" Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: I0318 18:13:56.665767 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/c5c65977-8004-4434-8d99-7624d08d9b3a-frr-conf\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: I0318 18:13:56.665803 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c5c65977-8004-4434-8d99-7624d08d9b3a-reloader\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: I0318 18:13:56.665823 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnjpd\" (UniqueName: \"kubernetes.io/projected/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-kube-api-access-xnjpd\") pod \"speaker-m67cm\" (UID: \"8f8a9e5f-b9b7-4366-a778-1bf7177693c5\") " pod="metallb-system/speaker-m67cm" Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: I0318 18:13:56.665850 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c5c65977-8004-4434-8d99-7624d08d9b3a-frr-sockets\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: I0318 18:13:56.665866 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c65977-8004-4434-8d99-7624d08d9b3a-metrics-certs\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: I0318 18:13:56.665910 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pszn\" (UniqueName: \"kubernetes.io/projected/c5c65977-8004-4434-8d99-7624d08d9b3a-kube-api-access-6pszn\") pod 
\"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: I0318 18:13:56.665941 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9f2t\" (UniqueName: \"kubernetes.io/projected/efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9-kube-api-access-k9f2t\") pod \"frr-k8s-webhook-server-bcc4b6f68-g4479\" (UID: \"efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479" Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: I0318 18:13:56.665972 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c5c65977-8004-4434-8d99-7624d08d9b3a-frr-startup\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: I0318 18:13:56.665996 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-g4479\" (UID: \"efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479" Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: I0318 18:13:56.666028 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c5c65977-8004-4434-8d99-7624d08d9b3a-metrics\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: I0318 18:13:56.666047 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-metrics-certs\") pod \"speaker-m67cm\" (UID: 
\"8f8a9e5f-b9b7-4366-a778-1bf7177693c5\") " pod="metallb-system/speaker-m67cm" Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: I0318 18:13:56.666068 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-memberlist\") pod \"speaker-m67cm\" (UID: \"8f8a9e5f-b9b7-4366-a778-1bf7177693c5\") " pod="metallb-system/speaker-m67cm" Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: E0318 18:13:56.666573 30278 secret.go:189] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: E0318 18:13:56.666661 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5c65977-8004-4434-8d99-7624d08d9b3a-metrics-certs podName:c5c65977-8004-4434-8d99-7624d08d9b3a nodeName:}" failed. No retries permitted until 2026-03-18 18:13:57.166637325 +0000 UTC m=+806.333821920 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c5c65977-8004-4434-8d99-7624d08d9b3a-metrics-certs") pod "frr-k8s-ztqqc" (UID: "c5c65977-8004-4434-8d99-7624d08d9b3a") : secret "frr-k8s-certs-secret" not found Mar 18 18:13:56.667332 master-0 kubenswrapper[30278]: I0318 18:13:56.666734 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c5c65977-8004-4434-8d99-7624d08d9b3a-frr-conf\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.668014 master-0 kubenswrapper[30278]: I0318 18:13:56.667718 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c5c65977-8004-4434-8d99-7624d08d9b3a-frr-startup\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.670862 master-0 kubenswrapper[30278]: I0318 18:13:56.670704 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c5c65977-8004-4434-8d99-7624d08d9b3a-reloader\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.670996 master-0 kubenswrapper[30278]: I0318 18:13:56.670903 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c5c65977-8004-4434-8d99-7624d08d9b3a-metrics\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.682333 master-0 kubenswrapper[30278]: I0318 18:13:56.673850 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-g4479\" (UID: 
\"efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479" Mar 18 18:13:56.682333 master-0 kubenswrapper[30278]: I0318 18:13:56.674074 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c5c65977-8004-4434-8d99-7624d08d9b3a-frr-sockets\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.709740 master-0 kubenswrapper[30278]: I0318 18:13:56.705893 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pszn\" (UniqueName: \"kubernetes.io/projected/c5c65977-8004-4434-8d99-7624d08d9b3a-kube-api-access-6pszn\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:56.715845 master-0 kubenswrapper[30278]: I0318 18:13:56.715090 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9f2t\" (UniqueName: \"kubernetes.io/projected/efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9-kube-api-access-k9f2t\") pod \"frr-k8s-webhook-server-bcc4b6f68-g4479\" (UID: \"efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479" Mar 18 18:13:56.770722 master-0 kubenswrapper[30278]: I0318 18:13:56.768756 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0326959b-b1d6-42ef-9fe5-bb33aa37df40-metrics-certs\") pod \"controller-7bb4cc7c98-skcb4\" (UID: \"0326959b-b1d6-42ef-9fe5-bb33aa37df40\") " pod="metallb-system/controller-7bb4cc7c98-skcb4" Mar 18 18:13:56.770722 master-0 kubenswrapper[30278]: I0318 18:13:56.768857 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0326959b-b1d6-42ef-9fe5-bb33aa37df40-cert\") pod 
\"controller-7bb4cc7c98-skcb4\" (UID: \"0326959b-b1d6-42ef-9fe5-bb33aa37df40\") " pod="metallb-system/controller-7bb4cc7c98-skcb4" Mar 18 18:13:56.770722 master-0 kubenswrapper[30278]: I0318 18:13:56.768883 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-metrics-certs\") pod \"speaker-m67cm\" (UID: \"8f8a9e5f-b9b7-4366-a778-1bf7177693c5\") " pod="metallb-system/speaker-m67cm" Mar 18 18:13:56.770722 master-0 kubenswrapper[30278]: I0318 18:13:56.768903 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-memberlist\") pod \"speaker-m67cm\" (UID: \"8f8a9e5f-b9b7-4366-a778-1bf7177693c5\") " pod="metallb-system/speaker-m67cm" Mar 18 18:13:56.770722 master-0 kubenswrapper[30278]: I0318 18:13:56.768921 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt5q6\" (UniqueName: \"kubernetes.io/projected/0326959b-b1d6-42ef-9fe5-bb33aa37df40-kube-api-access-nt5q6\") pod \"controller-7bb4cc7c98-skcb4\" (UID: \"0326959b-b1d6-42ef-9fe5-bb33aa37df40\") " pod="metallb-system/controller-7bb4cc7c98-skcb4" Mar 18 18:13:56.770722 master-0 kubenswrapper[30278]: I0318 18:13:56.768957 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-metallb-excludel2\") pod \"speaker-m67cm\" (UID: \"8f8a9e5f-b9b7-4366-a778-1bf7177693c5\") " pod="metallb-system/speaker-m67cm" Mar 18 18:13:56.770722 master-0 kubenswrapper[30278]: I0318 18:13:56.768985 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnjpd\" (UniqueName: \"kubernetes.io/projected/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-kube-api-access-xnjpd\") pod 
\"speaker-m67cm\" (UID: \"8f8a9e5f-b9b7-4366-a778-1bf7177693c5\") " pod="metallb-system/speaker-m67cm" Mar 18 18:13:56.770722 master-0 kubenswrapper[30278]: E0318 18:13:56.770555 30278 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 18 18:13:56.770722 master-0 kubenswrapper[30278]: E0318 18:13:56.770636 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-memberlist podName:8f8a9e5f-b9b7-4366-a778-1bf7177693c5 nodeName:}" failed. No retries permitted until 2026-03-18 18:13:57.270618627 +0000 UTC m=+806.437803222 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-memberlist") pod "speaker-m67cm" (UID: "8f8a9e5f-b9b7-4366-a778-1bf7177693c5") : secret "metallb-memberlist" not found Mar 18 18:13:56.773079 master-0 kubenswrapper[30278]: I0318 18:13:56.771569 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-metallb-excludel2\") pod \"speaker-m67cm\" (UID: \"8f8a9e5f-b9b7-4366-a778-1bf7177693c5\") " pod="metallb-system/speaker-m67cm" Mar 18 18:13:56.775869 master-0 kubenswrapper[30278]: I0318 18:13:56.775760 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-metrics-certs\") pod \"speaker-m67cm\" (UID: \"8f8a9e5f-b9b7-4366-a778-1bf7177693c5\") " pod="metallb-system/speaker-m67cm" Mar 18 18:13:56.778596 master-0 kubenswrapper[30278]: I0318 18:13:56.777725 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479" Mar 18 18:13:56.798076 master-0 kubenswrapper[30278]: I0318 18:13:56.798011 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnjpd\" (UniqueName: \"kubernetes.io/projected/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-kube-api-access-xnjpd\") pod \"speaker-m67cm\" (UID: \"8f8a9e5f-b9b7-4366-a778-1bf7177693c5\") " pod="metallb-system/speaker-m67cm" Mar 18 18:13:56.879123 master-0 kubenswrapper[30278]: I0318 18:13:56.870029 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0326959b-b1d6-42ef-9fe5-bb33aa37df40-cert\") pod \"controller-7bb4cc7c98-skcb4\" (UID: \"0326959b-b1d6-42ef-9fe5-bb33aa37df40\") " pod="metallb-system/controller-7bb4cc7c98-skcb4" Mar 18 18:13:56.879123 master-0 kubenswrapper[30278]: I0318 18:13:56.870114 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt5q6\" (UniqueName: \"kubernetes.io/projected/0326959b-b1d6-42ef-9fe5-bb33aa37df40-kube-api-access-nt5q6\") pod \"controller-7bb4cc7c98-skcb4\" (UID: \"0326959b-b1d6-42ef-9fe5-bb33aa37df40\") " pod="metallb-system/controller-7bb4cc7c98-skcb4" Mar 18 18:13:56.879123 master-0 kubenswrapper[30278]: I0318 18:13:56.870218 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0326959b-b1d6-42ef-9fe5-bb33aa37df40-metrics-certs\") pod \"controller-7bb4cc7c98-skcb4\" (UID: \"0326959b-b1d6-42ef-9fe5-bb33aa37df40\") " pod="metallb-system/controller-7bb4cc7c98-skcb4" Mar 18 18:13:56.879123 master-0 kubenswrapper[30278]: I0318 18:13:56.874236 30278 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 18 18:13:56.879123 master-0 kubenswrapper[30278]: I0318 18:13:56.875249 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0326959b-b1d6-42ef-9fe5-bb33aa37df40-metrics-certs\") pod \"controller-7bb4cc7c98-skcb4\" (UID: \"0326959b-b1d6-42ef-9fe5-bb33aa37df40\") " pod="metallb-system/controller-7bb4cc7c98-skcb4" Mar 18 18:13:56.888357 master-0 kubenswrapper[30278]: I0318 18:13:56.887729 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0326959b-b1d6-42ef-9fe5-bb33aa37df40-cert\") pod \"controller-7bb4cc7c98-skcb4\" (UID: \"0326959b-b1d6-42ef-9fe5-bb33aa37df40\") " pod="metallb-system/controller-7bb4cc7c98-skcb4" Mar 18 18:13:56.897390 master-0 kubenswrapper[30278]: I0318 18:13:56.897333 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt5q6\" (UniqueName: \"kubernetes.io/projected/0326959b-b1d6-42ef-9fe5-bb33aa37df40-kube-api-access-nt5q6\") pod \"controller-7bb4cc7c98-skcb4\" (UID: \"0326959b-b1d6-42ef-9fe5-bb33aa37df40\") " pod="metallb-system/controller-7bb4cc7c98-skcb4" Mar 18 18:13:57.051855 master-0 kubenswrapper[30278]: I0318 18:13:57.051630 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-skcb4" Mar 18 18:13:57.134067 master-0 kubenswrapper[30278]: I0318 18:13:57.131305 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479"] Mar 18 18:13:57.190957 master-0 kubenswrapper[30278]: I0318 18:13:57.190002 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c65977-8004-4434-8d99-7624d08d9b3a-metrics-certs\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:57.203689 master-0 kubenswrapper[30278]: I0318 18:13:57.200801 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c5c65977-8004-4434-8d99-7624d08d9b3a-metrics-certs\") pod \"frr-k8s-ztqqc\" (UID: \"c5c65977-8004-4434-8d99-7624d08d9b3a\") " pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:57.302313 master-0 kubenswrapper[30278]: I0318 18:13:57.302243 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-memberlist\") pod \"speaker-m67cm\" (UID: \"8f8a9e5f-b9b7-4366-a778-1bf7177693c5\") " pod="metallb-system/speaker-m67cm" Mar 18 18:13:57.302538 master-0 kubenswrapper[30278]: E0318 18:13:57.302460 30278 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 18 18:13:57.302538 master-0 kubenswrapper[30278]: E0318 18:13:57.302528 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-memberlist podName:8f8a9e5f-b9b7-4366-a778-1bf7177693c5 nodeName:}" failed. No retries permitted until 2026-03-18 18:13:58.302509321 +0000 UTC m=+807.469693916 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-memberlist") pod "speaker-m67cm" (UID: "8f8a9e5f-b9b7-4366-a778-1bf7177693c5") : secret "metallb-memberlist" not found Mar 18 18:13:57.453037 master-0 kubenswrapper[30278]: I0318 18:13:57.446816 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:13:57.580187 master-0 kubenswrapper[30278]: I0318 18:13:57.580142 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-skcb4"] Mar 18 18:13:58.093861 master-0 kubenswrapper[30278]: I0318 18:13:58.093798 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479" event={"ID":"efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9","Type":"ContainerStarted","Data":"3fbf5d05319f0386d0010eeac6d695f980d64a2ec54adba068e8db60e238af4f"} Mar 18 18:13:58.095239 master-0 kubenswrapper[30278]: I0318 18:13:58.095213 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-skcb4" event={"ID":"0326959b-b1d6-42ef-9fe5-bb33aa37df40","Type":"ContainerStarted","Data":"1b73ec39623f2749571b53f1c7dd35f8ace8757b8c089908ee6c1eb6774062ce"} Mar 18 18:13:58.095332 master-0 kubenswrapper[30278]: I0318 18:13:58.095241 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-skcb4" event={"ID":"0326959b-b1d6-42ef-9fe5-bb33aa37df40","Type":"ContainerStarted","Data":"b7ce63b5dbc408890dcf5a14ca74ed948d97f6170867ece9139a7d9f3969833a"} Mar 18 18:13:58.096344 master-0 kubenswrapper[30278]: I0318 18:13:58.096317 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ztqqc" event={"ID":"c5c65977-8004-4434-8d99-7624d08d9b3a","Type":"ContainerStarted","Data":"eb0d742bde06d823f9de0db0e4c43b6f98b6409b4f2c37b77d79bdd1e7129924"} Mar 18 18:13:58.353375 master-0 kubenswrapper[30278]: 
I0318 18:13:58.353222 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-memberlist\") pod \"speaker-m67cm\" (UID: \"8f8a9e5f-b9b7-4366-a778-1bf7177693c5\") " pod="metallb-system/speaker-m67cm" Mar 18 18:13:58.356581 master-0 kubenswrapper[30278]: I0318 18:13:58.356528 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8f8a9e5f-b9b7-4366-a778-1bf7177693c5-memberlist\") pod \"speaker-m67cm\" (UID: \"8f8a9e5f-b9b7-4366-a778-1bf7177693c5\") " pod="metallb-system/speaker-m67cm" Mar 18 18:13:58.432153 master-0 kubenswrapper[30278]: I0318 18:13:58.432095 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-m67cm" Mar 18 18:13:58.434778 master-0 kubenswrapper[30278]: I0318 18:13:58.434726 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-zc4ph"] Mar 18 18:13:58.436198 master-0 kubenswrapper[30278]: I0318 18:13:58.436173 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-zc4ph" Mar 18 18:13:58.453235 master-0 kubenswrapper[30278]: I0318 18:13:58.453178 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-zc4ph"] Mar 18 18:13:58.464723 master-0 kubenswrapper[30278]: I0318 18:13:58.463855 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmmg\" (UniqueName: \"kubernetes.io/projected/185bb037-2ee1-460c-b291-beb7bf78bb99-kube-api-access-pdmmg\") pod \"nmstate-metrics-9b8c8685d-zc4ph\" (UID: \"185bb037-2ee1-460c-b291-beb7bf78bb99\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-zc4ph" Mar 18 18:13:58.494049 master-0 kubenswrapper[30278]: I0318 18:13:58.490967 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-9kcdn"] Mar 18 18:13:58.494049 master-0 kubenswrapper[30278]: I0318 18:13:58.492096 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9kcdn" Mar 18 18:13:58.545266 master-0 kubenswrapper[30278]: I0318 18:13:58.545113 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5"] Mar 18 18:13:58.546509 master-0 kubenswrapper[30278]: I0318 18:13:58.546320 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5" Mar 18 18:13:58.549932 master-0 kubenswrapper[30278]: I0318 18:13:58.549886 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 18 18:13:58.567373 master-0 kubenswrapper[30278]: I0318 18:13:58.566237 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmmg\" (UniqueName: \"kubernetes.io/projected/185bb037-2ee1-460c-b291-beb7bf78bb99-kube-api-access-pdmmg\") pod \"nmstate-metrics-9b8c8685d-zc4ph\" (UID: \"185bb037-2ee1-460c-b291-beb7bf78bb99\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-zc4ph" Mar 18 18:13:58.567373 master-0 kubenswrapper[30278]: I0318 18:13:58.566327 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhfm6\" (UniqueName: \"kubernetes.io/projected/56c34c5b-17a3-4109-b2fa-27d0db19d95c-kube-api-access-qhfm6\") pod \"nmstate-webhook-5f558f5558-dlkh5\" (UID: \"56c34c5b-17a3-4109-b2fa-27d0db19d95c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5" Mar 18 18:13:58.567373 master-0 kubenswrapper[30278]: I0318 18:13:58.566375 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5d513b42-f68d-4b03-b420-71e8e8cf0d75-nmstate-lock\") pod \"nmstate-handler-9kcdn\" (UID: \"5d513b42-f68d-4b03-b420-71e8e8cf0d75\") " pod="openshift-nmstate/nmstate-handler-9kcdn" Mar 18 18:13:58.567373 master-0 kubenswrapper[30278]: I0318 18:13:58.566487 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/56c34c5b-17a3-4109-b2fa-27d0db19d95c-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-dlkh5\" (UID: \"56c34c5b-17a3-4109-b2fa-27d0db19d95c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5" 
Mar 18 18:13:58.567373 master-0 kubenswrapper[30278]: I0318 18:13:58.566541 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbc7b\" (UniqueName: \"kubernetes.io/projected/5d513b42-f68d-4b03-b420-71e8e8cf0d75-kube-api-access-lbc7b\") pod \"nmstate-handler-9kcdn\" (UID: \"5d513b42-f68d-4b03-b420-71e8e8cf0d75\") " pod="openshift-nmstate/nmstate-handler-9kcdn"
Mar 18 18:13:58.567373 master-0 kubenswrapper[30278]: I0318 18:13:58.566570 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5d513b42-f68d-4b03-b420-71e8e8cf0d75-ovs-socket\") pod \"nmstate-handler-9kcdn\" (UID: \"5d513b42-f68d-4b03-b420-71e8e8cf0d75\") " pod="openshift-nmstate/nmstate-handler-9kcdn"
Mar 18 18:13:58.567373 master-0 kubenswrapper[30278]: I0318 18:13:58.566593 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5d513b42-f68d-4b03-b420-71e8e8cf0d75-dbus-socket\") pod \"nmstate-handler-9kcdn\" (UID: \"5d513b42-f68d-4b03-b420-71e8e8cf0d75\") " pod="openshift-nmstate/nmstate-handler-9kcdn"
Mar 18 18:13:58.584080 master-0 kubenswrapper[30278]: I0318 18:13:58.584017 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5"]
Mar 18 18:13:58.626954 master-0 kubenswrapper[30278]: I0318 18:13:58.626165 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmmg\" (UniqueName: \"kubernetes.io/projected/185bb037-2ee1-460c-b291-beb7bf78bb99-kube-api-access-pdmmg\") pod \"nmstate-metrics-9b8c8685d-zc4ph\" (UID: \"185bb037-2ee1-460c-b291-beb7bf78bb99\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-zc4ph"
Mar 18 18:13:58.682300 master-0 kubenswrapper[30278]: I0318 18:13:58.676457 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/56c34c5b-17a3-4109-b2fa-27d0db19d95c-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-dlkh5\" (UID: \"56c34c5b-17a3-4109-b2fa-27d0db19d95c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5"
Mar 18 18:13:58.682300 master-0 kubenswrapper[30278]: I0318 18:13:58.676536 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbc7b\" (UniqueName: \"kubernetes.io/projected/5d513b42-f68d-4b03-b420-71e8e8cf0d75-kube-api-access-lbc7b\") pod \"nmstate-handler-9kcdn\" (UID: \"5d513b42-f68d-4b03-b420-71e8e8cf0d75\") " pod="openshift-nmstate/nmstate-handler-9kcdn"
Mar 18 18:13:58.682300 master-0 kubenswrapper[30278]: I0318 18:13:58.676563 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5d513b42-f68d-4b03-b420-71e8e8cf0d75-ovs-socket\") pod \"nmstate-handler-9kcdn\" (UID: \"5d513b42-f68d-4b03-b420-71e8e8cf0d75\") " pod="openshift-nmstate/nmstate-handler-9kcdn"
Mar 18 18:13:58.682300 master-0 kubenswrapper[30278]: I0318 18:13:58.676587 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5d513b42-f68d-4b03-b420-71e8e8cf0d75-dbus-socket\") pod \"nmstate-handler-9kcdn\" (UID: \"5d513b42-f68d-4b03-b420-71e8e8cf0d75\") " pod="openshift-nmstate/nmstate-handler-9kcdn"
Mar 18 18:13:58.682300 master-0 kubenswrapper[30278]: I0318 18:13:58.676612 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhfm6\" (UniqueName: \"kubernetes.io/projected/56c34c5b-17a3-4109-b2fa-27d0db19d95c-kube-api-access-qhfm6\") pod \"nmstate-webhook-5f558f5558-dlkh5\" (UID: \"56c34c5b-17a3-4109-b2fa-27d0db19d95c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5"
Mar 18 18:13:58.682300 master-0 kubenswrapper[30278]: I0318 18:13:58.676630 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5d513b42-f68d-4b03-b420-71e8e8cf0d75-nmstate-lock\") pod \"nmstate-handler-9kcdn\" (UID: \"5d513b42-f68d-4b03-b420-71e8e8cf0d75\") " pod="openshift-nmstate/nmstate-handler-9kcdn"
Mar 18 18:13:58.682300 master-0 kubenswrapper[30278]: I0318 18:13:58.676913 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5d513b42-f68d-4b03-b420-71e8e8cf0d75-nmstate-lock\") pod \"nmstate-handler-9kcdn\" (UID: \"5d513b42-f68d-4b03-b420-71e8e8cf0d75\") " pod="openshift-nmstate/nmstate-handler-9kcdn"
Mar 18 18:13:58.682300 master-0 kubenswrapper[30278]: I0318 18:13:58.677211 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5d513b42-f68d-4b03-b420-71e8e8cf0d75-ovs-socket\") pod \"nmstate-handler-9kcdn\" (UID: \"5d513b42-f68d-4b03-b420-71e8e8cf0d75\") " pod="openshift-nmstate/nmstate-handler-9kcdn"
Mar 18 18:13:58.682300 master-0 kubenswrapper[30278]: I0318 18:13:58.677262 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5d513b42-f68d-4b03-b420-71e8e8cf0d75-dbus-socket\") pod \"nmstate-handler-9kcdn\" (UID: \"5d513b42-f68d-4b03-b420-71e8e8cf0d75\") " pod="openshift-nmstate/nmstate-handler-9kcdn"
Mar 18 18:13:58.693295 master-0 kubenswrapper[30278]: I0318 18:13:58.686255 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/56c34c5b-17a3-4109-b2fa-27d0db19d95c-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-dlkh5\" (UID: \"56c34c5b-17a3-4109-b2fa-27d0db19d95c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5"
Mar 18 18:13:58.734445 master-0 kubenswrapper[30278]: I0318 18:13:58.723561 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbc7b\" (UniqueName: \"kubernetes.io/projected/5d513b42-f68d-4b03-b420-71e8e8cf0d75-kube-api-access-lbc7b\") pod \"nmstate-handler-9kcdn\" (UID: \"5d513b42-f68d-4b03-b420-71e8e8cf0d75\") " pod="openshift-nmstate/nmstate-handler-9kcdn"
Mar 18 18:13:58.734445 master-0 kubenswrapper[30278]: I0318 18:13:58.732617 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhfm6\" (UniqueName: \"kubernetes.io/projected/56c34c5b-17a3-4109-b2fa-27d0db19d95c-kube-api-access-qhfm6\") pod \"nmstate-webhook-5f558f5558-dlkh5\" (UID: \"56c34c5b-17a3-4109-b2fa-27d0db19d95c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5"
Mar 18 18:13:58.773300 master-0 kubenswrapper[30278]: I0318 18:13:58.767719 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf"]
Mar 18 18:13:58.773300 master-0 kubenswrapper[30278]: I0318 18:13:58.769113 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf"
Mar 18 18:13:58.783208 master-0 kubenswrapper[30278]: I0318 18:13:58.776209 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Mar 18 18:13:58.783208 master-0 kubenswrapper[30278]: I0318 18:13:58.776433 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Mar 18 18:13:58.804509 master-0 kubenswrapper[30278]: I0318 18:13:58.798118 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf"]
Mar 18 18:13:58.835730 master-0 kubenswrapper[30278]: I0318 18:13:58.835672 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-zc4ph"
Mar 18 18:13:58.884627 master-0 kubenswrapper[30278]: I0318 18:13:58.884578 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/dc688679-6ccb-42d6-aa9b-620284991fbe-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-49xpf\" (UID: \"dc688679-6ccb-42d6-aa9b-620284991fbe\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf"
Mar 18 18:13:58.884780 master-0 kubenswrapper[30278]: I0318 18:13:58.884652 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6b5p\" (UniqueName: \"kubernetes.io/projected/dc688679-6ccb-42d6-aa9b-620284991fbe-kube-api-access-t6b5p\") pod \"nmstate-console-plugin-86f58fcf4-49xpf\" (UID: \"dc688679-6ccb-42d6-aa9b-620284991fbe\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf"
Mar 18 18:13:58.884780 master-0 kubenswrapper[30278]: I0318 18:13:58.884702 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc688679-6ccb-42d6-aa9b-620284991fbe-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-49xpf\" (UID: \"dc688679-6ccb-42d6-aa9b-620284991fbe\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf"
Mar 18 18:13:58.911764 master-0 kubenswrapper[30278]: I0318 18:13:58.911690 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9kcdn"
Mar 18 18:13:58.923734 master-0 kubenswrapper[30278]: I0318 18:13:58.923659 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5"
Mar 18 18:13:58.992081 master-0 kubenswrapper[30278]: I0318 18:13:58.988231 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6b5p\" (UniqueName: \"kubernetes.io/projected/dc688679-6ccb-42d6-aa9b-620284991fbe-kube-api-access-t6b5p\") pod \"nmstate-console-plugin-86f58fcf4-49xpf\" (UID: \"dc688679-6ccb-42d6-aa9b-620284991fbe\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf"
Mar 18 18:13:58.992081 master-0 kubenswrapper[30278]: I0318 18:13:58.988311 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc688679-6ccb-42d6-aa9b-620284991fbe-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-49xpf\" (UID: \"dc688679-6ccb-42d6-aa9b-620284991fbe\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf"
Mar 18 18:13:58.992081 master-0 kubenswrapper[30278]: I0318 18:13:58.988413 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/dc688679-6ccb-42d6-aa9b-620284991fbe-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-49xpf\" (UID: \"dc688679-6ccb-42d6-aa9b-620284991fbe\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf"
Mar 18 18:13:58.992081 master-0 kubenswrapper[30278]: I0318 18:13:58.989331 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/dc688679-6ccb-42d6-aa9b-620284991fbe-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-49xpf\" (UID: \"dc688679-6ccb-42d6-aa9b-620284991fbe\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf"
Mar 18 18:13:58.992081 master-0 kubenswrapper[30278]: E0318 18:13:58.989430 30278 secret.go:189] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Mar 18 18:13:58.992081 master-0 kubenswrapper[30278]: E0318 18:13:58.989467 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc688679-6ccb-42d6-aa9b-620284991fbe-plugin-serving-cert podName:dc688679-6ccb-42d6-aa9b-620284991fbe nodeName:}" failed. No retries permitted until 2026-03-18 18:13:59.489454231 +0000 UTC m=+808.656638826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/dc688679-6ccb-42d6-aa9b-620284991fbe-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-49xpf" (UID: "dc688679-6ccb-42d6-aa9b-620284991fbe") : secret "plugin-serving-cert" not found
Mar 18 18:13:59.073026 master-0 kubenswrapper[30278]: W0318 18:13:59.063487 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d513b42_f68d_4b03_b420_71e8e8cf0d75.slice/crio-37f023599a4cccc2679d1d35f6893b7b2534cb817b54d2312b292fe8d4f4a8e9 WatchSource:0}: Error finding container 37f023599a4cccc2679d1d35f6893b7b2534cb817b54d2312b292fe8d4f4a8e9: Status 404 returned error can't find the container with id 37f023599a4cccc2679d1d35f6893b7b2534cb817b54d2312b292fe8d4f4a8e9
Mar 18 18:13:59.107415 master-0 kubenswrapper[30278]: I0318 18:13:59.105242 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6b5p\" (UniqueName: \"kubernetes.io/projected/dc688679-6ccb-42d6-aa9b-620284991fbe-kube-api-access-t6b5p\") pod \"nmstate-console-plugin-86f58fcf4-49xpf\" (UID: \"dc688679-6ccb-42d6-aa9b-620284991fbe\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf"
Mar 18 18:13:59.215313 master-0 kubenswrapper[30278]: I0318 18:13:59.214248 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9kcdn" event={"ID":"5d513b42-f68d-4b03-b420-71e8e8cf0d75","Type":"ContainerStarted","Data":"37f023599a4cccc2679d1d35f6893b7b2534cb817b54d2312b292fe8d4f4a8e9"}
Mar 18 18:13:59.271300 master-0 kubenswrapper[30278]: I0318 18:13:59.253544 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m67cm" event={"ID":"8f8a9e5f-b9b7-4366-a778-1bf7177693c5","Type":"ContainerStarted","Data":"6ff293b86e9bd79abcd7c1a88cea26cffb3ad092e28dee45cdbfce96d29949ee"}
Mar 18 18:13:59.271300 master-0 kubenswrapper[30278]: I0318 18:13:59.253605 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m67cm" event={"ID":"8f8a9e5f-b9b7-4366-a778-1bf7177693c5","Type":"ContainerStarted","Data":"a636e057a022f26f669074d25f408e4966d82f7141443280d46a3da92f26340c"}
Mar 18 18:13:59.324377 master-0 kubenswrapper[30278]: I0318 18:13:59.317240 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f76dd88c-h9rrg"]
Mar 18 18:13:59.324377 master-0 kubenswrapper[30278]: I0318 18:13:59.318694 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.380312 master-0 kubenswrapper[30278]: I0318 18:13:59.377746 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f76dd88c-h9rrg"]
Mar 18 18:13:59.418348 master-0 kubenswrapper[30278]: I0318 18:13:59.413225 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/da98779c-7834-4e68-b018-40d11d173a55-console-serving-cert\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.418348 master-0 kubenswrapper[30278]: I0318 18:13:59.413298 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/da98779c-7834-4e68-b018-40d11d173a55-console-oauth-config\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.418348 master-0 kubenswrapper[30278]: I0318 18:13:59.413391 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/da98779c-7834-4e68-b018-40d11d173a55-service-ca\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.418348 master-0 kubenswrapper[30278]: I0318 18:13:59.413675 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da98779c-7834-4e68-b018-40d11d173a55-trusted-ca-bundle\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.418348 master-0 kubenswrapper[30278]: I0318 18:13:59.413758 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/da98779c-7834-4e68-b018-40d11d173a55-oauth-serving-cert\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.418348 master-0 kubenswrapper[30278]: I0318 18:13:59.413778 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdf7f\" (UniqueName: \"kubernetes.io/projected/da98779c-7834-4e68-b018-40d11d173a55-kube-api-access-xdf7f\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.418348 master-0 kubenswrapper[30278]: I0318 18:13:59.413838 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/da98779c-7834-4e68-b018-40d11d173a55-console-config\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.461317 master-0 kubenswrapper[30278]: I0318 18:13:59.452928 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-zc4ph"]
Mar 18 18:13:59.520240 master-0 kubenswrapper[30278]: I0318 18:13:59.520177 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/da98779c-7834-4e68-b018-40d11d173a55-console-serving-cert\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.520568 master-0 kubenswrapper[30278]: I0318 18:13:59.520546 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/da98779c-7834-4e68-b018-40d11d173a55-console-oauth-config\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.520685 master-0 kubenswrapper[30278]: I0318 18:13:59.520669 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/da98779c-7834-4e68-b018-40d11d173a55-service-ca\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.520853 master-0 kubenswrapper[30278]: I0318 18:13:59.520837 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da98779c-7834-4e68-b018-40d11d173a55-trusted-ca-bundle\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.520967 master-0 kubenswrapper[30278]: I0318 18:13:59.520953 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/da98779c-7834-4e68-b018-40d11d173a55-oauth-serving-cert\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.521047 master-0 kubenswrapper[30278]: I0318 18:13:59.521035 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdf7f\" (UniqueName: \"kubernetes.io/projected/da98779c-7834-4e68-b018-40d11d173a55-kube-api-access-xdf7f\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.521137 master-0 kubenswrapper[30278]: I0318 18:13:59.521125 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc688679-6ccb-42d6-aa9b-620284991fbe-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-49xpf\" (UID: \"dc688679-6ccb-42d6-aa9b-620284991fbe\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf"
Mar 18 18:13:59.521212 master-0 kubenswrapper[30278]: I0318 18:13:59.521200 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/da98779c-7834-4e68-b018-40d11d173a55-console-config\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.521493 master-0 kubenswrapper[30278]: I0318 18:13:59.521449 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/da98779c-7834-4e68-b018-40d11d173a55-service-ca\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.529317 master-0 kubenswrapper[30278]: I0318 18:13:59.522135 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/da98779c-7834-4e68-b018-40d11d173a55-console-config\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.529894 master-0 kubenswrapper[30278]: I0318 18:13:59.522650 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/da98779c-7834-4e68-b018-40d11d173a55-oauth-serving-cert\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.529977 master-0 kubenswrapper[30278]: I0318 18:13:59.523244 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da98779c-7834-4e68-b018-40d11d173a55-trusted-ca-bundle\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.530055 master-0 kubenswrapper[30278]: I0318 18:13:59.524914 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/da98779c-7834-4e68-b018-40d11d173a55-console-serving-cert\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.540301 master-0 kubenswrapper[30278]: I0318 18:13:59.537016 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/da98779c-7834-4e68-b018-40d11d173a55-console-oauth-config\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.546292 master-0 kubenswrapper[30278]: I0318 18:13:59.542881 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/dc688679-6ccb-42d6-aa9b-620284991fbe-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-49xpf\" (UID: \"dc688679-6ccb-42d6-aa9b-620284991fbe\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf"
Mar 18 18:13:59.574159 master-0 kubenswrapper[30278]: I0318 18:13:59.569541 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdf7f\" (UniqueName: \"kubernetes.io/projected/da98779c-7834-4e68-b018-40d11d173a55-kube-api-access-xdf7f\") pod \"console-f76dd88c-h9rrg\" (UID: \"da98779c-7834-4e68-b018-40d11d173a55\") " pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.791389 master-0 kubenswrapper[30278]: I0318 18:13:59.789688 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f76dd88c-h9rrg"
Mar 18 18:13:59.791389 master-0 kubenswrapper[30278]: I0318 18:13:59.789704 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf"
Mar 18 18:14:00.010568 master-0 kubenswrapper[30278]: I0318 18:14:00.009832 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5"]
Mar 18 18:14:00.294668 master-0 kubenswrapper[30278]: I0318 18:14:00.294610 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5" event={"ID":"56c34c5b-17a3-4109-b2fa-27d0db19d95c","Type":"ContainerStarted","Data":"554892a49c7dbe64588568b4b78634d81e04e5a61044d3282f18d32021552326"}
Mar 18 18:14:00.300871 master-0 kubenswrapper[30278]: I0318 18:14:00.300821 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-zc4ph" event={"ID":"185bb037-2ee1-460c-b291-beb7bf78bb99","Type":"ContainerStarted","Data":"da0d0e42eb200d1cc42d22344db927ad06f9b6b1ca4cfb834e03e9a1bfdd0986"}
Mar 18 18:14:00.366973 master-0 kubenswrapper[30278]: I0318 18:14:00.366906 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf"]
Mar 18 18:14:00.470053 master-0 kubenswrapper[30278]: W0318 18:14:00.469995 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda98779c_7834_4e68_b018_40d11d173a55.slice/crio-e873eeba4805bc8d229e1ca40dfed9ab9719df1d29ca2154f99a1f7f31bd520e WatchSource:0}: Error finding container e873eeba4805bc8d229e1ca40dfed9ab9719df1d29ca2154f99a1f7f31bd520e: Status 404 returned error can't find the container with id e873eeba4805bc8d229e1ca40dfed9ab9719df1d29ca2154f99a1f7f31bd520e
Mar 18 18:14:00.471796 master-0 kubenswrapper[30278]: I0318 18:14:00.471663 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f76dd88c-h9rrg"]
Mar 18 18:14:01.341300 master-0 kubenswrapper[30278]: I0318 18:14:01.333853 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf" event={"ID":"dc688679-6ccb-42d6-aa9b-620284991fbe","Type":"ContainerStarted","Data":"e1ae30bd1fa225209d78f8cf08b17f2d93ae6e54222bd4863a996fc42c87bc69"}
Mar 18 18:14:01.345452 master-0 kubenswrapper[30278]: I0318 18:14:01.345402 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f76dd88c-h9rrg" event={"ID":"da98779c-7834-4e68-b018-40d11d173a55","Type":"ContainerStarted","Data":"9f002d5f4ab0f4f242d7ac39cc1e93e48139dbd618155b70e85f94c3160822de"}
Mar 18 18:14:01.345667 master-0 kubenswrapper[30278]: I0318 18:14:01.345459 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f76dd88c-h9rrg" event={"ID":"da98779c-7834-4e68-b018-40d11d173a55","Type":"ContainerStarted","Data":"e873eeba4805bc8d229e1ca40dfed9ab9719df1d29ca2154f99a1f7f31bd520e"}
Mar 18 18:14:01.358656 master-0 kubenswrapper[30278]: I0318 18:14:01.358595 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-skcb4"
Mar 18 18:14:01.408995 master-0 kubenswrapper[30278]: I0318 18:14:01.408916 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f76dd88c-h9rrg" podStartSLOduration=2.408895649 podStartE2EDuration="2.408895649s" podCreationTimestamp="2026-03-18 18:13:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:14:01.403315978 +0000 UTC m=+810.570500573" watchObservedRunningTime="2026-03-18 18:14:01.408895649 +0000 UTC m=+810.576080254"
Mar 18 18:14:01.709837 master-0 kubenswrapper[30278]: I0318 18:14:01.709712 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-skcb4" podStartSLOduration=2.241851873 podStartE2EDuration="5.709661703s" podCreationTimestamp="2026-03-18 18:13:56 +0000 UTC" firstStartedPulling="2026-03-18 18:13:57.676980428 +0000 UTC m=+806.844165023" lastFinishedPulling="2026-03-18 18:14:01.144790258 +0000 UTC m=+810.311974853" observedRunningTime="2026-03-18 18:14:01.705605713 +0000 UTC m=+810.872790318" watchObservedRunningTime="2026-03-18 18:14:01.709661703 +0000 UTC m=+810.876846318"
Mar 18 18:14:02.377295 master-0 kubenswrapper[30278]: I0318 18:14:02.377217 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-skcb4" event={"ID":"0326959b-b1d6-42ef-9fe5-bb33aa37df40","Type":"ContainerStarted","Data":"091a000855a0614949756f8335e6c72feec446d20242bc77fd76fff9b046123c"}
Mar 18 18:14:02.380830 master-0 kubenswrapper[30278]: I0318 18:14:02.380775 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-m67cm" event={"ID":"8f8a9e5f-b9b7-4366-a778-1bf7177693c5","Type":"ContainerStarted","Data":"14486944deec6076ae49303e9a9ced256441aeaba7281cefd20a36d96133a4cd"}
Mar 18 18:14:02.412476 master-0 kubenswrapper[30278]: I0318 18:14:02.412377 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-m67cm" podStartSLOduration=4.394303032 podStartE2EDuration="6.412354235s" podCreationTimestamp="2026-03-18 18:13:56 +0000 UTC" firstStartedPulling="2026-03-18 18:13:59.013870011 +0000 UTC m=+808.181054606" lastFinishedPulling="2026-03-18 18:14:01.031921214 +0000 UTC m=+810.199105809" observedRunningTime="2026-03-18 18:14:02.400142226 +0000 UTC m=+811.567326841" watchObservedRunningTime="2026-03-18 18:14:02.412354235 +0000 UTC m=+811.579538850"
Mar 18 18:14:03.389727 master-0 kubenswrapper[30278]: I0318 18:14:03.389651 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-m67cm"
Mar 18 18:14:06.435045 master-0 kubenswrapper[30278]: I0318 18:14:06.434290 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf" event={"ID":"dc688679-6ccb-42d6-aa9b-620284991fbe","Type":"ContainerStarted","Data":"257255800cca59302716fe625b47d4ef973340b083e4bd581dcbede3ff25e08f"}
Mar 18 18:14:06.444061 master-0 kubenswrapper[30278]: I0318 18:14:06.444005 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-zc4ph" event={"ID":"185bb037-2ee1-460c-b291-beb7bf78bb99","Type":"ContainerStarted","Data":"89f0de4a587541bec4e4813d49817c28cdb2ffff962457d7f6eb5a3c42d8af6e"}
Mar 18 18:14:06.444232 master-0 kubenswrapper[30278]: I0318 18:14:06.444066 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-zc4ph" event={"ID":"185bb037-2ee1-460c-b291-beb7bf78bb99","Type":"ContainerStarted","Data":"61a94dd5a5edca1c6851688c91743c347c19d0877f159ab5f3be97e22cfba57c"}
Mar 18 18:14:06.445634 master-0 kubenswrapper[30278]: I0318 18:14:06.445613 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9kcdn" event={"ID":"5d513b42-f68d-4b03-b420-71e8e8cf0d75","Type":"ContainerStarted","Data":"c81a3dd07227a9c4e117bb51e9b088add50893eb8bdd234d57517d99fd184ab2"}
Mar 18 18:14:06.445843 master-0 kubenswrapper[30278]: I0318 18:14:06.445791 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-9kcdn"
Mar 18 18:14:06.447337 master-0 kubenswrapper[30278]: I0318 18:14:06.447298 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479" event={"ID":"efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9","Type":"ContainerStarted","Data":"5d86e5696f4ec6f74ba941f56b432b17a78b0de988e2f04922f8549880823961"}
Mar 18 18:14:06.447764 master-0 kubenswrapper[30278]: I0318 18:14:06.447742 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479"
Mar 18 18:14:06.449139 master-0 kubenswrapper[30278]: I0318 18:14:06.449066 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5" event={"ID":"56c34c5b-17a3-4109-b2fa-27d0db19d95c","Type":"ContainerStarted","Data":"6ccca482069ab1846102e9fa17d8483cf1d109c59c9edcbaea6bcfd8044b3320"}
Mar 18 18:14:06.449783 master-0 kubenswrapper[30278]: I0318 18:14:06.449754 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5"
Mar 18 18:14:06.452764 master-0 kubenswrapper[30278]: I0318 18:14:06.452705 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-49xpf" podStartSLOduration=2.888468055 podStartE2EDuration="8.452694337s" podCreationTimestamp="2026-03-18 18:13:58 +0000 UTC" firstStartedPulling="2026-03-18 18:14:00.379663976 +0000 UTC m=+809.546848571" lastFinishedPulling="2026-03-18 18:14:05.943890238 +0000 UTC m=+815.111074853" observedRunningTime="2026-03-18 18:14:06.451787473 +0000 UTC m=+815.618972068" watchObservedRunningTime="2026-03-18 18:14:06.452694337 +0000 UTC m=+815.619878932"
Mar 18 18:14:06.454806 master-0 kubenswrapper[30278]: I0318 18:14:06.453872 30278 generic.go:334] "Generic (PLEG): container finished" podID="c5c65977-8004-4434-8d99-7624d08d9b3a" containerID="a4488f117fe2195d83a74072daee99d5927ce5b493b38d58cbf20eb0b372cea4" exitCode=0
Mar 18 18:14:06.454806 master-0 kubenswrapper[30278]: I0318 18:14:06.453930 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ztqqc" event={"ID":"c5c65977-8004-4434-8d99-7624d08d9b3a","Type":"ContainerDied","Data":"a4488f117fe2195d83a74072daee99d5927ce5b493b38d58cbf20eb0b372cea4"}
Mar 18 18:14:06.483142 master-0 kubenswrapper[30278]: I0318 18:14:06.483071 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5" podStartSLOduration=2.542101647 podStartE2EDuration="8.483054498s" podCreationTimestamp="2026-03-18 18:13:58 +0000 UTC" firstStartedPulling="2026-03-18 18:13:59.995597579 +0000 UTC m=+809.162782174" lastFinishedPulling="2026-03-18 18:14:05.93655042 +0000 UTC m=+815.103735025" observedRunningTime="2026-03-18 18:14:06.482635307 +0000 UTC m=+815.649819902" watchObservedRunningTime="2026-03-18 18:14:06.483054498 +0000 UTC m=+815.650239093"
Mar 18 18:14:06.507963 master-0 kubenswrapper[30278]: I0318 18:14:06.507844 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479" podStartSLOduration=1.704714297 podStartE2EDuration="10.507821538s" podCreationTimestamp="2026-03-18 18:13:56 +0000 UTC" firstStartedPulling="2026-03-18 18:13:57.143815669 +0000 UTC m=+806.311000264" lastFinishedPulling="2026-03-18 18:14:05.9469229 +0000 UTC m=+815.114107505" observedRunningTime="2026-03-18 18:14:06.505448994 +0000 UTC m=+815.672633589" watchObservedRunningTime="2026-03-18 18:14:06.507821538 +0000 UTC m=+815.675006133"
Mar 18 18:14:06.538014 master-0 kubenswrapper[30278]: I0318 18:14:06.537914 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-9kcdn" podStartSLOduration=1.708468985 podStartE2EDuration="8.537893922s" podCreationTimestamp="2026-03-18 18:13:58 +0000 UTC" firstStartedPulling="2026-03-18 18:13:59.122203141 +0000 UTC m=+808.289387736" lastFinishedPulling="2026-03-18 18:14:05.951628058 +0000 UTC m=+815.118812673" observedRunningTime="2026-03-18 18:14:06.531381395 +0000 UTC m=+815.698566000" watchObservedRunningTime="2026-03-18 18:14:06.537893922 +0000 UTC m=+815.705078517"
Mar 18 18:14:06.557722 master-0 kubenswrapper[30278]: I0318 18:14:06.556805 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-zc4ph" podStartSLOduration=2.099576212 podStartE2EDuration="8.556783353s" podCreationTimestamp="2026-03-18 18:13:58 +0000 UTC" firstStartedPulling="2026-03-18 18:13:59.487662464 +0000 UTC m=+808.654847059" lastFinishedPulling="2026-03-18 18:14:05.944869605 +0000 UTC m=+815.112054200" observedRunningTime="2026-03-18 18:14:06.551554091 +0000 UTC m=+815.718738686" watchObservedRunningTime="2026-03-18 18:14:06.556783353 +0000 UTC m=+815.723967958"
Mar 18 18:14:07.066079 master-0 kubenswrapper[30278]: I0318 18:14:07.066017 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-skcb4"
Mar 18 18:14:07.467780 master-0 kubenswrapper[30278]: I0318 18:14:07.467576 30278 generic.go:334] "Generic (PLEG): container finished" podID="c5c65977-8004-4434-8d99-7624d08d9b3a" containerID="8291151a02ee7e1cfcd911b6ea12d7d2ab7037c6c4d53fc79a7d03ad48442e73" exitCode=0
Mar 18 18:14:07.470702 master-0 kubenswrapper[30278]: I0318 18:14:07.470625 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ztqqc" event={"ID":"c5c65977-8004-4434-8d99-7624d08d9b3a","Type":"ContainerDied","Data":"8291151a02ee7e1cfcd911b6ea12d7d2ab7037c6c4d53fc79a7d03ad48442e73"}
Mar 18 18:14:08.436551 master-0 kubenswrapper[30278]: I0318 18:14:08.436483 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-m67cm"
Mar 18 18:14:08.480948 master-0 kubenswrapper[30278]: I0318 18:14:08.480892 30278 generic.go:334] "Generic (PLEG): container finished" podID="c5c65977-8004-4434-8d99-7624d08d9b3a" containerID="c1c3d84bc99464a51805083703e9d2766fcdaaea847aae086d51be3057aa79bb" exitCode=0
Mar 18 18:14:08.481630 master-0 kubenswrapper[30278]: I0318 18:14:08.480941 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ztqqc"
event={"ID":"c5c65977-8004-4434-8d99-7624d08d9b3a","Type":"ContainerDied","Data":"c1c3d84bc99464a51805083703e9d2766fcdaaea847aae086d51be3057aa79bb"} Mar 18 18:14:09.502488 master-0 kubenswrapper[30278]: I0318 18:14:09.500172 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ztqqc" event={"ID":"c5c65977-8004-4434-8d99-7624d08d9b3a","Type":"ContainerStarted","Data":"a2234170d7f78066d93a524cb30b9f0bf773079b4114d4c0e63e87d7e77ebb53"} Mar 18 18:14:09.502488 master-0 kubenswrapper[30278]: I0318 18:14:09.500235 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ztqqc" event={"ID":"c5c65977-8004-4434-8d99-7624d08d9b3a","Type":"ContainerStarted","Data":"033d36c5df540ae8fedc74ff83d231b9887e2711118af92b62141ebbf2960bfd"} Mar 18 18:14:09.502488 master-0 kubenswrapper[30278]: I0318 18:14:09.500252 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ztqqc" event={"ID":"c5c65977-8004-4434-8d99-7624d08d9b3a","Type":"ContainerStarted","Data":"6598532f09bc7d6161e970a2197b9b92484d783d6b430cf7b1107359c50950ab"} Mar 18 18:14:09.502488 master-0 kubenswrapper[30278]: I0318 18:14:09.500267 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ztqqc" event={"ID":"c5c65977-8004-4434-8d99-7624d08d9b3a","Type":"ContainerStarted","Data":"01fa03eca0be9982063add219ac2e01b708d3fd52a0611d3cbc323d82a83e698"} Mar 18 18:14:09.502488 master-0 kubenswrapper[30278]: I0318 18:14:09.500298 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ztqqc" event={"ID":"c5c65977-8004-4434-8d99-7624d08d9b3a","Type":"ContainerStarted","Data":"2f68f16a84ddfb9683b8001c15ef6243528f0d0f23bf5512787fb9431e0035f5"} Mar 18 18:14:09.790815 master-0 kubenswrapper[30278]: I0318 18:14:09.790738 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f76dd88c-h9rrg" Mar 18 18:14:09.790815 master-0 kubenswrapper[30278]: 
I0318 18:14:09.790798 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f76dd88c-h9rrg" Mar 18 18:14:09.795459 master-0 kubenswrapper[30278]: I0318 18:14:09.795373 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f76dd88c-h9rrg" Mar 18 18:14:10.518419 master-0 kubenswrapper[30278]: I0318 18:14:10.518341 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ztqqc" event={"ID":"c5c65977-8004-4434-8d99-7624d08d9b3a","Type":"ContainerStarted","Data":"2a01567d8b2abaecd400c5175aa1dc0ae401dcfc174a6c7d29aff292ad5140a1"} Mar 18 18:14:10.519374 master-0 kubenswrapper[30278]: I0318 18:14:10.518565 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:14:10.523236 master-0 kubenswrapper[30278]: I0318 18:14:10.523142 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f76dd88c-h9rrg" Mar 18 18:14:10.552357 master-0 kubenswrapper[30278]: I0318 18:14:10.552205 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-ztqqc" podStartSLOduration=6.201019299 podStartE2EDuration="14.552178359s" podCreationTimestamp="2026-03-18 18:13:56 +0000 UTC" firstStartedPulling="2026-03-18 18:13:57.64270582 +0000 UTC m=+806.809890415" lastFinishedPulling="2026-03-18 18:14:05.99386488 +0000 UTC m=+815.161049475" observedRunningTime="2026-03-18 18:14:10.546213808 +0000 UTC m=+819.713398413" watchObservedRunningTime="2026-03-18 18:14:10.552178359 +0000 UTC m=+819.719362964" Mar 18 18:14:10.687662 master-0 kubenswrapper[30278]: I0318 18:14:10.681117 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7c48f8f679-djbqb"] Mar 18 18:14:12.448539 master-0 kubenswrapper[30278]: I0318 18:14:12.448470 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:14:12.489218 master-0 kubenswrapper[30278]: I0318 18:14:12.489154 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:14:13.955133 master-0 kubenswrapper[30278]: I0318 18:14:13.955079 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-9kcdn" Mar 18 18:14:16.786463 master-0 kubenswrapper[30278]: I0318 18:14:16.786384 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-g4479" Mar 18 18:14:18.936209 master-0 kubenswrapper[30278]: I0318 18:14:18.936141 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-dlkh5" Mar 18 18:14:24.007434 master-0 kubenswrapper[30278]: I0318 18:14:24.007357 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-52qpc"] Mar 18 18:14:24.009178 master-0 kubenswrapper[30278]: I0318 18:14:24.009002 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.011882 master-0 kubenswrapper[30278]: I0318 18:14:24.011808 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Mar 18 18:14:24.026429 master-0 kubenswrapper[30278]: I0318 18:14:24.026357 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-52qpc"] Mar 18 18:14:24.094768 master-0 kubenswrapper[30278]: I0318 18:14:24.094590 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-run-udev\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.094768 master-0 kubenswrapper[30278]: I0318 18:14:24.094715 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-lvmd-config\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.095101 master-0 kubenswrapper[30278]: I0318 18:14:24.094939 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-node-plugin-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.095101 master-0 kubenswrapper[30278]: I0318 18:14:24.095094 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-registration-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") 
" pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.095354 master-0 kubenswrapper[30278]: I0318 18:14:24.095251 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-sys\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.095435 master-0 kubenswrapper[30278]: I0318 18:14:24.095420 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vb4n\" (UniqueName: \"kubernetes.io/projected/392a5b52-8422-484c-8d32-fd3661007e2b-kube-api-access-9vb4n\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.095550 master-0 kubenswrapper[30278]: I0318 18:14:24.095523 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-csi-plugin-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.095738 master-0 kubenswrapper[30278]: I0318 18:14:24.095684 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-file-lock-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.095863 master-0 kubenswrapper[30278]: I0318 18:14:24.095820 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/392a5b52-8422-484c-8d32-fd3661007e2b-metrics-cert\") pod \"vg-manager-52qpc\" (UID: 
\"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.098470 master-0 kubenswrapper[30278]: I0318 18:14:24.095948 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-pod-volumes-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.098470 master-0 kubenswrapper[30278]: I0318 18:14:24.096484 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-device-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.197652 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-device-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.197724 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-run-udev\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.197759 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-lvmd-config\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " 
pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.197778 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-node-plugin-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.197810 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-registration-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.197844 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-sys\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.197888 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vb4n\" (UniqueName: \"kubernetes.io/projected/392a5b52-8422-484c-8d32-fd3661007e2b-kube-api-access-9vb4n\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.197913 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-csi-plugin-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 
master-0 kubenswrapper[30278]: I0318 18:14:24.197943 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-file-lock-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.197971 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/392a5b52-8422-484c-8d32-fd3661007e2b-metrics-cert\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.198000 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-pod-volumes-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.198140 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-pod-volumes-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.198194 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-device-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.198217 30278 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-run-udev\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.198439 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-lvmd-config\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.198734 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-node-plugin-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.198847 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-registration-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.198897 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-sys\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.198980 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: 
\"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-csi-plugin-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.199297 master-0 kubenswrapper[30278]: I0318 18:14:24.199100 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/392a5b52-8422-484c-8d32-fd3661007e2b-file-lock-dir\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.209637 master-0 kubenswrapper[30278]: I0318 18:14:24.209587 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/392a5b52-8422-484c-8d32-fd3661007e2b-metrics-cert\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.218057 master-0 kubenswrapper[30278]: I0318 18:14:24.218014 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vb4n\" (UniqueName: \"kubernetes.io/projected/392a5b52-8422-484c-8d32-fd3661007e2b-kube-api-access-9vb4n\") pod \"vg-manager-52qpc\" (UID: \"392a5b52-8422-484c-8d32-fd3661007e2b\") " pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.372975 master-0 kubenswrapper[30278]: I0318 18:14:24.372907 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-52qpc" Mar 18 18:14:24.873152 master-0 kubenswrapper[30278]: I0318 18:14:24.873077 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-52qpc"] Mar 18 18:14:24.875476 master-0 kubenswrapper[30278]: W0318 18:14:24.875403 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod392a5b52_8422_484c_8d32_fd3661007e2b.slice/crio-7f751ba7728b1ec72c1f7b1e84481d66bbfa35c6193cb3bab5a98ae415e2ae0c WatchSource:0}: Error finding container 7f751ba7728b1ec72c1f7b1e84481d66bbfa35c6193cb3bab5a98ae415e2ae0c: Status 404 returned error can't find the container with id 7f751ba7728b1ec72c1f7b1e84481d66bbfa35c6193cb3bab5a98ae415e2ae0c Mar 18 18:14:25.699326 master-0 kubenswrapper[30278]: I0318 18:14:25.699242 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-52qpc" event={"ID":"392a5b52-8422-484c-8d32-fd3661007e2b","Type":"ContainerStarted","Data":"20d545417575b64b8f6f4444d6441464e8a34230873f6dd47a24dbda27f965df"} Mar 18 18:14:25.699326 master-0 kubenswrapper[30278]: I0318 18:14:25.699322 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-52qpc" event={"ID":"392a5b52-8422-484c-8d32-fd3661007e2b","Type":"ContainerStarted","Data":"7f751ba7728b1ec72c1f7b1e84481d66bbfa35c6193cb3bab5a98ae415e2ae0c"} Mar 18 18:14:25.729012 master-0 kubenswrapper[30278]: I0318 18:14:25.728897 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-52qpc" podStartSLOduration=2.728873039 podStartE2EDuration="2.728873039s" podCreationTimestamp="2026-03-18 18:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:14:25.721320065 +0000 UTC m=+834.888504670" watchObservedRunningTime="2026-03-18 18:14:25.728873039 +0000 
UTC m=+834.896057644" Mar 18 18:14:27.453268 master-0 kubenswrapper[30278]: I0318 18:14:27.453175 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-ztqqc" Mar 18 18:14:27.743781 master-0 kubenswrapper[30278]: I0318 18:14:27.743603 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-52qpc_392a5b52-8422-484c-8d32-fd3661007e2b/vg-manager/0.log" Mar 18 18:14:27.743781 master-0 kubenswrapper[30278]: I0318 18:14:27.743688 30278 generic.go:334] "Generic (PLEG): container finished" podID="392a5b52-8422-484c-8d32-fd3661007e2b" containerID="20d545417575b64b8f6f4444d6441464e8a34230873f6dd47a24dbda27f965df" exitCode=1 Mar 18 18:14:27.743781 master-0 kubenswrapper[30278]: I0318 18:14:27.743745 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-52qpc" event={"ID":"392a5b52-8422-484c-8d32-fd3661007e2b","Type":"ContainerDied","Data":"20d545417575b64b8f6f4444d6441464e8a34230873f6dd47a24dbda27f965df"} Mar 18 18:14:27.744708 master-0 kubenswrapper[30278]: I0318 18:14:27.744683 30278 scope.go:117] "RemoveContainer" containerID="20d545417575b64b8f6f4444d6441464e8a34230873f6dd47a24dbda27f965df" Mar 18 18:14:28.253719 master-0 kubenswrapper[30278]: I0318 18:14:28.253517 30278 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Mar 18 18:14:28.623945 master-0 kubenswrapper[30278]: I0318 18:14:28.623745 30278 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-18T18:14:28.253564054Z","Handler":null,"Name":""} Mar 18 18:14:28.626240 master-0 kubenswrapper[30278]: I0318 18:14:28.626194 30278 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock 
versions: 1.0.0 Mar 18 18:14:28.626383 master-0 kubenswrapper[30278]: I0318 18:14:28.626249 30278 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Mar 18 18:14:28.755912 master-0 kubenswrapper[30278]: I0318 18:14:28.755678 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-52qpc_392a5b52-8422-484c-8d32-fd3661007e2b/vg-manager/0.log" Mar 18 18:14:28.755912 master-0 kubenswrapper[30278]: I0318 18:14:28.755766 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-52qpc" event={"ID":"392a5b52-8422-484c-8d32-fd3661007e2b","Type":"ContainerStarted","Data":"ae7e7c676fcf43cedff26f4c78feacd9ae2b1310dfb0ef0543f7caa782350834"} Mar 18 18:14:31.184030 master-0 kubenswrapper[30278]: I0318 18:14:31.183948 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-4bxf4"] Mar 18 18:14:31.186316 master-0 kubenswrapper[30278]: I0318 18:14:31.185060 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-4bxf4" Mar 18 18:14:31.193303 master-0 kubenswrapper[30278]: I0318 18:14:31.191221 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 18 18:14:31.193973 master-0 kubenswrapper[30278]: I0318 18:14:31.193627 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 18 18:14:31.202196 master-0 kubenswrapper[30278]: I0318 18:14:31.202132 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzmmx\" (UniqueName: \"kubernetes.io/projected/c81d87dc-f4b5-44ef-a5d0-b6766ddfe807-kube-api-access-xzmmx\") pod \"openstack-operator-index-4bxf4\" (UID: \"c81d87dc-f4b5-44ef-a5d0-b6766ddfe807\") " pod="openstack-operators/openstack-operator-index-4bxf4" Mar 18 18:14:31.210249 master-0 kubenswrapper[30278]: I0318 18:14:31.210099 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4bxf4"] Mar 18 18:14:31.304410 master-0 kubenswrapper[30278]: I0318 18:14:31.304332 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzmmx\" (UniqueName: \"kubernetes.io/projected/c81d87dc-f4b5-44ef-a5d0-b6766ddfe807-kube-api-access-xzmmx\") pod \"openstack-operator-index-4bxf4\" (UID: \"c81d87dc-f4b5-44ef-a5d0-b6766ddfe807\") " pod="openstack-operators/openstack-operator-index-4bxf4" Mar 18 18:14:31.323751 master-0 kubenswrapper[30278]: I0318 18:14:31.322994 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzmmx\" (UniqueName: \"kubernetes.io/projected/c81d87dc-f4b5-44ef-a5d0-b6766ddfe807-kube-api-access-xzmmx\") pod \"openstack-operator-index-4bxf4\" (UID: \"c81d87dc-f4b5-44ef-a5d0-b6766ddfe807\") " pod="openstack-operators/openstack-operator-index-4bxf4" Mar 18 18:14:31.529036 master-0 
kubenswrapper[30278]: I0318 18:14:31.528903 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-4bxf4"
Mar 18 18:14:32.036508 master-0 kubenswrapper[30278]: I0318 18:14:32.036449 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-4bxf4"]
Mar 18 18:14:32.054080 master-0 kubenswrapper[30278]: W0318 18:14:32.053980 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc81d87dc_f4b5_44ef_a5d0_b6766ddfe807.slice/crio-91eeb06aae89e7132df72b3cb66e27d3c32aca1132dd7e9fdc1065c913557040 WatchSource:0}: Error finding container 91eeb06aae89e7132df72b3cb66e27d3c32aca1132dd7e9fdc1065c913557040: Status 404 returned error can't find the container with id 91eeb06aae89e7132df72b3cb66e27d3c32aca1132dd7e9fdc1065c913557040
Mar 18 18:14:32.794123 master-0 kubenswrapper[30278]: I0318 18:14:32.794057 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4bxf4" event={"ID":"c81d87dc-f4b5-44ef-a5d0-b6766ddfe807","Type":"ContainerStarted","Data":"91eeb06aae89e7132df72b3cb66e27d3c32aca1132dd7e9fdc1065c913557040"}
Mar 18 18:14:34.374994 master-0 kubenswrapper[30278]: I0318 18:14:34.373866 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-52qpc"
Mar 18 18:14:34.376450 master-0 kubenswrapper[30278]: I0318 18:14:34.376328 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-52qpc"
Mar 18 18:14:34.810679 master-0 kubenswrapper[30278]: I0318 18:14:34.810625 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-52qpc"
Mar 18 18:14:34.812256 master-0 kubenswrapper[30278]: I0318 18:14:34.812223 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-52qpc"
Mar 18 18:14:35.731899 master-0 kubenswrapper[30278]: I0318 18:14:35.731792 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7c48f8f679-djbqb" podUID="b294ce2a-9da1-4917-8c73-8e5b6320c88e" containerName="console" containerID="cri-o://a6fd61de2952574e9197b1e9727e6230b428aee3a2ba56f41ea19507cc2576e0" gracePeriod=15
Mar 18 18:14:35.825209 master-0 kubenswrapper[30278]: I0318 18:14:35.825121 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-4bxf4" event={"ID":"c81d87dc-f4b5-44ef-a5d0-b6766ddfe807","Type":"ContainerStarted","Data":"d48dced0d0f7ba33af56f1e413c60d9279dc96f5c49069cd7c11c5ec65447278"}
Mar 18 18:14:35.867929 master-0 kubenswrapper[30278]: I0318 18:14:35.867797 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-4bxf4" podStartSLOduration=1.701114247 podStartE2EDuration="4.867765853s" podCreationTimestamp="2026-03-18 18:14:31 +0000 UTC" firstStartedPulling="2026-03-18 18:14:32.059864086 +0000 UTC m=+841.227048691" lastFinishedPulling="2026-03-18 18:14:35.226515702 +0000 UTC m=+844.393700297" observedRunningTime="2026-03-18 18:14:35.855247924 +0000 UTC m=+845.022432529" watchObservedRunningTime="2026-03-18 18:14:35.867765853 +0000 UTC m=+845.034950489"
Mar 18 18:14:36.192609 master-0 kubenswrapper[30278]: I0318 18:14:36.192534 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7c48f8f679-djbqb_b294ce2a-9da1-4917-8c73-8e5b6320c88e/console/0.log"
Mar 18 18:14:36.192832 master-0 kubenswrapper[30278]: I0318 18:14:36.192675 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c48f8f679-djbqb"
Mar 18 18:14:36.309300 master-0 kubenswrapper[30278]: I0318 18:14:36.308511 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-serving-cert\") pod \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") "
Mar 18 18:14:36.309300 master-0 kubenswrapper[30278]: I0318 18:14:36.308582 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-trusted-ca-bundle\") pod \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") "
Mar 18 18:14:36.309300 master-0 kubenswrapper[30278]: I0318 18:14:36.308610 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-oauth-config\") pod \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") "
Mar 18 18:14:36.309300 master-0 kubenswrapper[30278]: I0318 18:14:36.308630 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-service-ca\") pod \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") "
Mar 18 18:14:36.309300 master-0 kubenswrapper[30278]: I0318 18:14:36.308654 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq4qd\" (UniqueName: \"kubernetes.io/projected/b294ce2a-9da1-4917-8c73-8e5b6320c88e-kube-api-access-mq4qd\") pod \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") "
Mar 18 18:14:36.309300 master-0 kubenswrapper[30278]: I0318 18:14:36.308691 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-oauth-serving-cert\") pod \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") "
Mar 18 18:14:36.309300 master-0 kubenswrapper[30278]: I0318 18:14:36.308709 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-config\") pod \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\" (UID: \"b294ce2a-9da1-4917-8c73-8e5b6320c88e\") "
Mar 18 18:14:36.309791 master-0 kubenswrapper[30278]: I0318 18:14:36.309444 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-config" (OuterVolumeSpecName: "console-config") pod "b294ce2a-9da1-4917-8c73-8e5b6320c88e" (UID: "b294ce2a-9da1-4917-8c73-8e5b6320c88e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:14:36.320305 master-0 kubenswrapper[30278]: I0318 18:14:36.318308 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b294ce2a-9da1-4917-8c73-8e5b6320c88e" (UID: "b294ce2a-9da1-4917-8c73-8e5b6320c88e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:14:36.320305 master-0 kubenswrapper[30278]: I0318 18:14:36.319183 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b294ce2a-9da1-4917-8c73-8e5b6320c88e" (UID: "b294ce2a-9da1-4917-8c73-8e5b6320c88e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:14:36.321881 master-0 kubenswrapper[30278]: I0318 18:14:36.321805 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-service-ca" (OuterVolumeSpecName: "service-ca") pod "b294ce2a-9da1-4917-8c73-8e5b6320c88e" (UID: "b294ce2a-9da1-4917-8c73-8e5b6320c88e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:14:36.322431 master-0 kubenswrapper[30278]: I0318 18:14:36.322389 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b294ce2a-9da1-4917-8c73-8e5b6320c88e" (UID: "b294ce2a-9da1-4917-8c73-8e5b6320c88e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:14:36.327297 master-0 kubenswrapper[30278]: I0318 18:14:36.323999 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b294ce2a-9da1-4917-8c73-8e5b6320c88e-kube-api-access-mq4qd" (OuterVolumeSpecName: "kube-api-access-mq4qd") pod "b294ce2a-9da1-4917-8c73-8e5b6320c88e" (UID: "b294ce2a-9da1-4917-8c73-8e5b6320c88e"). InnerVolumeSpecName "kube-api-access-mq4qd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:14:36.333870 master-0 kubenswrapper[30278]: I0318 18:14:36.332625 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b294ce2a-9da1-4917-8c73-8e5b6320c88e" (UID: "b294ce2a-9da1-4917-8c73-8e5b6320c88e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:14:36.420890 master-0 kubenswrapper[30278]: I0318 18:14:36.420720 30278 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 18:14:36.420890 master-0 kubenswrapper[30278]: I0318 18:14:36.420796 30278 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:14:36.420890 master-0 kubenswrapper[30278]: I0318 18:14:36.420815 30278 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:14:36.420890 master-0 kubenswrapper[30278]: I0318 18:14:36.420831 30278 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 18:14:36.420890 master-0 kubenswrapper[30278]: I0318 18:14:36.420846 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq4qd\" (UniqueName: \"kubernetes.io/projected/b294ce2a-9da1-4917-8c73-8e5b6320c88e-kube-api-access-mq4qd\") on node \"master-0\" DevicePath \"\""
Mar 18 18:14:36.420890 master-0 kubenswrapper[30278]: I0318 18:14:36.420859 30278 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 18:14:36.420890 master-0 kubenswrapper[30278]: I0318 18:14:36.420871 30278 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b294ce2a-9da1-4917-8c73-8e5b6320c88e-console-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:14:36.721323 master-0 kubenswrapper[30278]: I0318 18:14:36.721166 30278 scope.go:117] "RemoveContainer" containerID="a6fd61de2952574e9197b1e9727e6230b428aee3a2ba56f41ea19507cc2576e0"
Mar 18 18:14:36.833331 master-0 kubenswrapper[30278]: I0318 18:14:36.833204 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c48f8f679-djbqb" event={"ID":"b294ce2a-9da1-4917-8c73-8e5b6320c88e","Type":"ContainerDied","Data":"a6fd61de2952574e9197b1e9727e6230b428aee3a2ba56f41ea19507cc2576e0"}
Mar 18 18:14:36.833331 master-0 kubenswrapper[30278]: I0318 18:14:36.833301 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c48f8f679-djbqb" event={"ID":"b294ce2a-9da1-4917-8c73-8e5b6320c88e","Type":"ContainerDied","Data":"e642dca84321cf5a84d89f6d201c5193b5a8570eb15c7f2f1fefef39ff70a82f"}
Mar 18 18:14:36.834659 master-0 kubenswrapper[30278]: I0318 18:14:36.833466 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c48f8f679-djbqb"
Mar 18 18:14:36.885901 master-0 kubenswrapper[30278]: I0318 18:14:36.885776 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7c48f8f679-djbqb"]
Mar 18 18:14:36.899789 master-0 kubenswrapper[30278]: I0318 18:14:36.899695 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7c48f8f679-djbqb"]
Mar 18 18:14:37.074833 master-0 kubenswrapper[30278]: I0318 18:14:37.074712 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b294ce2a-9da1-4917-8c73-8e5b6320c88e" path="/var/lib/kubelet/pods/b294ce2a-9da1-4917-8c73-8e5b6320c88e/volumes"
Mar 18 18:14:41.530319 master-0 kubenswrapper[30278]: I0318 18:14:41.530236 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-4bxf4"
Mar 18 18:14:41.531606 master-0 kubenswrapper[30278]: I0318 18:14:41.530537 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-4bxf4"
Mar 18 18:14:41.575945 master-0 kubenswrapper[30278]: I0318 18:14:41.575879 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-4bxf4"
Mar 18 18:14:41.938168 master-0 kubenswrapper[30278]: I0318 18:14:41.938079 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-4bxf4"
Mar 18 18:14:48.217946 master-0 kubenswrapper[30278]: I0318 18:14:48.217855 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"]
Mar 18 18:14:48.218906 master-0 kubenswrapper[30278]: E0318 18:14:48.218577 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b294ce2a-9da1-4917-8c73-8e5b6320c88e" containerName="console"
Mar 18 18:14:48.218906 master-0 kubenswrapper[30278]: I0318 18:14:48.218611 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="b294ce2a-9da1-4917-8c73-8e5b6320c88e" containerName="console"
Mar 18 18:14:48.219049 master-0 kubenswrapper[30278]: I0318 18:14:48.219017 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="b294ce2a-9da1-4917-8c73-8e5b6320c88e" containerName="console"
Mar 18 18:14:48.221355 master-0 kubenswrapper[30278]: I0318 18:14:48.221255 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"
Mar 18 18:14:48.242832 master-0 kubenswrapper[30278]: I0318 18:14:48.241254 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"]
Mar 18 18:14:48.382453 master-0 kubenswrapper[30278]: I0318 18:14:48.382260 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bdaac128-a880-4a06-9b63-31eba0e41a53-bundle\") pod \"ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh\" (UID: \"bdaac128-a880-4a06-9b63-31eba0e41a53\") " pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"
Mar 18 18:14:48.382969 master-0 kubenswrapper[30278]: I0318 18:14:48.382844 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bdaac128-a880-4a06-9b63-31eba0e41a53-util\") pod \"ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh\" (UID: \"bdaac128-a880-4a06-9b63-31eba0e41a53\") " pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"
Mar 18 18:14:48.383338 master-0 kubenswrapper[30278]: I0318 18:14:48.383200 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npmqh\" (UniqueName: \"kubernetes.io/projected/bdaac128-a880-4a06-9b63-31eba0e41a53-kube-api-access-npmqh\") pod \"ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh\" (UID: \"bdaac128-a880-4a06-9b63-31eba0e41a53\") " pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"
Mar 18 18:14:48.488484 master-0 kubenswrapper[30278]: I0318 18:14:48.488226 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bdaac128-a880-4a06-9b63-31eba0e41a53-bundle\") pod \"ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh\" (UID: \"bdaac128-a880-4a06-9b63-31eba0e41a53\") " pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"
Mar 18 18:14:48.488923 master-0 kubenswrapper[30278]: I0318 18:14:48.488566 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bdaac128-a880-4a06-9b63-31eba0e41a53-util\") pod \"ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh\" (UID: \"bdaac128-a880-4a06-9b63-31eba0e41a53\") " pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"
Mar 18 18:14:48.488923 master-0 kubenswrapper[30278]: I0318 18:14:48.488661 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npmqh\" (UniqueName: \"kubernetes.io/projected/bdaac128-a880-4a06-9b63-31eba0e41a53-kube-api-access-npmqh\") pod \"ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh\" (UID: \"bdaac128-a880-4a06-9b63-31eba0e41a53\") " pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"
Mar 18 18:14:48.490363 master-0 kubenswrapper[30278]: I0318 18:14:48.489960 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bdaac128-a880-4a06-9b63-31eba0e41a53-bundle\") pod \"ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh\" (UID: \"bdaac128-a880-4a06-9b63-31eba0e41a53\") " pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"
Mar 18 18:14:48.490363 master-0 kubenswrapper[30278]: I0318 18:14:48.490067 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bdaac128-a880-4a06-9b63-31eba0e41a53-util\") pod \"ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh\" (UID: \"bdaac128-a880-4a06-9b63-31eba0e41a53\") " pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"
Mar 18 18:14:48.523670 master-0 kubenswrapper[30278]: I0318 18:14:48.523574 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npmqh\" (UniqueName: \"kubernetes.io/projected/bdaac128-a880-4a06-9b63-31eba0e41a53-kube-api-access-npmqh\") pod \"ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh\" (UID: \"bdaac128-a880-4a06-9b63-31eba0e41a53\") " pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"
Mar 18 18:14:48.550516 master-0 kubenswrapper[30278]: I0318 18:14:48.550367 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"
Mar 18 18:14:49.111997 master-0 kubenswrapper[30278]: I0318 18:14:49.108380 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"]
Mar 18 18:14:49.987860 master-0 kubenswrapper[30278]: I0318 18:14:49.987790 30278 generic.go:334] "Generic (PLEG): container finished" podID="bdaac128-a880-4a06-9b63-31eba0e41a53" containerID="7d2e9695189f6ddd3612ba26c644aa5e00cb04b3baa889ea91268d0d83004354" exitCode=0
Mar 18 18:14:49.987860 master-0 kubenswrapper[30278]: I0318 18:14:49.987853 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh" event={"ID":"bdaac128-a880-4a06-9b63-31eba0e41a53","Type":"ContainerDied","Data":"7d2e9695189f6ddd3612ba26c644aa5e00cb04b3baa889ea91268d0d83004354"}
Mar 18 18:14:49.988526 master-0 kubenswrapper[30278]: I0318 18:14:49.987883 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh" event={"ID":"bdaac128-a880-4a06-9b63-31eba0e41a53","Type":"ContainerStarted","Data":"9d689730ecde69bc79aad1599300f50f65aff180ee2b09beea18891fd411cd07"}
Mar 18 18:14:50.999004 master-0 kubenswrapper[30278]: I0318 18:14:50.998931 30278 generic.go:334] "Generic (PLEG): container finished" podID="bdaac128-a880-4a06-9b63-31eba0e41a53" containerID="6a799a3d28a39cab8fcfd5873dd60ce299a89f62498fdb16fd9463996bd05c67" exitCode=0
Mar 18 18:14:50.999629 master-0 kubenswrapper[30278]: I0318 18:14:50.999022 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh" event={"ID":"bdaac128-a880-4a06-9b63-31eba0e41a53","Type":"ContainerDied","Data":"6a799a3d28a39cab8fcfd5873dd60ce299a89f62498fdb16fd9463996bd05c67"}
Mar 18 18:14:52.012577 master-0 kubenswrapper[30278]: I0318 18:14:52.012518 30278 generic.go:334] "Generic (PLEG): container finished" podID="bdaac128-a880-4a06-9b63-31eba0e41a53" containerID="99982d83c09a09cac9c40575b39b9c10ad220dc6a63b06c41a73eb582d046efc" exitCode=0
Mar 18 18:14:52.012577 master-0 kubenswrapper[30278]: I0318 18:14:52.012580 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh" event={"ID":"bdaac128-a880-4a06-9b63-31eba0e41a53","Type":"ContainerDied","Data":"99982d83c09a09cac9c40575b39b9c10ad220dc6a63b06c41a73eb582d046efc"}
Mar 18 18:14:53.478364 master-0 kubenswrapper[30278]: I0318 18:14:53.478266 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"
Mar 18 18:14:53.600738 master-0 kubenswrapper[30278]: I0318 18:14:53.600667 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npmqh\" (UniqueName: \"kubernetes.io/projected/bdaac128-a880-4a06-9b63-31eba0e41a53-kube-api-access-npmqh\") pod \"bdaac128-a880-4a06-9b63-31eba0e41a53\" (UID: \"bdaac128-a880-4a06-9b63-31eba0e41a53\") "
Mar 18 18:14:53.600975 master-0 kubenswrapper[30278]: I0318 18:14:53.600840 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bdaac128-a880-4a06-9b63-31eba0e41a53-util\") pod \"bdaac128-a880-4a06-9b63-31eba0e41a53\" (UID: \"bdaac128-a880-4a06-9b63-31eba0e41a53\") "
Mar 18 18:14:53.600975 master-0 kubenswrapper[30278]: I0318 18:14:53.600940 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bdaac128-a880-4a06-9b63-31eba0e41a53-bundle\") pod \"bdaac128-a880-4a06-9b63-31eba0e41a53\" (UID: \"bdaac128-a880-4a06-9b63-31eba0e41a53\") "
Mar 18 18:14:53.601808 master-0 kubenswrapper[30278]: I0318 18:14:53.601748 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdaac128-a880-4a06-9b63-31eba0e41a53-bundle" (OuterVolumeSpecName: "bundle") pod "bdaac128-a880-4a06-9b63-31eba0e41a53" (UID: "bdaac128-a880-4a06-9b63-31eba0e41a53"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 18:14:53.604451 master-0 kubenswrapper[30278]: I0318 18:14:53.604399 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdaac128-a880-4a06-9b63-31eba0e41a53-kube-api-access-npmqh" (OuterVolumeSpecName: "kube-api-access-npmqh") pod "bdaac128-a880-4a06-9b63-31eba0e41a53" (UID: "bdaac128-a880-4a06-9b63-31eba0e41a53"). InnerVolumeSpecName "kube-api-access-npmqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:14:53.621006 master-0 kubenswrapper[30278]: I0318 18:14:53.620922 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdaac128-a880-4a06-9b63-31eba0e41a53-util" (OuterVolumeSpecName: "util") pod "bdaac128-a880-4a06-9b63-31eba0e41a53" (UID: "bdaac128-a880-4a06-9b63-31eba0e41a53"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 18:14:53.706598 master-0 kubenswrapper[30278]: I0318 18:14:53.706123 30278 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bdaac128-a880-4a06-9b63-31eba0e41a53-util\") on node \"master-0\" DevicePath \"\""
Mar 18 18:14:53.706598 master-0 kubenswrapper[30278]: I0318 18:14:53.706188 30278 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bdaac128-a880-4a06-9b63-31eba0e41a53-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:14:53.706598 master-0 kubenswrapper[30278]: I0318 18:14:53.706210 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npmqh\" (UniqueName: \"kubernetes.io/projected/bdaac128-a880-4a06-9b63-31eba0e41a53-kube-api-access-npmqh\") on node \"master-0\" DevicePath \"\""
Mar 18 18:14:54.036905 master-0 kubenswrapper[30278]: I0318 18:14:54.036755 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh" event={"ID":"bdaac128-a880-4a06-9b63-31eba0e41a53","Type":"ContainerDied","Data":"9d689730ecde69bc79aad1599300f50f65aff180ee2b09beea18891fd411cd07"}
Mar 18 18:14:54.036905 master-0 kubenswrapper[30278]: I0318 18:14:54.036829 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d689730ecde69bc79aad1599300f50f65aff180ee2b09beea18891fd411cd07"
Mar 18 18:14:54.037177 master-0 kubenswrapper[30278]: I0318 18:14:54.036893 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ed96add5fd8bdef5b30529b55940919abbfd5c2160aba46f636d9e153bpdxqh"
Mar 18 18:15:01.545085 master-0 kubenswrapper[30278]: I0318 18:15:01.545020 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-b95d58ccd-5hcl8"]
Mar 18 18:15:01.548399 master-0 kubenswrapper[30278]: E0318 18:15:01.545504 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaac128-a880-4a06-9b63-31eba0e41a53" containerName="extract"
Mar 18 18:15:01.548399 master-0 kubenswrapper[30278]: I0318 18:15:01.545520 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaac128-a880-4a06-9b63-31eba0e41a53" containerName="extract"
Mar 18 18:15:01.548399 master-0 kubenswrapper[30278]: E0318 18:15:01.545550 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaac128-a880-4a06-9b63-31eba0e41a53" containerName="pull"
Mar 18 18:15:01.548399 master-0 kubenswrapper[30278]: I0318 18:15:01.545560 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaac128-a880-4a06-9b63-31eba0e41a53" containerName="pull"
Mar 18 18:15:01.548399 master-0 kubenswrapper[30278]: E0318 18:15:01.545580 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdaac128-a880-4a06-9b63-31eba0e41a53" containerName="util"
Mar 18 18:15:01.548399 master-0 kubenswrapper[30278]: I0318 18:15:01.545589 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdaac128-a880-4a06-9b63-31eba0e41a53" containerName="util"
Mar 18 18:15:01.548399 master-0 kubenswrapper[30278]: I0318 18:15:01.545855 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdaac128-a880-4a06-9b63-31eba0e41a53" containerName="extract"
Mar 18 18:15:01.548399 master-0 kubenswrapper[30278]: I0318 18:15:01.546605 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b95d58ccd-5hcl8"
Mar 18 18:15:01.576498 master-0 kubenswrapper[30278]: I0318 18:15:01.576453 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s95p\" (UniqueName: \"kubernetes.io/projected/9b109c17-0ffa-4cd2-b3b6-594807af0537-kube-api-access-6s95p\") pod \"openstack-operator-controller-init-b95d58ccd-5hcl8\" (UID: \"9b109c17-0ffa-4cd2-b3b6-594807af0537\") " pod="openstack-operators/openstack-operator-controller-init-b95d58ccd-5hcl8"
Mar 18 18:15:01.584716 master-0 kubenswrapper[30278]: I0318 18:15:01.584622 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b95d58ccd-5hcl8"]
Mar 18 18:15:01.679369 master-0 kubenswrapper[30278]: I0318 18:15:01.678890 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s95p\" (UniqueName: \"kubernetes.io/projected/9b109c17-0ffa-4cd2-b3b6-594807af0537-kube-api-access-6s95p\") pod \"openstack-operator-controller-init-b95d58ccd-5hcl8\" (UID: \"9b109c17-0ffa-4cd2-b3b6-594807af0537\") " pod="openstack-operators/openstack-operator-controller-init-b95d58ccd-5hcl8"
Mar 18 18:15:01.700052 master-0 kubenswrapper[30278]: I0318 18:15:01.699976 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s95p\" (UniqueName: \"kubernetes.io/projected/9b109c17-0ffa-4cd2-b3b6-594807af0537-kube-api-access-6s95p\") pod \"openstack-operator-controller-init-b95d58ccd-5hcl8\" (UID: \"9b109c17-0ffa-4cd2-b3b6-594807af0537\") " pod="openstack-operators/openstack-operator-controller-init-b95d58ccd-5hcl8"
Mar 18 18:15:01.891667 master-0 kubenswrapper[30278]: I0318 18:15:01.891593 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b95d58ccd-5hcl8"
Mar 18 18:15:02.413042 master-0 kubenswrapper[30278]: I0318 18:15:02.412976 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b95d58ccd-5hcl8"]
Mar 18 18:15:02.439042 master-0 kubenswrapper[30278]: W0318 18:15:02.438927 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b109c17_0ffa_4cd2_b3b6_594807af0537.slice/crio-1ecbd7197951844a8169b40f925213a3b4123a428784674740bfbdc32e7f8e81 WatchSource:0}: Error finding container 1ecbd7197951844a8169b40f925213a3b4123a428784674740bfbdc32e7f8e81: Status 404 returned error can't find the container with id 1ecbd7197951844a8169b40f925213a3b4123a428784674740bfbdc32e7f8e81
Mar 18 18:15:03.158301 master-0 kubenswrapper[30278]: I0318 18:15:03.157500 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b95d58ccd-5hcl8" event={"ID":"9b109c17-0ffa-4cd2-b3b6-594807af0537","Type":"ContainerStarted","Data":"1ecbd7197951844a8169b40f925213a3b4123a428784674740bfbdc32e7f8e81"}
Mar 18 18:15:08.232768 master-0 kubenswrapper[30278]: I0318 18:15:08.232678 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b95d58ccd-5hcl8" event={"ID":"9b109c17-0ffa-4cd2-b3b6-594807af0537","Type":"ContainerStarted","Data":"a6ba88679bfeed0767d5c086dc7daafd008eed1d89166ee8c375ee11944437ae"}
Mar 18 18:15:08.233724 master-0 kubenswrapper[30278]: I0318 18:15:08.232985 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-b95d58ccd-5hcl8"
Mar 18 18:15:08.288321 master-0 kubenswrapper[30278]: I0318 18:15:08.286603 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-b95d58ccd-5hcl8" podStartSLOduration=2.395823727 podStartE2EDuration="7.286574505s" podCreationTimestamp="2026-03-18 18:15:01 +0000 UTC" firstStartedPulling="2026-03-18 18:15:02.44150967 +0000 UTC m=+871.608694265" lastFinishedPulling="2026-03-18 18:15:07.332260438 +0000 UTC m=+876.499445043" observedRunningTime="2026-03-18 18:15:08.27195492 +0000 UTC m=+877.439139545" watchObservedRunningTime="2026-03-18 18:15:08.286574505 +0000 UTC m=+877.453759130"
Mar 18 18:15:21.895096 master-0 kubenswrapper[30278]: I0318 18:15:21.894990 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-b95d58ccd-5hcl8"
Mar 18 18:15:42.437774 master-0 kubenswrapper[30278]: I0318 18:15:42.437683 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-qkpnz"]
Mar 18 18:15:42.441522 master-0 kubenswrapper[30278]: I0318 18:15:42.439361 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-qkpnz"
Mar 18 18:15:42.461312 master-0 kubenswrapper[30278]: I0318 18:15:42.458348 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-7dcfq"]
Mar 18 18:15:42.461312 master-0 kubenswrapper[30278]: I0318 18:15:42.459856 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-7dcfq"
Mar 18 18:15:42.470919 master-0 kubenswrapper[30278]: I0318 18:15:42.470846 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-qkpnz"]
Mar 18 18:15:42.496307 master-0 kubenswrapper[30278]: I0318 18:15:42.488054 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kczvr\" (UniqueName: \"kubernetes.io/projected/cc44a223-9705-4e38-986f-24d296b1ab51-kube-api-access-kczvr\") pod \"cinder-operator-controller-manager-8d58dc466-qkpnz\" (UID: \"cc44a223-9705-4e38-986f-24d296b1ab51\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-qkpnz"
Mar 18 18:15:42.565254 master-0 kubenswrapper[30278]: I0318 18:15:42.564426 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-7dcfq"]
Mar 18 18:15:42.631506 master-0 kubenswrapper[30278]: I0318 18:15:42.627740 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kczvr\" (UniqueName: \"kubernetes.io/projected/cc44a223-9705-4e38-986f-24d296b1ab51-kube-api-access-kczvr\") pod \"cinder-operator-controller-manager-8d58dc466-qkpnz\" (UID: \"cc44a223-9705-4e38-986f-24d296b1ab51\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-qkpnz"
Mar 18 18:15:42.631506 master-0 kubenswrapper[30278]: I0318 18:15:42.627987 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb85s\" (UniqueName: \"kubernetes.io/projected/12a2950a-56b8-4997-9115-1acb7487d7b8-kube-api-access-lb85s\") pod \"barbican-operator-controller-manager-59bc569d95-7dcfq\" (UID: \"12a2950a-56b8-4997-9115-1acb7487d7b8\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-7dcfq"
Mar 18 18:15:42.680400 master-0 kubenswrapper[30278]: I0318 18:15:42.675042 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-nmf4w"]
Mar 18 18:15:42.680400 master-0 kubenswrapper[30278]: I0318 18:15:42.676589 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nmf4w"
Mar 18 18:15:42.711524 master-0 kubenswrapper[30278]: I0318 18:15:42.707224 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kczvr\" (UniqueName: \"kubernetes.io/projected/cc44a223-9705-4e38-986f-24d296b1ab51-kube-api-access-kczvr\") pod \"cinder-operator-controller-manager-8d58dc466-qkpnz\" (UID: \"cc44a223-9705-4e38-986f-24d296b1ab51\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-qkpnz"
Mar 18 18:15:42.749651 master-0 kubenswrapper[30278]: I0318 18:15:42.746255 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mp82\" (UniqueName: \"kubernetes.io/projected/017ad4ff-4f9a-4d44-b2d4-9b694732f01b-kube-api-access-2mp82\") pod \"designate-operator-controller-manager-588d4d986b-nmf4w\" (UID: \"017ad4ff-4f9a-4d44-b2d4-9b694732f01b\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nmf4w"
Mar 18 18:15:42.749651 master-0 kubenswrapper[30278]: I0318 18:15:42.746404 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb85s\" (UniqueName: \"kubernetes.io/projected/12a2950a-56b8-4997-9115-1acb7487d7b8-kube-api-access-lb85s\") pod \"barbican-operator-controller-manager-59bc569d95-7dcfq\" (UID: \"12a2950a-56b8-4997-9115-1acb7487d7b8\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-7dcfq"
Mar 18 18:15:42.832308 master-0 kubenswrapper[30278]: I0318 18:15:42.824086 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-qkpnz"
Mar 18 18:15:42.855255 master-0 kubenswrapper[30278]: I0318 18:15:42.844373 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb85s\" (UniqueName: \"kubernetes.io/projected/12a2950a-56b8-4997-9115-1acb7487d7b8-kube-api-access-lb85s\") pod \"barbican-operator-controller-manager-59bc569d95-7dcfq\" (UID: \"12a2950a-56b8-4997-9115-1acb7487d7b8\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-7dcfq"
Mar 18 18:15:42.862755 master-0 kubenswrapper[30278]: I0318 18:15:42.858502 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mp82\" (UniqueName: \"kubernetes.io/projected/017ad4ff-4f9a-4d44-b2d4-9b694732f01b-kube-api-access-2mp82\") pod \"designate-operator-controller-manager-588d4d986b-nmf4w\" (UID: \"017ad4ff-4f9a-4d44-b2d4-9b694732f01b\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nmf4w"
Mar 18 18:15:43.030076 master-0 kubenswrapper[30278]: I0318 18:15:43.028073 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-7dcfq"
Mar 18 18:15:43.060649 master-0 kubenswrapper[30278]: I0318 18:15:43.054852 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mp82\" (UniqueName: \"kubernetes.io/projected/017ad4ff-4f9a-4d44-b2d4-9b694732f01b-kube-api-access-2mp82\") pod \"designate-operator-controller-manager-588d4d986b-nmf4w\" (UID: \"017ad4ff-4f9a-4d44-b2d4-9b694732f01b\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nmf4w"
Mar 18 18:15:43.133819 master-0 kubenswrapper[30278]: I0318 18:15:43.130084 30278 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nmf4w" Mar 18 18:15:43.268984 master-0 kubenswrapper[30278]: I0318 18:15:43.267765 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-nmf4w"] Mar 18 18:15:43.326588 master-0 kubenswrapper[30278]: I0318 18:15:43.326503 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-kmxft"] Mar 18 18:15:43.334675 master-0 kubenswrapper[30278]: I0318 18:15:43.331822 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kmxft" Mar 18 18:15:43.420801 master-0 kubenswrapper[30278]: I0318 18:15:43.414210 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jprc9\" (UniqueName: \"kubernetes.io/projected/3df82072-f7cc-4b7a-82ae-803eadfb2dde-kube-api-access-jprc9\") pod \"glance-operator-controller-manager-79df6bcc97-kmxft\" (UID: \"3df82072-f7cc-4b7a-82ae-803eadfb2dde\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kmxft" Mar 18 18:15:43.425576 master-0 kubenswrapper[30278]: I0318 18:15:43.424361 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-kmxft"] Mar 18 18:15:43.437043 master-0 kubenswrapper[30278]: I0318 18:15:43.435681 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-q5xdd"] Mar 18 18:15:43.499149 master-0 kubenswrapper[30278]: I0318 18:15:43.498324 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-q5xdd"] Mar 18 18:15:43.499149 master-0 kubenswrapper[30278]: I0318 18:15:43.498381 30278 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-stb7j"] Mar 18 18:15:43.499149 master-0 kubenswrapper[30278]: I0318 18:15:43.498754 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-q5xdd" Mar 18 18:15:43.500960 master-0 kubenswrapper[30278]: I0318 18:15:43.500921 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-stb7j" Mar 18 18:15:43.501415 master-0 kubenswrapper[30278]: I0318 18:15:43.501346 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh"] Mar 18 18:15:43.502229 master-0 kubenswrapper[30278]: I0318 18:15:43.502198 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" Mar 18 18:15:43.509682 master-0 kubenswrapper[30278]: I0318 18:15:43.507475 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 18 18:15:43.525250 master-0 kubenswrapper[30278]: I0318 18:15:43.516874 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh"] Mar 18 18:15:43.525250 master-0 kubenswrapper[30278]: I0318 18:15:43.517620 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jprc9\" (UniqueName: \"kubernetes.io/projected/3df82072-f7cc-4b7a-82ae-803eadfb2dde-kube-api-access-jprc9\") pod \"glance-operator-controller-manager-79df6bcc97-kmxft\" (UID: \"3df82072-f7cc-4b7a-82ae-803eadfb2dde\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kmxft" Mar 18 18:15:43.528421 master-0 kubenswrapper[30278]: I0318 18:15:43.528365 30278 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-stb7j"] Mar 18 18:15:43.545717 master-0 kubenswrapper[30278]: I0318 18:15:43.536823 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-j5p6q"] Mar 18 18:15:43.545717 master-0 kubenswrapper[30278]: I0318 18:15:43.538630 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-j5p6q" Mar 18 18:15:43.555461 master-0 kubenswrapper[30278]: I0318 18:15:43.555392 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-659bd6b58d-q7g49"] Mar 18 18:15:43.556867 master-0 kubenswrapper[30278]: I0318 18:15:43.556834 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-659bd6b58d-q7g49" Mar 18 18:15:43.562421 master-0 kubenswrapper[30278]: I0318 18:15:43.560132 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-nml4w"] Mar 18 18:15:43.562421 master-0 kubenswrapper[30278]: I0318 18:15:43.561833 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-nml4w" Mar 18 18:15:43.567379 master-0 kubenswrapper[30278]: I0318 18:15:43.567169 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-659bd6b58d-q7g49"] Mar 18 18:15:43.587763 master-0 kubenswrapper[30278]: I0318 18:15:43.586364 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-j5p6q"] Mar 18 18:15:43.609808 master-0 kubenswrapper[30278]: I0318 18:15:43.608773 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jprc9\" (UniqueName: \"kubernetes.io/projected/3df82072-f7cc-4b7a-82ae-803eadfb2dde-kube-api-access-jprc9\") pod \"glance-operator-controller-manager-79df6bcc97-kmxft\" (UID: \"3df82072-f7cc-4b7a-82ae-803eadfb2dde\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kmxft" Mar 18 18:15:43.623371 master-0 kubenswrapper[30278]: I0318 18:15:43.622357 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fgwq\" (UniqueName: \"kubernetes.io/projected/dee848d7-cf06-4bfe-b6e0-3ab0afa826a9-kube-api-access-6fgwq\") pod \"horizon-operator-controller-manager-8464cc45fb-stb7j\" (UID: \"dee848d7-cf06-4bfe-b6e0-3ab0afa826a9\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-stb7j" Mar 18 18:15:43.623371 master-0 kubenswrapper[30278]: I0318 18:15:43.610919 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-nml4w"] Mar 18 18:15:43.623371 master-0 kubenswrapper[30278]: I0318 18:15:43.622535 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9zgm\" (UniqueName: 
\"kubernetes.io/projected/08dec5b3-09c6-4aa4-8c40-544556d1b7d4-kube-api-access-s9zgm\") pod \"ironic-operator-controller-manager-659bd6b58d-q7g49\" (UID: \"08dec5b3-09c6-4aa4-8c40-544556d1b7d4\") " pod="openstack-operators/ironic-operator-controller-manager-659bd6b58d-q7g49" Mar 18 18:15:43.623371 master-0 kubenswrapper[30278]: I0318 18:15:43.622681 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmzst\" (UniqueName: \"kubernetes.io/projected/d486893b-62ed-4907-a004-9f6bf4e0a79f-kube-api-access-cmzst\") pod \"infra-operator-controller-manager-7dd6bb94c9-mxxlh\" (UID: \"d486893b-62ed-4907-a004-9f6bf4e0a79f\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" Mar 18 18:15:43.623371 master-0 kubenswrapper[30278]: I0318 18:15:43.622738 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6czzj\" (UniqueName: \"kubernetes.io/projected/2f573bc4-cb28-4631-9b9e-2cfbc078e1ed-kube-api-access-6czzj\") pod \"heat-operator-controller-manager-67dd5f86f5-q5xdd\" (UID: \"2f573bc4-cb28-4631-9b9e-2cfbc078e1ed\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-q5xdd" Mar 18 18:15:43.623371 master-0 kubenswrapper[30278]: I0318 18:15:43.622773 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4rc7\" (UniqueName: \"kubernetes.io/projected/da9d67e1-3213-4c5a-9b44-b02d440b36e7-kube-api-access-p4rc7\") pod \"keystone-operator-controller-manager-768b96df4c-j5p6q\" (UID: \"da9d67e1-3213-4c5a-9b44-b02d440b36e7\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-j5p6q" Mar 18 18:15:43.623371 master-0 kubenswrapper[30278]: I0318 18:15:43.622823 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kw5d\" (UniqueName: 
\"kubernetes.io/projected/d3080dde-8e85-442a-ae2a-581507874a2d-kube-api-access-6kw5d\") pod \"manila-operator-controller-manager-55f864c847-nml4w\" (UID: \"d3080dde-8e85-442a-ae2a-581507874a2d\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-nml4w" Mar 18 18:15:43.623371 master-0 kubenswrapper[30278]: I0318 18:15:43.622859 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-mxxlh\" (UID: \"d486893b-62ed-4907-a004-9f6bf4e0a79f\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" Mar 18 18:15:43.640237 master-0 kubenswrapper[30278]: I0318 18:15:43.639596 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-5hkw5"] Mar 18 18:15:43.642064 master-0 kubenswrapper[30278]: I0318 18:15:43.641073 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-5hkw5" Mar 18 18:15:43.675556 master-0 kubenswrapper[30278]: I0318 18:15:43.674677 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-5hkw5"] Mar 18 18:15:43.711384 master-0 kubenswrapper[30278]: I0318 18:15:43.700786 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-vs6hj"] Mar 18 18:15:43.711384 master-0 kubenswrapper[30278]: I0318 18:15:43.709001 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-vs6hj"] Mar 18 18:15:43.711384 master-0 kubenswrapper[30278]: I0318 18:15:43.709202 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-vs6hj" Mar 18 18:15:43.727317 master-0 kubenswrapper[30278]: I0318 18:15:43.726438 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmzst\" (UniqueName: \"kubernetes.io/projected/d486893b-62ed-4907-a004-9f6bf4e0a79f-kube-api-access-cmzst\") pod \"infra-operator-controller-manager-7dd6bb94c9-mxxlh\" (UID: \"d486893b-62ed-4907-a004-9f6bf4e0a79f\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" Mar 18 18:15:43.727317 master-0 kubenswrapper[30278]: I0318 18:15:43.726517 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6czzj\" (UniqueName: \"kubernetes.io/projected/2f573bc4-cb28-4631-9b9e-2cfbc078e1ed-kube-api-access-6czzj\") pod \"heat-operator-controller-manager-67dd5f86f5-q5xdd\" (UID: \"2f573bc4-cb28-4631-9b9e-2cfbc078e1ed\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-q5xdd" Mar 18 18:15:43.727317 master-0 kubenswrapper[30278]: I0318 18:15:43.726547 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4rc7\" (UniqueName: \"kubernetes.io/projected/da9d67e1-3213-4c5a-9b44-b02d440b36e7-kube-api-access-p4rc7\") pod \"keystone-operator-controller-manager-768b96df4c-j5p6q\" (UID: \"da9d67e1-3213-4c5a-9b44-b02d440b36e7\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-j5p6q" Mar 18 18:15:43.727317 master-0 kubenswrapper[30278]: I0318 18:15:43.726576 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kw5d\" (UniqueName: \"kubernetes.io/projected/d3080dde-8e85-442a-ae2a-581507874a2d-kube-api-access-6kw5d\") pod \"manila-operator-controller-manager-55f864c847-nml4w\" (UID: \"d3080dde-8e85-442a-ae2a-581507874a2d\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-nml4w" Mar 
18 18:15:43.727317 master-0 kubenswrapper[30278]: I0318 18:15:43.726600 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-mxxlh\" (UID: \"d486893b-62ed-4907-a004-9f6bf4e0a79f\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" Mar 18 18:15:43.727317 master-0 kubenswrapper[30278]: I0318 18:15:43.726756 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xkt6\" (UniqueName: \"kubernetes.io/projected/7bbcfafe-41f1-44dc-9c89-89dae4c1fac4-kube-api-access-7xkt6\") pod \"mariadb-operator-controller-manager-67ccfc9778-5hkw5\" (UID: \"7bbcfafe-41f1-44dc-9c89-89dae4c1fac4\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-5hkw5" Mar 18 18:15:43.727317 master-0 kubenswrapper[30278]: I0318 18:15:43.726832 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fgwq\" (UniqueName: \"kubernetes.io/projected/dee848d7-cf06-4bfe-b6e0-3ab0afa826a9-kube-api-access-6fgwq\") pod \"horizon-operator-controller-manager-8464cc45fb-stb7j\" (UID: \"dee848d7-cf06-4bfe-b6e0-3ab0afa826a9\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-stb7j" Mar 18 18:15:43.727317 master-0 kubenswrapper[30278]: I0318 18:15:43.726868 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9zgm\" (UniqueName: \"kubernetes.io/projected/08dec5b3-09c6-4aa4-8c40-544556d1b7d4-kube-api-access-s9zgm\") pod \"ironic-operator-controller-manager-659bd6b58d-q7g49\" (UID: \"08dec5b3-09c6-4aa4-8c40-544556d1b7d4\") " pod="openstack-operators/ironic-operator-controller-manager-659bd6b58d-q7g49" Mar 18 18:15:43.727883 master-0 kubenswrapper[30278]: E0318 18:15:43.727811 30278 secret.go:189] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 18 18:15:43.727922 master-0 kubenswrapper[30278]: E0318 18:15:43.727887 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert podName:d486893b-62ed-4907-a004-9f6bf4e0a79f nodeName:}" failed. No retries permitted until 2026-03-18 18:15:44.227851205 +0000 UTC m=+913.395035800 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert") pod "infra-operator-controller-manager-7dd6bb94c9-mxxlh" (UID: "d486893b-62ed-4907-a004-9f6bf4e0a79f") : secret "infra-operator-webhook-server-cert" not found Mar 18 18:15:43.750302 master-0 kubenswrapper[30278]: I0318 18:15:43.742090 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kmxft" Mar 18 18:15:43.768711 master-0 kubenswrapper[30278]: I0318 18:15:43.759797 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kw5d\" (UniqueName: \"kubernetes.io/projected/d3080dde-8e85-442a-ae2a-581507874a2d-kube-api-access-6kw5d\") pod \"manila-operator-controller-manager-55f864c847-nml4w\" (UID: \"d3080dde-8e85-442a-ae2a-581507874a2d\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-nml4w" Mar 18 18:15:43.768711 master-0 kubenswrapper[30278]: I0318 18:15:43.764036 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4rc7\" (UniqueName: \"kubernetes.io/projected/da9d67e1-3213-4c5a-9b44-b02d440b36e7-kube-api-access-p4rc7\") pod \"keystone-operator-controller-manager-768b96df4c-j5p6q\" (UID: \"da9d67e1-3213-4c5a-9b44-b02d440b36e7\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-j5p6q" Mar 18 18:15:43.773632 master-0 kubenswrapper[30278]: 
I0318 18:15:43.773564 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmzst\" (UniqueName: \"kubernetes.io/projected/d486893b-62ed-4907-a004-9f6bf4e0a79f-kube-api-access-cmzst\") pod \"infra-operator-controller-manager-7dd6bb94c9-mxxlh\" (UID: \"d486893b-62ed-4907-a004-9f6bf4e0a79f\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" Mar 18 18:15:43.783994 master-0 kubenswrapper[30278]: I0318 18:15:43.778416 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9zgm\" (UniqueName: \"kubernetes.io/projected/08dec5b3-09c6-4aa4-8c40-544556d1b7d4-kube-api-access-s9zgm\") pod \"ironic-operator-controller-manager-659bd6b58d-q7g49\" (UID: \"08dec5b3-09c6-4aa4-8c40-544556d1b7d4\") " pod="openstack-operators/ironic-operator-controller-manager-659bd6b58d-q7g49" Mar 18 18:15:43.784251 master-0 kubenswrapper[30278]: I0318 18:15:43.784224 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fgwq\" (UniqueName: \"kubernetes.io/projected/dee848d7-cf06-4bfe-b6e0-3ab0afa826a9-kube-api-access-6fgwq\") pod \"horizon-operator-controller-manager-8464cc45fb-stb7j\" (UID: \"dee848d7-cf06-4bfe-b6e0-3ab0afa826a9\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-stb7j" Mar 18 18:15:43.801148 master-0 kubenswrapper[30278]: I0318 18:15:43.801068 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-9btcv"] Mar 18 18:15:43.803021 master-0 kubenswrapper[30278]: I0318 18:15:43.802760 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9btcv" Mar 18 18:15:43.817156 master-0 kubenswrapper[30278]: I0318 18:15:43.817107 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6czzj\" (UniqueName: \"kubernetes.io/projected/2f573bc4-cb28-4631-9b9e-2cfbc078e1ed-kube-api-access-6czzj\") pod \"heat-operator-controller-manager-67dd5f86f5-q5xdd\" (UID: \"2f573bc4-cb28-4631-9b9e-2cfbc078e1ed\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-q5xdd" Mar 18 18:15:43.829890 master-0 kubenswrapper[30278]: I0318 18:15:43.828596 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nm4j\" (UniqueName: \"kubernetes.io/projected/9fa45639-3436-43df-a879-b6445c664661-kube-api-access-8nm4j\") pod \"neutron-operator-controller-manager-767865f676-vs6hj\" (UID: \"9fa45639-3436-43df-a879-b6445c664661\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-vs6hj" Mar 18 18:15:43.829890 master-0 kubenswrapper[30278]: I0318 18:15:43.828847 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xkt6\" (UniqueName: \"kubernetes.io/projected/7bbcfafe-41f1-44dc-9c89-89dae4c1fac4-kube-api-access-7xkt6\") pod \"mariadb-operator-controller-manager-67ccfc9778-5hkw5\" (UID: \"7bbcfafe-41f1-44dc-9c89-89dae4c1fac4\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-5hkw5" Mar 18 18:15:43.832063 master-0 kubenswrapper[30278]: I0318 18:15:43.831409 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-hlkz4"] Mar 18 18:15:43.835099 master-0 kubenswrapper[30278]: I0318 18:15:43.833485 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-hlkz4" Mar 18 18:15:43.839747 master-0 kubenswrapper[30278]: I0318 18:15:43.839685 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-9btcv"] Mar 18 18:15:43.863305 master-0 kubenswrapper[30278]: I0318 18:15:43.860995 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-hlkz4"] Mar 18 18:15:43.863305 master-0 kubenswrapper[30278]: I0318 18:15:43.861064 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb"] Mar 18 18:15:43.863305 master-0 kubenswrapper[30278]: I0318 18:15:43.862869 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xkt6\" (UniqueName: \"kubernetes.io/projected/7bbcfafe-41f1-44dc-9c89-89dae4c1fac4-kube-api-access-7xkt6\") pod \"mariadb-operator-controller-manager-67ccfc9778-5hkw5\" (UID: \"7bbcfafe-41f1-44dc-9c89-89dae4c1fac4\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-5hkw5" Mar 18 18:15:43.863718 master-0 kubenswrapper[30278]: I0318 18:15:43.863662 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" Mar 18 18:15:43.871297 master-0 kubenswrapper[30278]: I0318 18:15:43.866708 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 18 18:15:43.871297 master-0 kubenswrapper[30278]: I0318 18:15:43.870372 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-l66pc"] Mar 18 18:15:43.876288 master-0 kubenswrapper[30278]: I0318 18:15:43.872378 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-l66pc" Mar 18 18:15:43.883290 master-0 kubenswrapper[30278]: I0318 18:15:43.879799 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb"] Mar 18 18:15:43.896652 master-0 kubenswrapper[30278]: I0318 18:15:43.891685 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-dx9nw"] Mar 18 18:15:43.896652 master-0 kubenswrapper[30278]: I0318 18:15:43.894580 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-dx9nw" Mar 18 18:15:43.908091 master-0 kubenswrapper[30278]: I0318 18:15:43.906872 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-dx9nw"] Mar 18 18:15:43.925359 master-0 kubenswrapper[30278]: I0318 18:15:43.921227 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-l66pc"] Mar 18 18:15:43.935520 master-0 kubenswrapper[30278]: I0318 18:15:43.932980 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nm4j\" (UniqueName: \"kubernetes.io/projected/9fa45639-3436-43df-a879-b6445c664661-kube-api-access-8nm4j\") pod \"neutron-operator-controller-manager-767865f676-vs6hj\" (UID: \"9fa45639-3436-43df-a879-b6445c664661\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-vs6hj" Mar 18 18:15:43.935520 master-0 kubenswrapper[30278]: I0318 18:15:43.933078 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvjzm\" (UniqueName: \"kubernetes.io/projected/65f3dd0a-e9a1-4087-ba1a-47366cf25382-kube-api-access-mvjzm\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jnvcb\" (UID: \"65f3dd0a-e9a1-4087-ba1a-47366cf25382\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" Mar 18 18:15:43.935520 master-0 kubenswrapper[30278]: I0318 18:15:43.933119 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jnvcb\" (UID: \"65f3dd0a-e9a1-4087-ba1a-47366cf25382\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" 
Mar 18 18:15:43.935520 master-0 kubenswrapper[30278]: I0318 18:15:43.933156 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2j5g\" (UniqueName: \"kubernetes.io/projected/9808351e-1785-48d6-a2fd-8953742f27cc-kube-api-access-t2j5g\") pod \"nova-operator-controller-manager-5d488d59fb-9btcv\" (UID: \"9808351e-1785-48d6-a2fd-8953742f27cc\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9btcv"
Mar 18 18:15:43.935520 master-0 kubenswrapper[30278]: I0318 18:15:43.933193 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-272hk\" (UniqueName: \"kubernetes.io/projected/dbb9017b-18df-4021-bbe9-af055932f22a-kube-api-access-272hk\") pod \"octavia-operator-controller-manager-5b9f45d989-hlkz4\" (UID: \"dbb9017b-18df-4021-bbe9-af055932f22a\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-hlkz4"
Mar 18 18:15:43.935520 master-0 kubenswrapper[30278]: I0318 18:15:43.933213 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq76s\" (UniqueName: \"kubernetes.io/projected/1aa3b381-7785-41a0-9d4c-094a9e4abbe5-kube-api-access-zq76s\") pod \"ovn-operator-controller-manager-884679f54-l66pc\" (UID: \"1aa3b381-7785-41a0-9d4c-094a9e4abbe5\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-l66pc"
Mar 18 18:15:43.935520 master-0 kubenswrapper[30278]: I0318 18:15:43.933245 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdpqs\" (UniqueName: \"kubernetes.io/projected/ff1d10fa-b70d-439b-8183-dbdf8042e43d-kube-api-access-tdpqs\") pod \"placement-operator-controller-manager-5784578c99-dx9nw\" (UID: \"ff1d10fa-b70d-439b-8183-dbdf8042e43d\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-dx9nw"
Mar 18 18:15:43.939094 master-0 kubenswrapper[30278]: I0318 18:15:43.937534 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-q5xdd"
Mar 18 18:15:43.958952 master-0 kubenswrapper[30278]: I0318 18:15:43.958139 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-vf92l"]
Mar 18 18:15:43.961400 master-0 kubenswrapper[30278]: I0318 18:15:43.959740 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-vf92l"
Mar 18 18:15:43.970348 master-0 kubenswrapper[30278]: I0318 18:15:43.967118 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nm4j\" (UniqueName: \"kubernetes.io/projected/9fa45639-3436-43df-a879-b6445c664661-kube-api-access-8nm4j\") pod \"neutron-operator-controller-manager-767865f676-vs6hj\" (UID: \"9fa45639-3436-43df-a879-b6445c664661\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-vs6hj"
Mar 18 18:15:43.970348 master-0 kubenswrapper[30278]: I0318 18:15:43.967402 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-vf92l"]
Mar 18 18:15:43.987950 master-0 kubenswrapper[30278]: I0318 18:15:43.984387 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-j5p6q"
Mar 18 18:15:44.000655 master-0 kubenswrapper[30278]: I0318 18:15:43.994670 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-z9sth"]
Mar 18 18:15:44.000655 master-0 kubenswrapper[30278]: I0318 18:15:43.996356 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-z9sth"
Mar 18 18:15:44.007316 master-0 kubenswrapper[30278]: I0318 18:15:44.004693 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-stb7j"
Mar 18 18:15:44.013341 master-0 kubenswrapper[30278]: I0318 18:15:44.007693 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-lkr87"]
Mar 18 18:15:44.013341 master-0 kubenswrapper[30278]: I0318 18:15:44.009985 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-lkr87"
Mar 18 18:15:44.019418 master-0 kubenswrapper[30278]: I0318 18:15:44.019360 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-z9sth"]
Mar 18 18:15:44.019751 master-0 kubenswrapper[30278]: I0318 18:15:44.019570 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-659bd6b58d-q7g49"
Mar 18 18:15:44.053313 master-0 kubenswrapper[30278]: I0318 18:15:44.036099 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c24w9\" (UniqueName: \"kubernetes.io/projected/6cde8ef7-31e7-496a-95a0-381f4bd6c4ed-kube-api-access-c24w9\") pod \"swift-operator-controller-manager-c674c5965-vf92l\" (UID: \"6cde8ef7-31e7-496a-95a0-381f4bd6c4ed\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-vf92l"
Mar 18 18:15:44.053313 master-0 kubenswrapper[30278]: I0318 18:15:44.036251 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9kpn\" (UniqueName: \"kubernetes.io/projected/445552af-e585-4728-adfa-9fe6f9e79cc1-kube-api-access-b9kpn\") pod \"telemetry-operator-controller-manager-d6b694c5-z9sth\" (UID: \"445552af-e585-4728-adfa-9fe6f9e79cc1\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-z9sth"
Mar 18 18:15:44.053313 master-0 kubenswrapper[30278]: I0318 18:15:44.036373 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvjzm\" (UniqueName: \"kubernetes.io/projected/65f3dd0a-e9a1-4087-ba1a-47366cf25382-kube-api-access-mvjzm\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jnvcb\" (UID: \"65f3dd0a-e9a1-4087-ba1a-47366cf25382\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb"
Mar 18 18:15:44.053313 master-0 kubenswrapper[30278]: I0318 18:15:44.036407 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jnvcb\" (UID: \"65f3dd0a-e9a1-4087-ba1a-47366cf25382\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb"
Mar 18 18:15:44.053313 master-0 kubenswrapper[30278]: I0318 18:15:44.036439 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2j5g\" (UniqueName: \"kubernetes.io/projected/9808351e-1785-48d6-a2fd-8953742f27cc-kube-api-access-t2j5g\") pod \"nova-operator-controller-manager-5d488d59fb-9btcv\" (UID: \"9808351e-1785-48d6-a2fd-8953742f27cc\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9btcv"
Mar 18 18:15:44.053313 master-0 kubenswrapper[30278]: I0318 18:15:44.036476 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-272hk\" (UniqueName: \"kubernetes.io/projected/dbb9017b-18df-4021-bbe9-af055932f22a-kube-api-access-272hk\") pod \"octavia-operator-controller-manager-5b9f45d989-hlkz4\" (UID: \"dbb9017b-18df-4021-bbe9-af055932f22a\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-hlkz4"
Mar 18 18:15:44.053313 master-0 kubenswrapper[30278]: I0318 18:15:44.036503 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq76s\" (UniqueName: \"kubernetes.io/projected/1aa3b381-7785-41a0-9d4c-094a9e4abbe5-kube-api-access-zq76s\") pod \"ovn-operator-controller-manager-884679f54-l66pc\" (UID: \"1aa3b381-7785-41a0-9d4c-094a9e4abbe5\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-l66pc"
Mar 18 18:15:44.053313 master-0 kubenswrapper[30278]: I0318 18:15:44.036542 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdpqs\" (UniqueName: \"kubernetes.io/projected/ff1d10fa-b70d-439b-8183-dbdf8042e43d-kube-api-access-tdpqs\") pod \"placement-operator-controller-manager-5784578c99-dx9nw\" (UID: \"ff1d10fa-b70d-439b-8183-dbdf8042e43d\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-dx9nw"
Mar 18 18:15:44.053313 master-0 kubenswrapper[30278]: I0318 18:15:44.037262 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-lkr87"]
Mar 18 18:15:44.053313 master-0 kubenswrapper[30278]: E0318 18:15:44.037493 30278 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 18 18:15:44.064759 master-0 kubenswrapper[30278]: E0318 18:15:44.060808 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert podName:65f3dd0a-e9a1-4087-ba1a-47366cf25382 nodeName:}" failed. No retries permitted until 2026-03-18 18:15:44.560769958 +0000 UTC m=+913.727954553 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" (UID: "65f3dd0a-e9a1-4087-ba1a-47366cf25382") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 18 18:15:44.120516 master-0 kubenswrapper[30278]: I0318 18:15:44.102222 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-272hk\" (UniqueName: \"kubernetes.io/projected/dbb9017b-18df-4021-bbe9-af055932f22a-kube-api-access-272hk\") pod \"octavia-operator-controller-manager-5b9f45d989-hlkz4\" (UID: \"dbb9017b-18df-4021-bbe9-af055932f22a\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-hlkz4"
Mar 18 18:15:44.120516 master-0 kubenswrapper[30278]: I0318 18:15:44.102657 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdpqs\" (UniqueName: \"kubernetes.io/projected/ff1d10fa-b70d-439b-8183-dbdf8042e43d-kube-api-access-tdpqs\") pod \"placement-operator-controller-manager-5784578c99-dx9nw\" (UID: \"ff1d10fa-b70d-439b-8183-dbdf8042e43d\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-dx9nw"
Mar 18 18:15:44.120516 master-0 kubenswrapper[30278]: I0318 18:15:44.107592 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq76s\" (UniqueName: \"kubernetes.io/projected/1aa3b381-7785-41a0-9d4c-094a9e4abbe5-kube-api-access-zq76s\") pod \"ovn-operator-controller-manager-884679f54-l66pc\" (UID: \"1aa3b381-7785-41a0-9d4c-094a9e4abbe5\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-l66pc"
Mar 18 18:15:44.120516 master-0 kubenswrapper[30278]: I0318 18:15:44.111429 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvjzm\" (UniqueName: \"kubernetes.io/projected/65f3dd0a-e9a1-4087-ba1a-47366cf25382-kube-api-access-mvjzm\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jnvcb\" (UID: \"65f3dd0a-e9a1-4087-ba1a-47366cf25382\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb"
Mar 18 18:15:44.145306 master-0 kubenswrapper[30278]: I0318 18:15:44.134097 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-nml4w"
Mar 18 18:15:44.145306 master-0 kubenswrapper[30278]: I0318 18:15:44.140005 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c24w9\" (UniqueName: \"kubernetes.io/projected/6cde8ef7-31e7-496a-95a0-381f4bd6c4ed-kube-api-access-c24w9\") pod \"swift-operator-controller-manager-c674c5965-vf92l\" (UID: \"6cde8ef7-31e7-496a-95a0-381f4bd6c4ed\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-vf92l"
Mar 18 18:15:44.145306 master-0 kubenswrapper[30278]: I0318 18:15:44.140166 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw5zd\" (UniqueName: \"kubernetes.io/projected/4053a2ba-dff4-4ce8-a482-567999b6cd75-kube-api-access-dw5zd\") pod \"test-operator-controller-manager-5c5cb9c4d7-lkr87\" (UID: \"4053a2ba-dff4-4ce8-a482-567999b6cd75\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-lkr87"
Mar 18 18:15:44.145306 master-0 kubenswrapper[30278]: I0318 18:15:44.140209 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9kpn\" (UniqueName: \"kubernetes.io/projected/445552af-e585-4728-adfa-9fe6f9e79cc1-kube-api-access-b9kpn\") pod \"telemetry-operator-controller-manager-d6b694c5-z9sth\" (UID: \"445552af-e585-4728-adfa-9fe6f9e79cc1\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-z9sth"
Mar 18 18:15:44.167548 master-0 kubenswrapper[30278]: I0318 18:15:44.165719 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2j5g\" (UniqueName: \"kubernetes.io/projected/9808351e-1785-48d6-a2fd-8953742f27cc-kube-api-access-t2j5g\") pod \"nova-operator-controller-manager-5d488d59fb-9btcv\" (UID: \"9808351e-1785-48d6-a2fd-8953742f27cc\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9btcv"
Mar 18 18:15:44.179268 master-0 kubenswrapper[30278]: I0318 18:15:44.178985 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c24w9\" (UniqueName: \"kubernetes.io/projected/6cde8ef7-31e7-496a-95a0-381f4bd6c4ed-kube-api-access-c24w9\") pod \"swift-operator-controller-manager-c674c5965-vf92l\" (UID: \"6cde8ef7-31e7-496a-95a0-381f4bd6c4ed\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-vf92l"
Mar 18 18:15:44.180008 master-0 kubenswrapper[30278]: I0318 18:15:44.179949 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-v9v5q"]
Mar 18 18:15:44.184382 master-0 kubenswrapper[30278]: I0318 18:15:44.182159 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-5hkw5"
Mar 18 18:15:44.184382 master-0 kubenswrapper[30278]: I0318 18:15:44.183590 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-v9v5q"
Mar 18 18:15:44.189684 master-0 kubenswrapper[30278]: I0318 18:15:44.188372 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9kpn\" (UniqueName: \"kubernetes.io/projected/445552af-e585-4728-adfa-9fe6f9e79cc1-kube-api-access-b9kpn\") pod \"telemetry-operator-controller-manager-d6b694c5-z9sth\" (UID: \"445552af-e585-4728-adfa-9fe6f9e79cc1\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-z9sth"
Mar 18 18:15:44.197742 master-0 kubenswrapper[30278]: I0318 18:15:44.196582 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-vs6hj"
Mar 18 18:15:44.219461 master-0 kubenswrapper[30278]: I0318 18:15:44.219358 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9btcv"
Mar 18 18:15:44.250605 master-0 kubenswrapper[30278]: W0318 18:15:44.237843 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc44a223_9705_4e38_986f_24d296b1ab51.slice/crio-01d872957206af2cfeaf4f041a601e92ce08ec4120b3f2656b039ea94ea294a4 WatchSource:0}: Error finding container 01d872957206af2cfeaf4f041a601e92ce08ec4120b3f2656b039ea94ea294a4: Status 404 returned error can't find the container with id 01d872957206af2cfeaf4f041a601e92ce08ec4120b3f2656b039ea94ea294a4
Mar 18 18:15:44.250605 master-0 kubenswrapper[30278]: I0318 18:15:44.243477 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw5zd\" (UniqueName: \"kubernetes.io/projected/4053a2ba-dff4-4ce8-a482-567999b6cd75-kube-api-access-dw5zd\") pod \"test-operator-controller-manager-5c5cb9c4d7-lkr87\" (UID: \"4053a2ba-dff4-4ce8-a482-567999b6cd75\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-lkr87"
Mar 18 18:15:44.250605 master-0 kubenswrapper[30278]: I0318 18:15:44.243645 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-mxxlh\" (UID: \"d486893b-62ed-4907-a004-9f6bf4e0a79f\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh"
Mar 18 18:15:44.250605 master-0 kubenswrapper[30278]: I0318 18:15:44.243685 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7snbf\" (UniqueName: \"kubernetes.io/projected/1da476d6-9e63-42af-9501-abfb534343d9-kube-api-access-7snbf\") pod \"watcher-operator-controller-manager-6c4d75f7f9-v9v5q\" (UID: \"1da476d6-9e63-42af-9501-abfb534343d9\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-v9v5q"
Mar 18 18:15:44.250605 master-0 kubenswrapper[30278]: I0318 18:15:44.246624 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-hlkz4"
Mar 18 18:15:44.251478 master-0 kubenswrapper[30278]: E0318 18:15:44.251341 30278 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 18 18:15:44.251478 master-0 kubenswrapper[30278]: E0318 18:15:44.251412 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert podName:d486893b-62ed-4907-a004-9f6bf4e0a79f nodeName:}" failed. No retries permitted until 2026-03-18 18:15:45.251391583 +0000 UTC m=+914.418576178 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert") pod "infra-operator-controller-manager-7dd6bb94c9-mxxlh" (UID: "d486893b-62ed-4907-a004-9f6bf4e0a79f") : secret "infra-operator-webhook-server-cert" not found
Mar 18 18:15:44.299850 master-0 kubenswrapper[30278]: I0318 18:15:44.292554 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-l66pc"
Mar 18 18:15:44.299850 master-0 kubenswrapper[30278]: I0318 18:15:44.293534 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw5zd\" (UniqueName: \"kubernetes.io/projected/4053a2ba-dff4-4ce8-a482-567999b6cd75-kube-api-access-dw5zd\") pod \"test-operator-controller-manager-5c5cb9c4d7-lkr87\" (UID: \"4053a2ba-dff4-4ce8-a482-567999b6cd75\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-lkr87"
Mar 18 18:15:44.329084 master-0 kubenswrapper[30278]: I0318 18:15:44.329008 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-dx9nw"
Mar 18 18:15:44.349632 master-0 kubenswrapper[30278]: I0318 18:15:44.348777 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7snbf\" (UniqueName: \"kubernetes.io/projected/1da476d6-9e63-42af-9501-abfb534343d9-kube-api-access-7snbf\") pod \"watcher-operator-controller-manager-6c4d75f7f9-v9v5q\" (UID: \"1da476d6-9e63-42af-9501-abfb534343d9\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-v9v5q"
Mar 18 18:15:44.360373 master-0 kubenswrapper[30278]: W0318 18:15:44.360337 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod017ad4ff_4f9a_4d44_b2d4_9b694732f01b.slice/crio-f2a205061436e51809a7dc5d4aeb47c9275767bf592edde251f985d7e5791b2f WatchSource:0}: Error finding container f2a205061436e51809a7dc5d4aeb47c9275767bf592edde251f985d7e5791b2f: Status 404 returned error can't find the container with id f2a205061436e51809a7dc5d4aeb47c9275767bf592edde251f985d7e5791b2f
Mar 18 18:15:44.372547 master-0 kubenswrapper[30278]: I0318 18:15:44.371042 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-vf92l"
Mar 18 18:15:44.377453 master-0 kubenswrapper[30278]: I0318 18:15:44.377423 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-z9sth"
Mar 18 18:15:44.377649 master-0 kubenswrapper[30278]: I0318 18:15:44.377574 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7snbf\" (UniqueName: \"kubernetes.io/projected/1da476d6-9e63-42af-9501-abfb534343d9-kube-api-access-7snbf\") pod \"watcher-operator-controller-manager-6c4d75f7f9-v9v5q\" (UID: \"1da476d6-9e63-42af-9501-abfb534343d9\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-v9v5q"
Mar 18 18:15:44.380319 master-0 kubenswrapper[30278]: I0318 18:15:44.380282 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-v9v5q"]
Mar 18 18:15:44.392980 master-0 kubenswrapper[30278]: I0318 18:15:44.392807 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-lkr87"
Mar 18 18:15:44.420626 master-0 kubenswrapper[30278]: I0318 18:15:44.420549 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c"]
Mar 18 18:15:44.444848 master-0 kubenswrapper[30278]: I0318 18:15:44.444814 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c"
Mar 18 18:15:44.446517 master-0 kubenswrapper[30278]: I0318 18:15:44.446461 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-v9v5q"
Mar 18 18:15:44.450459 master-0 kubenswrapper[30278]: I0318 18:15:44.449652 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Mar 18 18:15:44.450773 master-0 kubenswrapper[30278]: I0318 18:15:44.450761 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Mar 18 18:15:44.546566 master-0 kubenswrapper[30278]: I0318 18:15:44.544554 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c"]
Mar 18 18:15:44.562670 master-0 kubenswrapper[30278]: I0318 18:15:44.561674 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbtph\" (UniqueName: \"kubernetes.io/projected/a79357fe-125e-464c-a801-0949a13db2d1-kube-api-access-mbtph\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c"
Mar 18 18:15:44.562670 master-0 kubenswrapper[30278]: I0318 18:15:44.561752 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jnvcb\" (UID: \"65f3dd0a-e9a1-4087-ba1a-47366cf25382\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb"
Mar 18 18:15:44.562670 master-0 kubenswrapper[30278]: I0318 18:15:44.561816 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c"
Mar 18 18:15:44.562670 master-0 kubenswrapper[30278]: I0318 18:15:44.561940 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c"
Mar 18 18:15:44.562670 master-0 kubenswrapper[30278]: E0318 18:15:44.562577 30278 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 18 18:15:44.562670 master-0 kubenswrapper[30278]: E0318 18:15:44.562633 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert podName:65f3dd0a-e9a1-4087-ba1a-47366cf25382 nodeName:}" failed. No retries permitted until 2026-03-18 18:15:45.56261471 +0000 UTC m=+914.729799305 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" (UID: "65f3dd0a-e9a1-4087-ba1a-47366cf25382") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 18 18:15:44.665357 master-0 kubenswrapper[30278]: I0318 18:15:44.665222 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbtph\" (UniqueName: \"kubernetes.io/projected/a79357fe-125e-464c-a801-0949a13db2d1-kube-api-access-mbtph\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c"
Mar 18 18:15:44.665800 master-0 kubenswrapper[30278]: I0318 18:15:44.665763 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c"
Mar 18 18:15:44.668395 master-0 kubenswrapper[30278]: E0318 18:15:44.665989 30278 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Mar 18 18:15:44.668395 master-0 kubenswrapper[30278]: I0318 18:15:44.666035 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c"
Mar 18 18:15:44.668395 master-0 kubenswrapper[30278]: E0318 18:15:44.666124 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs podName:a79357fe-125e-464c-a801-0949a13db2d1 nodeName:}" failed. No retries permitted until 2026-03-18 18:15:45.166092318 +0000 UTC m=+914.333277113 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs") pod "openstack-operator-controller-manager-64cc6d45b7-7xs4c" (UID: "a79357fe-125e-464c-a801-0949a13db2d1") : secret "webhook-server-cert" not found
Mar 18 18:15:44.668395 master-0 kubenswrapper[30278]: E0318 18:15:44.666186 30278 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Mar 18 18:15:44.668395 master-0 kubenswrapper[30278]: E0318 18:15:44.666260 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs podName:a79357fe-125e-464c-a801-0949a13db2d1 nodeName:}" failed. No retries permitted until 2026-03-18 18:15:45.166239792 +0000 UTC m=+914.333424387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs") pod "openstack-operator-controller-manager-64cc6d45b7-7xs4c" (UID: "a79357fe-125e-464c-a801-0949a13db2d1") : secret "metrics-server-cert" not found
Mar 18 18:15:44.684070 master-0 kubenswrapper[30278]: I0318 18:15:44.684009 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbtph\" (UniqueName: \"kubernetes.io/projected/a79357fe-125e-464c-a801-0949a13db2d1-kube-api-access-mbtph\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c"
Mar 18 18:15:44.684590 master-0 kubenswrapper[30278]: I0318 18:15:44.684557 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jfv7j"]
Mar 18 18:15:44.690406 master-0 kubenswrapper[30278]: I0318 18:15:44.686105 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jfv7j"
Mar 18 18:15:44.752138 master-0 kubenswrapper[30278]: I0318 18:15:44.752078 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nmf4w" event={"ID":"017ad4ff-4f9a-4d44-b2d4-9b694732f01b","Type":"ContainerStarted","Data":"f2a205061436e51809a7dc5d4aeb47c9275767bf592edde251f985d7e5791b2f"}
Mar 18 18:15:44.755955 master-0 kubenswrapper[30278]: I0318 18:15:44.755898 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kmxft" event={"ID":"3df82072-f7cc-4b7a-82ae-803eadfb2dde","Type":"ContainerStarted","Data":"7dcecb3f7be94dc80c3b12d49f6f898930b86a5da6302275b91ea0f8e2c2d633"}
Mar 18 18:15:44.763871 master-0 kubenswrapper[30278]: I0318 18:15:44.760625 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jfv7j"]
Mar 18 18:15:44.765363 master-0 kubenswrapper[30278]: I0318 18:15:44.764492 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-q5xdd" event={"ID":"2f573bc4-cb28-4631-9b9e-2cfbc078e1ed","Type":"ContainerStarted","Data":"56d9c99327962238a852af08b9fcc01b7c0b6f45f2c13cb851dae146f8589ee4"}
Mar 18 18:15:44.769876 master-0 kubenswrapper[30278]: I0318 18:15:44.769788 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-qkpnz" event={"ID":"cc44a223-9705-4e38-986f-24d296b1ab51","Type":"ContainerStarted","Data":"01d872957206af2cfeaf4f041a601e92ce08ec4120b3f2656b039ea94ea294a4"}
Mar 18 18:15:44.772067 master-0 kubenswrapper[30278]: I0318 18:15:44.772008 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-7dcfq" event={"ID":"12a2950a-56b8-4997-9115-1acb7487d7b8","Type":"ContainerStarted","Data":"45bcb8874f5f0778339f918c008ab617fdc863c83dd94f74d9ce451f3d4f48cb"}
Mar 18 18:15:44.893086 master-0 kubenswrapper[30278]: I0318 18:15:44.870239 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgkl5\" (UniqueName: \"kubernetes.io/projected/6f5ebc48-9a75-41b9-a6d1-3e84c7ba6f54-kube-api-access-zgkl5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jfv7j\" (UID: \"6f5ebc48-9a75-41b9-a6d1-3e84c7ba6f54\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jfv7j"
Mar 18 18:15:44.904258 master-0 kubenswrapper[30278]: I0318 18:15:44.898458 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-7dcfq"]
Mar 18 18:15:44.972358 master-0 kubenswrapper[30278]: I0318 18:15:44.972289 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgkl5\" (UniqueName: \"kubernetes.io/projected/6f5ebc48-9a75-41b9-a6d1-3e84c7ba6f54-kube-api-access-zgkl5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jfv7j\" (UID: \"6f5ebc48-9a75-41b9-a6d1-3e84c7ba6f54\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jfv7j"
Mar 18 18:15:44.993408 master-0 kubenswrapper[30278]: I0318 18:15:44.993359 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgkl5\" (UniqueName: \"kubernetes.io/projected/6f5ebc48-9a75-41b9-a6d1-3e84c7ba6f54-kube-api-access-zgkl5\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jfv7j\" (UID: \"6f5ebc48-9a75-41b9-a6d1-3e84c7ba6f54\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jfv7j"
Mar 18 18:15:45.033776 master-0 kubenswrapper[30278]: I0318 18:15:45.033626 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-qkpnz"]
Mar 18 18:15:45.053320 master-0 kubenswrapper[30278]: W0318 18:15:45.053259 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddee848d7_cf06_4bfe_b6e0_3ab0afa826a9.slice/crio-f67b416c44aa283ee2f4abc51805c28f8870d42be4823d02cf7d912f6d09f7e4 WatchSource:0}: Error finding container f67b416c44aa283ee2f4abc51805c28f8870d42be4823d02cf7d912f6d09f7e4: Status 404 returned error can't find the container with id f67b416c44aa283ee2f4abc51805c28f8870d42be4823d02cf7d912f6d09f7e4
Mar 18 18:15:45.152979 master-0 kubenswrapper[30278]: I0318 18:15:45.139537 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-nmf4w"]
Mar 18 18:15:45.182998 master-0 kubenswrapper[30278]: I0318 18:15:45.182826 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c"
Mar 18 18:15:45.183229 master-0 kubenswrapper[30278]: I0318 18:15:45.182996 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c"
Mar 18 18:15:45.183229 master-0 kubenswrapper[30278]: E0318 18:15:45.183053 30278 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Mar 18 18:15:45.183229 master-0 kubenswrapper[30278]: E0318 18:15:45.183163 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs podName:a79357fe-125e-464c-a801-0949a13db2d1 nodeName:}" failed. No retries permitted until 2026-03-18 18:15:46.183133511 +0000 UTC m=+915.350318106 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs") pod "openstack-operator-controller-manager-64cc6d45b7-7xs4c" (UID: "a79357fe-125e-464c-a801-0949a13db2d1") : secret "metrics-server-cert" not found
Mar 18 18:15:45.183229 master-0 kubenswrapper[30278]: E0318 18:15:45.183186 30278 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Mar 18 18:15:45.184096 master-0 kubenswrapper[30278]: E0318 18:15:45.183262 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs podName:a79357fe-125e-464c-a801-0949a13db2d1 nodeName:}" failed. No retries permitted until 2026-03-18 18:15:46.183238214 +0000 UTC m=+915.350422809 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs") pod "openstack-operator-controller-manager-64cc6d45b7-7xs4c" (UID: "a79357fe-125e-464c-a801-0949a13db2d1") : secret "webhook-server-cert" not found
Mar 18 18:15:45.184670 master-0 kubenswrapper[30278]: I0318 18:15:45.184285 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-kmxft"]
Mar 18 18:15:45.246833 master-0 kubenswrapper[30278]: I0318 18:15:45.238187 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-q5xdd"]
Mar 18 18:15:45.267113 master-0 kubenswrapper[30278]: I0318 18:15:45.263023 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jfv7j"
Mar 18 18:15:45.288904 master-0 kubenswrapper[30278]: I0318 18:15:45.287434 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-mxxlh\" (UID: \"d486893b-62ed-4907-a004-9f6bf4e0a79f\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh"
Mar 18 18:15:45.288904 master-0 kubenswrapper[30278]: E0318 18:15:45.287654 30278 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 18 18:15:45.288904 master-0 kubenswrapper[30278]: E0318 18:15:45.287726 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert podName:d486893b-62ed-4907-a004-9f6bf4e0a79f nodeName:}" failed. No retries permitted until 2026-03-18 18:15:47.287705808 +0000 UTC m=+916.454890403 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert") pod "infra-operator-controller-manager-7dd6bb94c9-mxxlh" (UID: "d486893b-62ed-4907-a004-9f6bf4e0a79f") : secret "infra-operator-webhook-server-cert" not found Mar 18 18:15:45.298856 master-0 kubenswrapper[30278]: I0318 18:15:45.298776 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-j5p6q"] Mar 18 18:15:45.314770 master-0 kubenswrapper[30278]: I0318 18:15:45.312404 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-stb7j"] Mar 18 18:15:45.319611 master-0 kubenswrapper[30278]: W0318 18:15:45.319448 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08dec5b3_09c6_4aa4_8c40_544556d1b7d4.slice/crio-4bb7efcfca674fae60cac7dd0a23b15634b883bfaaa223df54bfbcf248e0b85b WatchSource:0}: Error finding container 4bb7efcfca674fae60cac7dd0a23b15634b883bfaaa223df54bfbcf248e0b85b: Status 404 returned error can't find the container with id 4bb7efcfca674fae60cac7dd0a23b15634b883bfaaa223df54bfbcf248e0b85b Mar 18 18:15:45.361686 master-0 kubenswrapper[30278]: I0318 18:15:45.360862 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-659bd6b58d-q7g49"] Mar 18 18:15:45.398563 master-0 kubenswrapper[30278]: I0318 18:15:45.398504 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-nml4w"] Mar 18 18:15:45.423557 master-0 kubenswrapper[30278]: I0318 18:15:45.422607 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-vs6hj"] Mar 18 18:15:45.595593 master-0 kubenswrapper[30278]: I0318 18:15:45.595491 30278 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jnvcb\" (UID: \"65f3dd0a-e9a1-4087-ba1a-47366cf25382\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" Mar 18 18:15:45.596163 master-0 kubenswrapper[30278]: E0318 18:15:45.595920 30278 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 18:15:45.596163 master-0 kubenswrapper[30278]: E0318 18:15:45.596002 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert podName:65f3dd0a-e9a1-4087-ba1a-47366cf25382 nodeName:}" failed. No retries permitted until 2026-03-18 18:15:47.595980045 +0000 UTC m=+916.763164640 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" (UID: "65f3dd0a-e9a1-4087-ba1a-47366cf25382") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 18:15:45.791307 master-0 kubenswrapper[30278]: I0318 18:15:45.791233 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-stb7j" event={"ID":"dee848d7-cf06-4bfe-b6e0-3ab0afa826a9","Type":"ContainerStarted","Data":"f67b416c44aa283ee2f4abc51805c28f8870d42be4823d02cf7d912f6d09f7e4"} Mar 18 18:15:45.808904 master-0 kubenswrapper[30278]: I0318 18:15:45.808849 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-nml4w" event={"ID":"d3080dde-8e85-442a-ae2a-581507874a2d","Type":"ContainerStarted","Data":"fd404d1863d1e98dc19dcd4af40883a5306b2643917b5f5df7588b94f4f5d5f7"} Mar 18 18:15:45.813548 master-0 kubenswrapper[30278]: I0318 18:15:45.813512 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-vs6hj" event={"ID":"9fa45639-3436-43df-a879-b6445c664661","Type":"ContainerStarted","Data":"b55b4e7301f05b9b3e6dbfdea78d7bc6fc7b3810ae85bb58e61aa42058979fb2"} Mar 18 18:15:45.815189 master-0 kubenswrapper[30278]: I0318 18:15:45.815149 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-j5p6q" event={"ID":"da9d67e1-3213-4c5a-9b44-b02d440b36e7","Type":"ContainerStarted","Data":"eeaee43327db7267ae36405fcca10e25a344cce74b7cf365516d2c5aa0a820cc"} Mar 18 18:15:45.818209 master-0 kubenswrapper[30278]: I0318 18:15:45.818059 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-659bd6b58d-q7g49" 
event={"ID":"08dec5b3-09c6-4aa4-8c40-544556d1b7d4","Type":"ContainerStarted","Data":"4bb7efcfca674fae60cac7dd0a23b15634b883bfaaa223df54bfbcf248e0b85b"} Mar 18 18:15:45.999021 master-0 kubenswrapper[30278]: I0318 18:15:45.994320 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-lkr87"] Mar 18 18:15:46.010961 master-0 kubenswrapper[30278]: I0318 18:15:46.010857 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-5hkw5"] Mar 18 18:15:46.043055 master-0 kubenswrapper[30278]: I0318 18:15:46.042920 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-9btcv"] Mar 18 18:15:46.055654 master-0 kubenswrapper[30278]: W0318 18:15:46.055489 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod445552af_e585_4728_adfa_9fe6f9e79cc1.slice/crio-182cdcbc7fba6e84201b9ea087e004d08946d720873094798b63c4318c9fefed WatchSource:0}: Error finding container 182cdcbc7fba6e84201b9ea087e004d08946d720873094798b63c4318c9fefed: Status 404 returned error can't find the container with id 182cdcbc7fba6e84201b9ea087e004d08946d720873094798b63c4318c9fefed Mar 18 18:15:46.107186 master-0 kubenswrapper[30278]: I0318 18:15:46.106765 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-dx9nw"] Mar 18 18:15:46.187754 master-0 kubenswrapper[30278]: I0318 18:15:46.187166 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-z9sth"] Mar 18 18:15:46.229113 master-0 kubenswrapper[30278]: I0318 18:15:46.229037 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" Mar 18 18:15:46.232049 master-0 kubenswrapper[30278]: I0318 18:15:46.232031 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" Mar 18 18:15:46.232339 master-0 kubenswrapper[30278]: E0318 18:15:46.231343 30278 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 18 18:15:46.232515 master-0 kubenswrapper[30278]: E0318 18:15:46.232501 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs podName:a79357fe-125e-464c-a801-0949a13db2d1 nodeName:}" failed. No retries permitted until 2026-03-18 18:15:48.232480107 +0000 UTC m=+917.399664702 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs") pod "openstack-operator-controller-manager-64cc6d45b7-7xs4c" (UID: "a79357fe-125e-464c-a801-0949a13db2d1") : secret "webhook-server-cert" not found Mar 18 18:15:46.235343 master-0 kubenswrapper[30278]: E0318 18:15:46.235260 30278 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 18 18:15:46.236106 master-0 kubenswrapper[30278]: E0318 18:15:46.235509 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs podName:a79357fe-125e-464c-a801-0949a13db2d1 nodeName:}" failed. No retries permitted until 2026-03-18 18:15:48.235368146 +0000 UTC m=+917.402552741 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs") pod "openstack-operator-controller-manager-64cc6d45b7-7xs4c" (UID: "a79357fe-125e-464c-a801-0949a13db2d1") : secret "metrics-server-cert" not found Mar 18 18:15:46.292154 master-0 kubenswrapper[30278]: W0318 18:15:46.292022 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1da476d6_9e63_42af_9501_abfb534343d9.slice/crio-48e6d2a21d87f27c449c51f2dd3c8521b3176e5edfb21f351a287892af859aa2 WatchSource:0}: Error finding container 48e6d2a21d87f27c449c51f2dd3c8521b3176e5edfb21f351a287892af859aa2: Status 404 returned error can't find the container with id 48e6d2a21d87f27c449c51f2dd3c8521b3176e5edfb21f351a287892af859aa2 Mar 18 18:15:46.346647 master-0 kubenswrapper[30278]: W0318 18:15:46.346567 30278 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1aa3b381_7785_41a0_9d4c_094a9e4abbe5.slice/crio-d43f52ba196575bc28ac31c996fe88f2014b76626d429d648a04c6dbead98436 WatchSource:0}: Error finding container d43f52ba196575bc28ac31c996fe88f2014b76626d429d648a04c6dbead98436: Status 404 returned error can't find the container with id d43f52ba196575bc28ac31c996fe88f2014b76626d429d648a04c6dbead98436 Mar 18 18:15:46.350576 master-0 kubenswrapper[30278]: W0318 18:15:46.350115 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbb9017b_18df_4021_bbe9_af055932f22a.slice/crio-9fa3f6f2e43fb1f2bbc4c1097c7e1ca704a50dc5087894aed162ec7232e4b073 WatchSource:0}: Error finding container 9fa3f6f2e43fb1f2bbc4c1097c7e1ca704a50dc5087894aed162ec7232e4b073: Status 404 returned error can't find the container with id 9fa3f6f2e43fb1f2bbc4c1097c7e1ca704a50dc5087894aed162ec7232e4b073 Mar 18 18:15:46.351111 master-0 kubenswrapper[30278]: I0318 18:15:46.350939 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-v9v5q"] Mar 18 18:15:46.362567 master-0 kubenswrapper[30278]: I0318 18:15:46.362345 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-hlkz4"] Mar 18 18:15:46.377560 master-0 kubenswrapper[30278]: I0318 18:15:46.375160 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-vf92l"] Mar 18 18:15:46.383869 master-0 kubenswrapper[30278]: I0318 18:15:46.383818 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-l66pc"] Mar 18 18:15:46.456918 master-0 kubenswrapper[30278]: I0318 18:15:46.456819 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jfv7j"] Mar 18 18:15:46.468082 master-0 kubenswrapper[30278]: W0318 18:15:46.468032 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f5ebc48_9a75_41b9_a6d1_3e84c7ba6f54.slice/crio-830aef777be08860debc0975e943b0108d450364a3f8d990e356ba7bfd7a8f7e WatchSource:0}: Error finding container 830aef777be08860debc0975e943b0108d450364a3f8d990e356ba7bfd7a8f7e: Status 404 returned error can't find the container with id 830aef777be08860debc0975e943b0108d450364a3f8d990e356ba7bfd7a8f7e Mar 18 18:15:46.840203 master-0 kubenswrapper[30278]: I0318 18:15:46.840136 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-lkr87" event={"ID":"4053a2ba-dff4-4ce8-a482-567999b6cd75","Type":"ContainerStarted","Data":"7e56216609bc01ada65350fd2c17b8aed36cc07b9be38be537c7f203010d7dd3"} Mar 18 18:15:46.849638 master-0 kubenswrapper[30278]: I0318 18:15:46.849606 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-dx9nw" event={"ID":"ff1d10fa-b70d-439b-8183-dbdf8042e43d","Type":"ContainerStarted","Data":"eba084865203937d95b8c0ca32cd95c2195318d1075af74cfa453f4863e6d1a6"} Mar 18 18:15:46.856958 master-0 kubenswrapper[30278]: I0318 18:15:46.856883 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jfv7j" event={"ID":"6f5ebc48-9a75-41b9-a6d1-3e84c7ba6f54","Type":"ContainerStarted","Data":"830aef777be08860debc0975e943b0108d450364a3f8d990e356ba7bfd7a8f7e"} Mar 18 18:15:46.867443 master-0 kubenswrapper[30278]: I0318 18:15:46.867232 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-vf92l" 
event={"ID":"6cde8ef7-31e7-496a-95a0-381f4bd6c4ed","Type":"ContainerStarted","Data":"cf10272035555cc270249d7a74c8a417bded21003a7942fdc8b6082eeb15eaf3"} Mar 18 18:15:46.872781 master-0 kubenswrapper[30278]: I0318 18:15:46.872670 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-v9v5q" event={"ID":"1da476d6-9e63-42af-9501-abfb534343d9","Type":"ContainerStarted","Data":"48e6d2a21d87f27c449c51f2dd3c8521b3176e5edfb21f351a287892af859aa2"} Mar 18 18:15:46.879517 master-0 kubenswrapper[30278]: I0318 18:15:46.879419 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-5hkw5" event={"ID":"7bbcfafe-41f1-44dc-9c89-89dae4c1fac4","Type":"ContainerStarted","Data":"9490232fcaa5f960766629750400431894cf709a86e636e49f80144d4eafc39b"} Mar 18 18:15:46.883024 master-0 kubenswrapper[30278]: I0318 18:15:46.882893 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-hlkz4" event={"ID":"dbb9017b-18df-4021-bbe9-af055932f22a","Type":"ContainerStarted","Data":"9fa3f6f2e43fb1f2bbc4c1097c7e1ca704a50dc5087894aed162ec7232e4b073"} Mar 18 18:15:46.887174 master-0 kubenswrapper[30278]: I0318 18:15:46.887090 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9btcv" event={"ID":"9808351e-1785-48d6-a2fd-8953742f27cc","Type":"ContainerStarted","Data":"9ab9881a83eddb1a4de9698a5eac36ac1b7321fa3d843e233486a63df1ad7a7d"} Mar 18 18:15:46.889901 master-0 kubenswrapper[30278]: I0318 18:15:46.889852 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-z9sth" event={"ID":"445552af-e585-4728-adfa-9fe6f9e79cc1","Type":"ContainerStarted","Data":"182cdcbc7fba6e84201b9ea087e004d08946d720873094798b63c4318c9fefed"} Mar 18 18:15:46.892786 master-0 
kubenswrapper[30278]: I0318 18:15:46.892757 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-l66pc" event={"ID":"1aa3b381-7785-41a0-9d4c-094a9e4abbe5","Type":"ContainerStarted","Data":"d43f52ba196575bc28ac31c996fe88f2014b76626d429d648a04c6dbead98436"} Mar 18 18:15:47.364339 master-0 kubenswrapper[30278]: I0318 18:15:47.363082 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-mxxlh\" (UID: \"d486893b-62ed-4907-a004-9f6bf4e0a79f\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" Mar 18 18:15:47.364339 master-0 kubenswrapper[30278]: E0318 18:15:47.363395 30278 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 18 18:15:47.364339 master-0 kubenswrapper[30278]: E0318 18:15:47.363456 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert podName:d486893b-62ed-4907-a004-9f6bf4e0a79f nodeName:}" failed. No retries permitted until 2026-03-18 18:15:51.363438682 +0000 UTC m=+920.530623277 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert") pod "infra-operator-controller-manager-7dd6bb94c9-mxxlh" (UID: "d486893b-62ed-4907-a004-9f6bf4e0a79f") : secret "infra-operator-webhook-server-cert" not found Mar 18 18:15:47.673204 master-0 kubenswrapper[30278]: I0318 18:15:47.673053 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jnvcb\" (UID: \"65f3dd0a-e9a1-4087-ba1a-47366cf25382\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" Mar 18 18:15:47.673564 master-0 kubenswrapper[30278]: E0318 18:15:47.673301 30278 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 18:15:47.673564 master-0 kubenswrapper[30278]: E0318 18:15:47.673428 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert podName:65f3dd0a-e9a1-4087-ba1a-47366cf25382 nodeName:}" failed. No retries permitted until 2026-03-18 18:15:51.673400285 +0000 UTC m=+920.840584880 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" (UID: "65f3dd0a-e9a1-4087-ba1a-47366cf25382") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 18:15:48.291027 master-0 kubenswrapper[30278]: I0318 18:15:48.290918 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" Mar 18 18:15:48.291848 master-0 kubenswrapper[30278]: E0318 18:15:48.291122 30278 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 18 18:15:48.291848 master-0 kubenswrapper[30278]: E0318 18:15:48.291207 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs podName:a79357fe-125e-464c-a801-0949a13db2d1 nodeName:}" failed. No retries permitted until 2026-03-18 18:15:52.291185501 +0000 UTC m=+921.458370106 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs") pod "openstack-operator-controller-manager-64cc6d45b7-7xs4c" (UID: "a79357fe-125e-464c-a801-0949a13db2d1") : secret "webhook-server-cert" not found Mar 18 18:15:48.291848 master-0 kubenswrapper[30278]: I0318 18:15:48.291207 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" Mar 18 18:15:48.291848 master-0 kubenswrapper[30278]: E0318 18:15:48.291494 30278 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 18 18:15:48.291848 master-0 kubenswrapper[30278]: E0318 18:15:48.291536 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs podName:a79357fe-125e-464c-a801-0949a13db2d1 nodeName:}" failed. No retries permitted until 2026-03-18 18:15:52.29152378 +0000 UTC m=+921.458708385 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs") pod "openstack-operator-controller-manager-64cc6d45b7-7xs4c" (UID: "a79357fe-125e-464c-a801-0949a13db2d1") : secret "metrics-server-cert" not found Mar 18 18:15:51.388763 master-0 kubenswrapper[30278]: I0318 18:15:51.388614 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-mxxlh\" (UID: \"d486893b-62ed-4907-a004-9f6bf4e0a79f\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" Mar 18 18:15:51.389976 master-0 kubenswrapper[30278]: E0318 18:15:51.389459 30278 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 18 18:15:51.389976 master-0 kubenswrapper[30278]: E0318 18:15:51.389563 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert podName:d486893b-62ed-4907-a004-9f6bf4e0a79f nodeName:}" failed. No retries permitted until 2026-03-18 18:15:59.389539309 +0000 UTC m=+928.556723904 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert") pod "infra-operator-controller-manager-7dd6bb94c9-mxxlh" (UID: "d486893b-62ed-4907-a004-9f6bf4e0a79f") : secret "infra-operator-webhook-server-cert" not found Mar 18 18:15:51.700107 master-0 kubenswrapper[30278]: I0318 18:15:51.699933 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jnvcb\" (UID: \"65f3dd0a-e9a1-4087-ba1a-47366cf25382\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" Mar 18 18:15:51.700418 master-0 kubenswrapper[30278]: E0318 18:15:51.700152 30278 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 18:15:51.700418 master-0 kubenswrapper[30278]: E0318 18:15:51.700288 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert podName:65f3dd0a-e9a1-4087-ba1a-47366cf25382 nodeName:}" failed. No retries permitted until 2026-03-18 18:15:59.700243861 +0000 UTC m=+928.867428456 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert") pod "openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" (UID: "65f3dd0a-e9a1-4087-ba1a-47366cf25382") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 18 18:15:52.314892 master-0 kubenswrapper[30278]: I0318 18:15:52.314014 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" Mar 18 18:15:52.314892 master-0 kubenswrapper[30278]: I0318 18:15:52.314170 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" Mar 18 18:15:52.314892 master-0 kubenswrapper[30278]: E0318 18:15:52.314407 30278 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 18 18:15:52.314892 master-0 kubenswrapper[30278]: E0318 18:15:52.314485 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs podName:a79357fe-125e-464c-a801-0949a13db2d1 nodeName:}" failed. No retries permitted until 2026-03-18 18:16:00.314463561 +0000 UTC m=+929.481648176 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs") pod "openstack-operator-controller-manager-64cc6d45b7-7xs4c" (UID: "a79357fe-125e-464c-a801-0949a13db2d1") : secret "metrics-server-cert" not found Mar 18 18:15:52.314892 master-0 kubenswrapper[30278]: E0318 18:15:52.314515 30278 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 18 18:15:52.314892 master-0 kubenswrapper[30278]: E0318 18:15:52.314605 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs podName:a79357fe-125e-464c-a801-0949a13db2d1 nodeName:}" failed. No retries permitted until 2026-03-18 18:16:00.314584924 +0000 UTC m=+929.481769519 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs") pod "openstack-operator-controller-manager-64cc6d45b7-7xs4c" (UID: "a79357fe-125e-464c-a801-0949a13db2d1") : secret "webhook-server-cert" not found Mar 18 18:15:59.417185 master-0 kubenswrapper[30278]: I0318 18:15:59.417117 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-mxxlh\" (UID: \"d486893b-62ed-4907-a004-9f6bf4e0a79f\") " pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" Mar 18 18:15:59.420269 master-0 kubenswrapper[30278]: I0318 18:15:59.420223 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d486893b-62ed-4907-a004-9f6bf4e0a79f-cert\") pod \"infra-operator-controller-manager-7dd6bb94c9-mxxlh\" (UID: \"d486893b-62ed-4907-a004-9f6bf4e0a79f\") " 
pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" Mar 18 18:15:59.546317 master-0 kubenswrapper[30278]: I0318 18:15:59.546221 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" Mar 18 18:15:59.724763 master-0 kubenswrapper[30278]: I0318 18:15:59.724678 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jnvcb\" (UID: \"65f3dd0a-e9a1-4087-ba1a-47366cf25382\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" Mar 18 18:15:59.733085 master-0 kubenswrapper[30278]: I0318 18:15:59.732936 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/65f3dd0a-e9a1-4087-ba1a-47366cf25382-cert\") pod \"openstack-baremetal-operator-controller-manager-89d64c458-jnvcb\" (UID: \"65f3dd0a-e9a1-4087-ba1a-47366cf25382\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" Mar 18 18:15:59.894940 master-0 kubenswrapper[30278]: I0318 18:15:59.894816 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" Mar 18 18:16:00.341696 master-0 kubenswrapper[30278]: I0318 18:16:00.341582 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" Mar 18 18:16:00.342019 master-0 kubenswrapper[30278]: I0318 18:16:00.341964 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" Mar 18 18:16:00.342987 master-0 kubenswrapper[30278]: E0318 18:16:00.342312 30278 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 18 18:16:00.342987 master-0 kubenswrapper[30278]: E0318 18:16:00.342355 30278 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 18 18:16:00.342987 master-0 kubenswrapper[30278]: E0318 18:16:00.342431 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs podName:a79357fe-125e-464c-a801-0949a13db2d1 nodeName:}" failed. No retries permitted until 2026-03-18 18:16:16.342399399 +0000 UTC m=+945.509584004 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs") pod "openstack-operator-controller-manager-64cc6d45b7-7xs4c" (UID: "a79357fe-125e-464c-a801-0949a13db2d1") : secret "webhook-server-cert" not found Mar 18 18:16:00.342987 master-0 kubenswrapper[30278]: E0318 18:16:00.342510 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs podName:a79357fe-125e-464c-a801-0949a13db2d1 nodeName:}" failed. No retries permitted until 2026-03-18 18:16:16.34245005 +0000 UTC m=+945.509634665 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs") pod "openstack-operator-controller-manager-64cc6d45b7-7xs4c" (UID: "a79357fe-125e-464c-a801-0949a13db2d1") : secret "metrics-server-cert" not found Mar 18 18:16:05.468322 master-0 kubenswrapper[30278]: I0318 18:16:05.468242 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb"] Mar 18 18:16:05.701393 master-0 kubenswrapper[30278]: I0318 18:16:05.701331 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh"] Mar 18 18:16:06.260315 master-0 kubenswrapper[30278]: I0318 18:16:06.257889 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-l66pc" event={"ID":"1aa3b381-7785-41a0-9d4c-094a9e4abbe5","Type":"ContainerStarted","Data":"c690ef39c9bf5fce52136f9fff60f274a1d5c35cf17bca9ceb444dbfc0be1a42"} Mar 18 18:16:06.282316 master-0 kubenswrapper[30278]: I0318 18:16:06.280414 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-884679f54-l66pc" Mar 18 
18:16:06.288304 master-0 kubenswrapper[30278]: I0318 18:16:06.284230 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nmf4w" event={"ID":"017ad4ff-4f9a-4d44-b2d4-9b694732f01b","Type":"ContainerStarted","Data":"9c9bcdd013fe5b4f5e14ae85859a38d2544cf2f0e07e217c87abb96876384ae4"} Mar 18 18:16:06.288304 master-0 kubenswrapper[30278]: I0318 18:16:06.284870 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nmf4w" Mar 18 18:16:06.288304 master-0 kubenswrapper[30278]: I0318 18:16:06.287654 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kmxft" event={"ID":"3df82072-f7cc-4b7a-82ae-803eadfb2dde","Type":"ContainerStarted","Data":"cab09d70652fc41d5cd98f1c2c79c046d49744c3b1dbb71f08498bd2d617c64b"} Mar 18 18:16:06.288304 master-0 kubenswrapper[30278]: I0318 18:16:06.287928 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kmxft" Mar 18 18:16:06.291311 master-0 kubenswrapper[30278]: I0318 18:16:06.289377 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-5hkw5" event={"ID":"7bbcfafe-41f1-44dc-9c89-89dae4c1fac4","Type":"ContainerStarted","Data":"63b9de0b5a477245925297b30a7f1b5849e5d9c362c6e317a5d5c4ff32457471"} Mar 18 18:16:06.291311 master-0 kubenswrapper[30278]: I0318 18:16:06.290222 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-5hkw5" Mar 18 18:16:06.291479 master-0 kubenswrapper[30278]: I0318 18:16:06.291292 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-v9v5q" 
event={"ID":"1da476d6-9e63-42af-9501-abfb534343d9","Type":"ContainerStarted","Data":"2c504cf718d5b8dd4844551de7974108672e314b114b0780c9c0b48565db2a01"} Mar 18 18:16:06.293872 master-0 kubenswrapper[30278]: I0318 18:16:06.291709 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-v9v5q" Mar 18 18:16:06.295295 master-0 kubenswrapper[30278]: I0318 18:16:06.294159 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-lkr87" event={"ID":"4053a2ba-dff4-4ce8-a482-567999b6cd75","Type":"ContainerStarted","Data":"373a88456e6e609b646c40d2f149b1fc70fc1cfe83362a7bd3e984c81cf1e7b9"} Mar 18 18:16:06.295295 master-0 kubenswrapper[30278]: I0318 18:16:06.294208 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-lkr87" Mar 18 18:16:06.326611 master-0 kubenswrapper[30278]: I0318 18:16:06.325055 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9btcv" Mar 18 18:16:06.340712 master-0 kubenswrapper[30278]: I0318 18:16:06.340321 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-qkpnz" Mar 18 18:16:06.344292 master-0 kubenswrapper[30278]: I0318 18:16:06.341297 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-884679f54-l66pc" podStartSLOduration=4.751764704 podStartE2EDuration="23.341262375s" podCreationTimestamp="2026-03-18 18:15:43 +0000 UTC" firstStartedPulling="2026-03-18 18:15:46.360039288 +0000 UTC m=+915.527223883" lastFinishedPulling="2026-03-18 18:16:04.949536949 +0000 UTC m=+934.116721554" observedRunningTime="2026-03-18 18:16:06.321388387 +0000 UTC m=+935.488573002" 
watchObservedRunningTime="2026-03-18 18:16:06.341262375 +0000 UTC m=+935.508446970" Mar 18 18:16:06.369313 master-0 kubenswrapper[30278]: I0318 18:16:06.366774 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-z9sth" event={"ID":"445552af-e585-4728-adfa-9fe6f9e79cc1","Type":"ContainerStarted","Data":"eafc8c67178ab5ef0516b3d36a101d17058247e91ca3153d35acd8cf75585a6e"} Mar 18 18:16:06.369313 master-0 kubenswrapper[30278]: I0318 18:16:06.367256 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-z9sth" Mar 18 18:16:06.389503 master-0 kubenswrapper[30278]: I0318 18:16:06.387788 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kmxft" podStartSLOduration=4.082605525 podStartE2EDuration="24.387755722s" podCreationTimestamp="2026-03-18 18:15:42 +0000 UTC" firstStartedPulling="2026-03-18 18:15:44.592866248 +0000 UTC m=+913.760050853" lastFinishedPulling="2026-03-18 18:16:04.898016415 +0000 UTC m=+934.065201050" observedRunningTime="2026-03-18 18:16:06.371596085 +0000 UTC m=+935.538780670" watchObservedRunningTime="2026-03-18 18:16:06.387755722 +0000 UTC m=+935.554940317" Mar 18 18:16:06.389503 master-0 kubenswrapper[30278]: I0318 18:16:06.388068 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5784578c99-dx9nw" Mar 18 18:16:06.400310 master-0 kubenswrapper[30278]: I0318 18:16:06.399850 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-7dcfq" event={"ID":"12a2950a-56b8-4997-9115-1acb7487d7b8","Type":"ContainerStarted","Data":"fca03cb0c43cbee33809d39c63f99d06d18d07b1c4a87c75e2f6612037c8273b"} Mar 18 18:16:06.400310 master-0 kubenswrapper[30278]: I0318 
18:16:06.400308 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-7dcfq" Mar 18 18:16:06.402857 master-0 kubenswrapper[30278]: I0318 18:16:06.402646 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-stb7j" event={"ID":"dee848d7-cf06-4bfe-b6e0-3ab0afa826a9","Type":"ContainerStarted","Data":"f7f07277934aac5c78dc9f981d323705dd1e94d49b041c865063388decbdef2f"} Mar 18 18:16:06.403235 master-0 kubenswrapper[30278]: I0318 18:16:06.403202 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-stb7j" Mar 18 18:16:06.444600 master-0 kubenswrapper[30278]: I0318 18:16:06.444502 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-vf92l" event={"ID":"6cde8ef7-31e7-496a-95a0-381f4bd6c4ed","Type":"ContainerStarted","Data":"bce96d26642698f905628dc27f09ee9f0c267a147182b5f63a159b5aac24f45a"} Mar 18 18:16:06.445390 master-0 kubenswrapper[30278]: I0318 18:16:06.445358 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-c674c5965-vf92l" Mar 18 18:16:06.460361 master-0 kubenswrapper[30278]: I0318 18:16:06.456226 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" event={"ID":"d486893b-62ed-4907-a004-9f6bf4e0a79f","Type":"ContainerStarted","Data":"7fcab49ea105adcb16d08437cbdb830979c467d68393c0860bdcc648beca486d"} Mar 18 18:16:06.467339 master-0 kubenswrapper[30278]: I0318 18:16:06.462247 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-q5xdd" 
event={"ID":"2f573bc4-cb28-4631-9b9e-2cfbc078e1ed","Type":"ContainerStarted","Data":"7cda6a1ccc7f6ef2532369180f982ce546df6dba3b2875f6839e5bb6e83b350c"} Mar 18 18:16:06.467339 master-0 kubenswrapper[30278]: I0318 18:16:06.463529 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-q5xdd" Mar 18 18:16:06.467339 master-0 kubenswrapper[30278]: I0318 18:16:06.466471 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-hlkz4" event={"ID":"dbb9017b-18df-4021-bbe9-af055932f22a","Type":"ContainerStarted","Data":"bd8fd2c638cb641cef231fef470ec3e586056dff9dba3900e332735788dfad46"} Mar 18 18:16:06.467339 master-0 kubenswrapper[30278]: I0318 18:16:06.467093 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-hlkz4" Mar 18 18:16:06.490692 master-0 kubenswrapper[30278]: I0318 18:16:06.490431 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nmf4w" podStartSLOduration=4.031748269 podStartE2EDuration="24.490400468s" podCreationTimestamp="2026-03-18 18:15:42 +0000 UTC" firstStartedPulling="2026-03-18 18:15:44.438898774 +0000 UTC m=+913.606083359" lastFinishedPulling="2026-03-18 18:16:04.897550943 +0000 UTC m=+934.064735558" observedRunningTime="2026-03-18 18:16:06.411501595 +0000 UTC m=+935.578686190" watchObservedRunningTime="2026-03-18 18:16:06.490400468 +0000 UTC m=+935.657585073" Mar 18 18:16:06.498935 master-0 kubenswrapper[30278]: I0318 18:16:06.498865 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-v9v5q" podStartSLOduration=5.010081519 podStartE2EDuration="23.498847257s" podCreationTimestamp="2026-03-18 18:15:43 +0000 UTC" 
firstStartedPulling="2026-03-18 18:15:46.303813827 +0000 UTC m=+915.470998422" lastFinishedPulling="2026-03-18 18:16:04.792579565 +0000 UTC m=+933.959764160" observedRunningTime="2026-03-18 18:16:06.473727377 +0000 UTC m=+935.640911972" watchObservedRunningTime="2026-03-18 18:16:06.498847257 +0000 UTC m=+935.666031852" Mar 18 18:16:06.516325 master-0 kubenswrapper[30278]: I0318 18:16:06.514725 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-vs6hj" event={"ID":"9fa45639-3436-43df-a879-b6445c664661","Type":"ContainerStarted","Data":"5d9db38438330f6bb676e6d6fb239a540e94bce0996c95ea889ba2798dabb9aa"} Mar 18 18:16:06.516325 master-0 kubenswrapper[30278]: I0318 18:16:06.515922 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-767865f676-vs6hj" Mar 18 18:16:06.542328 master-0 kubenswrapper[30278]: I0318 18:16:06.541245 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-j5p6q" event={"ID":"da9d67e1-3213-4c5a-9b44-b02d440b36e7","Type":"ContainerStarted","Data":"87bd5727782ad06a8b314dbcc6d81b0cab6e208c0db2261a2a22a0d6dc919336"} Mar 18 18:16:06.542328 master-0 kubenswrapper[30278]: I0318 18:16:06.542264 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-j5p6q" Mar 18 18:16:06.546308 master-0 kubenswrapper[30278]: I0318 18:16:06.545118 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-5hkw5" podStartSLOduration=7.102776789 podStartE2EDuration="24.545091017s" podCreationTimestamp="2026-03-18 18:15:42 +0000 UTC" firstStartedPulling="2026-03-18 18:15:46.040434995 +0000 UTC m=+915.207619590" lastFinishedPulling="2026-03-18 18:16:03.482749233 +0000 UTC 
m=+932.649933818" observedRunningTime="2026-03-18 18:16:06.533285298 +0000 UTC m=+935.700469893" watchObservedRunningTime="2026-03-18 18:16:06.545091017 +0000 UTC m=+935.712275612" Mar 18 18:16:06.582322 master-0 kubenswrapper[30278]: I0318 18:16:06.580651 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" event={"ID":"65f3dd0a-e9a1-4087-ba1a-47366cf25382","Type":"ContainerStarted","Data":"e9cfe301bfb688497b26debf45f53504199a3165815714aec460e70fb76e5b69"} Mar 18 18:16:06.607657 master-0 kubenswrapper[30278]: I0318 18:16:06.600234 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-659bd6b58d-q7g49" event={"ID":"08dec5b3-09c6-4aa4-8c40-544556d1b7d4","Type":"ContainerStarted","Data":"465f91b69472a812b968a36a4e6c65bdff1a7d5b737eb89e5183255e3ec72843"} Mar 18 18:16:06.607657 master-0 kubenswrapper[30278]: I0318 18:16:06.600386 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-659bd6b58d-q7g49" Mar 18 18:16:06.608112 master-0 kubenswrapper[30278]: I0318 18:16:06.593103 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-lkr87" podStartSLOduration=4.736962472 podStartE2EDuration="23.593076654s" podCreationTimestamp="2026-03-18 18:15:43 +0000 UTC" firstStartedPulling="2026-03-18 18:15:46.041315798 +0000 UTC m=+915.208500383" lastFinishedPulling="2026-03-18 18:16:04.89742995 +0000 UTC m=+934.064614565" observedRunningTime="2026-03-18 18:16:06.570329059 +0000 UTC m=+935.737513654" watchObservedRunningTime="2026-03-18 18:16:06.593076654 +0000 UTC m=+935.760261249" Mar 18 18:16:06.609297 master-0 kubenswrapper[30278]: I0318 18:16:06.609225 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-55f864c847-nml4w" event={"ID":"d3080dde-8e85-442a-ae2a-581507874a2d","Type":"ContainerStarted","Data":"36bc2198bf18afb7607d02a7261e50efe9ae5dcc0ec437872e33ee71187fb60e"} Mar 18 18:16:06.609659 master-0 kubenswrapper[30278]: I0318 18:16:06.609611 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-55f864c847-nml4w" Mar 18 18:16:06.630332 master-0 kubenswrapper[30278]: I0318 18:16:06.624060 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-767865f676-vs6hj" podStartSLOduration=5.207714051 podStartE2EDuration="24.624040682s" podCreationTimestamp="2026-03-18 18:15:42 +0000 UTC" firstStartedPulling="2026-03-18 18:15:45.376243563 +0000 UTC m=+914.543428158" lastFinishedPulling="2026-03-18 18:16:04.792570154 +0000 UTC m=+933.959754789" observedRunningTime="2026-03-18 18:16:06.62285596 +0000 UTC m=+935.790040555" watchObservedRunningTime="2026-03-18 18:16:06.624040682 +0000 UTC m=+935.791225277" Mar 18 18:16:06.737296 master-0 kubenswrapper[30278]: I0318 18:16:06.737164 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-qkpnz" podStartSLOduration=4.125265318 podStartE2EDuration="24.7371237s" podCreationTimestamp="2026-03-18 18:15:42 +0000 UTC" firstStartedPulling="2026-03-18 18:15:44.284620002 +0000 UTC m=+913.451804597" lastFinishedPulling="2026-03-18 18:16:04.896478384 +0000 UTC m=+934.063662979" observedRunningTime="2026-03-18 18:16:06.669873452 +0000 UTC m=+935.837058047" watchObservedRunningTime="2026-03-18 18:16:06.7371237 +0000 UTC m=+935.904308315" Mar 18 18:16:06.752303 master-0 kubenswrapper[30278]: I0318 18:16:06.749361 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-q5xdd" podStartSLOduration=4.58903546 podStartE2EDuration="24.74933154s" podCreationTimestamp="2026-03-18 18:15:42 +0000 UTC" firstStartedPulling="2026-03-18 18:15:44.737092689 +0000 UTC m=+913.904277284" lastFinishedPulling="2026-03-18 18:16:04.897388769 +0000 UTC m=+934.064573364" observedRunningTime="2026-03-18 18:16:06.746986507 +0000 UTC m=+935.914171102" watchObservedRunningTime="2026-03-18 18:16:06.74933154 +0000 UTC m=+935.916516125" Mar 18 18:16:06.907424 master-0 kubenswrapper[30278]: I0318 18:16:06.905087 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-j5p6q" podStartSLOduration=4.866444582 podStartE2EDuration="24.905059282s" podCreationTimestamp="2026-03-18 18:15:42 +0000 UTC" firstStartedPulling="2026-03-18 18:15:44.884963307 +0000 UTC m=+914.052147902" lastFinishedPulling="2026-03-18 18:16:04.923578007 +0000 UTC m=+934.090762602" observedRunningTime="2026-03-18 18:16:06.902884003 +0000 UTC m=+936.070068598" watchObservedRunningTime="2026-03-18 18:16:06.905059282 +0000 UTC m=+936.072243877" Mar 18 18:16:06.907424 master-0 kubenswrapper[30278]: I0318 18:16:06.906431 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-z9sth" podStartSLOduration=5.040143383 podStartE2EDuration="23.906424638s" podCreationTimestamp="2026-03-18 18:15:43 +0000 UTC" firstStartedPulling="2026-03-18 18:15:46.096120711 +0000 UTC m=+915.263305306" lastFinishedPulling="2026-03-18 18:16:04.962401966 +0000 UTC m=+934.129586561" observedRunningTime="2026-03-18 18:16:06.845689667 +0000 UTC m=+936.012874262" watchObservedRunningTime="2026-03-18 18:16:06.906424638 +0000 UTC m=+936.073609233" Mar 18 18:16:06.958313 master-0 kubenswrapper[30278]: I0318 18:16:06.945631 30278 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-stb7j" podStartSLOduration=5.217794133 podStartE2EDuration="24.945605258s" podCreationTimestamp="2026-03-18 18:15:42 +0000 UTC" firstStartedPulling="2026-03-18 18:15:45.064733199 +0000 UTC m=+914.231917794" lastFinishedPulling="2026-03-18 18:16:04.792544294 +0000 UTC m=+933.959728919" observedRunningTime="2026-03-18 18:16:06.9349516 +0000 UTC m=+936.102136195" watchObservedRunningTime="2026-03-18 18:16:06.945605258 +0000 UTC m=+936.112789853" Mar 18 18:16:07.120236 master-0 kubenswrapper[30278]: I0318 18:16:07.114700 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-7dcfq" podStartSLOduration=7.93554221 podStartE2EDuration="25.114678671s" podCreationTimestamp="2026-03-18 18:15:42 +0000 UTC" firstStartedPulling="2026-03-18 18:15:43.982122582 +0000 UTC m=+913.149307177" lastFinishedPulling="2026-03-18 18:16:01.161259043 +0000 UTC m=+930.328443638" observedRunningTime="2026-03-18 18:16:07.02665486 +0000 UTC m=+936.193839455" watchObservedRunningTime="2026-03-18 18:16:07.114678671 +0000 UTC m=+936.281863266" Mar 18 18:16:07.125292 master-0 kubenswrapper[30278]: I0318 18:16:07.123446 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-c674c5965-vf92l" podStartSLOduration=5.48452359 podStartE2EDuration="24.123434008s" podCreationTimestamp="2026-03-18 18:15:43 +0000 UTC" firstStartedPulling="2026-03-18 18:15:46.30393825 +0000 UTC m=+915.471122845" lastFinishedPulling="2026-03-18 18:16:04.942848638 +0000 UTC m=+934.110033263" observedRunningTime="2026-03-18 18:16:07.113664543 +0000 UTC m=+936.280849138" watchObservedRunningTime="2026-03-18 18:16:07.123434008 +0000 UTC m=+936.290618613" Mar 18 18:16:07.163410 master-0 kubenswrapper[30278]: I0318 18:16:07.156993 30278 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-hlkz4" podStartSLOduration=6.56996164 podStartE2EDuration="25.156968064s" podCreationTimestamp="2026-03-18 18:15:42 +0000 UTC" firstStartedPulling="2026-03-18 18:15:46.360977853 +0000 UTC m=+915.528162448" lastFinishedPulling="2026-03-18 18:16:04.947984277 +0000 UTC m=+934.115168872" observedRunningTime="2026-03-18 18:16:07.142224195 +0000 UTC m=+936.309408790" watchObservedRunningTime="2026-03-18 18:16:07.156968064 +0000 UTC m=+936.324152659" Mar 18 18:16:07.236337 master-0 kubenswrapper[30278]: I0318 18:16:07.235080 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9btcv" podStartSLOduration=6.437421516 podStartE2EDuration="25.235052566s" podCreationTimestamp="2026-03-18 18:15:42 +0000 UTC" firstStartedPulling="2026-03-18 18:15:46.100221401 +0000 UTC m=+915.267405996" lastFinishedPulling="2026-03-18 18:16:04.897852451 +0000 UTC m=+934.065037046" observedRunningTime="2026-03-18 18:16:07.209805933 +0000 UTC m=+936.376990528" watchObservedRunningTime="2026-03-18 18:16:07.235052566 +0000 UTC m=+936.402237171" Mar 18 18:16:07.236337 master-0 kubenswrapper[30278]: I0318 18:16:07.235626 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5784578c99-dx9nw" podStartSLOduration=5.359697354 podStartE2EDuration="24.235618051s" podCreationTimestamp="2026-03-18 18:15:43 +0000 UTC" firstStartedPulling="2026-03-18 18:15:46.044480024 +0000 UTC m=+915.211664619" lastFinishedPulling="2026-03-18 18:16:04.920400681 +0000 UTC m=+934.087585316" observedRunningTime="2026-03-18 18:16:07.234166502 +0000 UTC m=+936.401351097" watchObservedRunningTime="2026-03-18 18:16:07.235618051 +0000 UTC m=+936.402802636" Mar 18 18:16:07.552412 master-0 kubenswrapper[30278]: I0318 18:16:07.547047 30278 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-55f864c847-nml4w" podStartSLOduration=8.172143837 podStartE2EDuration="25.547020482s" podCreationTimestamp="2026-03-18 18:15:42 +0000 UTC" firstStartedPulling="2026-03-18 18:15:45.328617455 +0000 UTC m=+914.495802050" lastFinishedPulling="2026-03-18 18:16:02.7034941 +0000 UTC m=+931.870678695" observedRunningTime="2026-03-18 18:16:07.538911293 +0000 UTC m=+936.706095888" watchObservedRunningTime="2026-03-18 18:16:07.547020482 +0000 UTC m=+936.714205077" Mar 18 18:16:07.628609 master-0 kubenswrapper[30278]: I0318 18:16:07.628359 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9btcv" event={"ID":"9808351e-1785-48d6-a2fd-8953742f27cc","Type":"ContainerStarted","Data":"580d2e68b666a13942b3f16306daeb0f0926c87fa69125b0694773e0f969e1f1"} Mar 18 18:16:07.632516 master-0 kubenswrapper[30278]: I0318 18:16:07.631572 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-qkpnz" event={"ID":"cc44a223-9705-4e38-986f-24d296b1ab51","Type":"ContainerStarted","Data":"4021be738a003701b51bf8c0ab438c227a92c9b99d54fe956f2e28ae8953b6a6"} Mar 18 18:16:07.635757 master-0 kubenswrapper[30278]: I0318 18:16:07.635477 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-dx9nw" event={"ID":"ff1d10fa-b70d-439b-8183-dbdf8042e43d","Type":"ContainerStarted","Data":"eefd5767e0c3aa0e41f94453645b20d6896df7bfefbf788ea0a6a089ed1b835a"} Mar 18 18:16:07.662579 master-0 kubenswrapper[30278]: I0318 18:16:07.662497 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jfv7j" event={"ID":"6f5ebc48-9a75-41b9-a6d1-3e84c7ba6f54","Type":"ContainerStarted","Data":"6a4f87a89971a9a17746aa10c16ae91bfd7ec712e457322e01dcda448b7fed16"} 
Mar 18 18:16:08.037167 master-0 kubenswrapper[30278]: I0318 18:16:08.036987 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-659bd6b58d-q7g49" podStartSLOduration=6.420499549 podStartE2EDuration="26.036959562s" podCreationTimestamp="2026-03-18 18:15:42 +0000 UTC" firstStartedPulling="2026-03-18 18:15:45.326335423 +0000 UTC m=+914.493520018" lastFinishedPulling="2026-03-18 18:16:04.942795436 +0000 UTC m=+934.109980031" observedRunningTime="2026-03-18 18:16:08.029634223 +0000 UTC m=+937.196818828" watchObservedRunningTime="2026-03-18 18:16:08.036959562 +0000 UTC m=+937.204144177" Mar 18 18:16:08.778190 master-0 kubenswrapper[30278]: I0318 18:16:08.778041 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jfv7j" podStartSLOduration=7.207398969 podStartE2EDuration="25.778011591s" podCreationTimestamp="2026-03-18 18:15:43 +0000 UTC" firstStartedPulling="2026-03-18 18:15:46.473543256 +0000 UTC m=+915.640727851" lastFinishedPulling="2026-03-18 18:16:05.044155868 +0000 UTC m=+934.211340473" observedRunningTime="2026-03-18 18:16:08.774624 +0000 UTC m=+937.941808595" watchObservedRunningTime="2026-03-18 18:16:08.778011591 +0000 UTC m=+937.945196186" Mar 18 18:16:12.727844 master-0 kubenswrapper[30278]: I0318 18:16:12.727753 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" event={"ID":"d486893b-62ed-4907-a004-9f6bf4e0a79f","Type":"ContainerStarted","Data":"54e94fb253c5ef1e64ebafa405c6fd9f8fd8d5c386adf1f8166a21d7c29e3399"} Mar 18 18:16:12.729105 master-0 kubenswrapper[30278]: I0318 18:16:12.727909 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" Mar 18 18:16:12.729793 master-0 kubenswrapper[30278]: I0318 18:16:12.729735 
30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" event={"ID":"65f3dd0a-e9a1-4087-ba1a-47366cf25382","Type":"ContainerStarted","Data":"ad74a75774e8482788969128ab925197c12b3a2c282ba9ca7bba8ac3066426bb"} Mar 18 18:16:12.729932 master-0 kubenswrapper[30278]: I0318 18:16:12.729897 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" Mar 18 18:16:12.752722 master-0 kubenswrapper[30278]: I0318 18:16:12.752568 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" podStartSLOduration=24.795981781000002 podStartE2EDuration="30.752535874s" podCreationTimestamp="2026-03-18 18:15:42 +0000 UTC" firstStartedPulling="2026-03-18 18:16:05.724811374 +0000 UTC m=+934.891995969" lastFinishedPulling="2026-03-18 18:16:11.681365467 +0000 UTC m=+940.848550062" observedRunningTime="2026-03-18 18:16:12.748880066 +0000 UTC m=+941.916064671" watchObservedRunningTime="2026-03-18 18:16:12.752535874 +0000 UTC m=+941.919720469" Mar 18 18:16:12.798380 master-0 kubenswrapper[30278]: I0318 18:16:12.796005 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" podStartSLOduration=23.74188584 podStartE2EDuration="29.795961319s" podCreationTimestamp="2026-03-18 18:15:43 +0000 UTC" firstStartedPulling="2026-03-18 18:16:05.61665193 +0000 UTC m=+934.783836525" lastFinishedPulling="2026-03-18 18:16:11.670727389 +0000 UTC m=+940.837912004" observedRunningTime="2026-03-18 18:16:12.785900047 +0000 UTC m=+941.953084652" watchObservedRunningTime="2026-03-18 18:16:12.795961319 +0000 UTC m=+941.963145914" Mar 18 18:16:12.831347 master-0 kubenswrapper[30278]: I0318 18:16:12.831210 30278 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-qkpnz" Mar 18 18:16:13.038715 master-0 kubenswrapper[30278]: I0318 18:16:13.038599 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-7dcfq" Mar 18 18:16:13.172320 master-0 kubenswrapper[30278]: I0318 18:16:13.171571 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nmf4w" Mar 18 18:16:13.747747 master-0 kubenswrapper[30278]: I0318 18:16:13.747625 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-kmxft" Mar 18 18:16:13.946439 master-0 kubenswrapper[30278]: I0318 18:16:13.946345 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-q5xdd" Mar 18 18:16:13.990315 master-0 kubenswrapper[30278]: I0318 18:16:13.989517 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-j5p6q" Mar 18 18:16:14.009870 master-0 kubenswrapper[30278]: I0318 18:16:14.009747 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-stb7j" Mar 18 18:16:14.039062 master-0 kubenswrapper[30278]: I0318 18:16:14.038483 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-659bd6b58d-q7g49" Mar 18 18:16:14.145507 master-0 kubenswrapper[30278]: I0318 18:16:14.145455 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-55f864c847-nml4w" Mar 18 18:16:14.198527 master-0 
kubenswrapper[30278]: I0318 18:16:14.198464 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-5hkw5" Mar 18 18:16:14.205818 master-0 kubenswrapper[30278]: I0318 18:16:14.205775 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-767865f676-vs6hj" Mar 18 18:16:14.236004 master-0 kubenswrapper[30278]: I0318 18:16:14.234426 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-9btcv" Mar 18 18:16:14.258023 master-0 kubenswrapper[30278]: I0318 18:16:14.257968 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-hlkz4" Mar 18 18:16:14.315200 master-0 kubenswrapper[30278]: I0318 18:16:14.311569 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-884679f54-l66pc" Mar 18 18:16:14.338514 master-0 kubenswrapper[30278]: I0318 18:16:14.338446 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5784578c99-dx9nw" Mar 18 18:16:14.382825 master-0 kubenswrapper[30278]: I0318 18:16:14.382360 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-c674c5965-vf92l" Mar 18 18:16:14.388888 master-0 kubenswrapper[30278]: I0318 18:16:14.388821 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-z9sth" Mar 18 18:16:14.400624 master-0 kubenswrapper[30278]: I0318 18:16:14.400488 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-lkr87" Mar 18 18:16:14.467529 master-0 kubenswrapper[30278]: I0318 18:16:14.465860 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-v9v5q" Mar 18 18:16:16.421985 master-0 kubenswrapper[30278]: I0318 18:16:16.421899 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" Mar 18 18:16:16.422696 master-0 kubenswrapper[30278]: I0318 18:16:16.422044 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" Mar 18 18:16:16.427873 master-0 kubenswrapper[30278]: I0318 18:16:16.427794 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-webhook-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" (UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" Mar 18 18:16:16.428597 master-0 kubenswrapper[30278]: I0318 18:16:16.428538 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a79357fe-125e-464c-a801-0949a13db2d1-metrics-certs\") pod \"openstack-operator-controller-manager-64cc6d45b7-7xs4c\" 
(UID: \"a79357fe-125e-464c-a801-0949a13db2d1\") " pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" Mar 18 18:16:16.623621 master-0 kubenswrapper[30278]: I0318 18:16:16.623528 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" Mar 18 18:16:17.456384 master-0 kubenswrapper[30278]: I0318 18:16:17.456296 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c"] Mar 18 18:16:17.799896 master-0 kubenswrapper[30278]: I0318 18:16:17.799746 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" event={"ID":"a79357fe-125e-464c-a801-0949a13db2d1","Type":"ContainerStarted","Data":"0d381974296788be53fffbc6458b3a4eff4118bed6b484747d8efe22d25dbad6"} Mar 18 18:16:17.799896 master-0 kubenswrapper[30278]: I0318 18:16:17.799822 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" event={"ID":"a79357fe-125e-464c-a801-0949a13db2d1","Type":"ContainerStarted","Data":"41a281fcc21666dae36507e93730828286342eb495091a6d2931372d9dcaed77"} Mar 18 18:16:17.800205 master-0 kubenswrapper[30278]: I0318 18:16:17.799957 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" Mar 18 18:16:17.844131 master-0 kubenswrapper[30278]: I0318 18:16:17.842396 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" podStartSLOduration=34.842366747 podStartE2EDuration="34.842366747s" podCreationTimestamp="2026-03-18 18:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-18 18:16:17.827341531 +0000 UTC m=+946.994526146" watchObservedRunningTime="2026-03-18 18:16:17.842366747 +0000 UTC m=+947.009551362" Mar 18 18:16:19.556967 master-0 kubenswrapper[30278]: I0318 18:16:19.556860 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7dd6bb94c9-mxxlh" Mar 18 18:16:19.902472 master-0 kubenswrapper[30278]: I0318 18:16:19.902264 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-89d64c458-jnvcb" Mar 18 18:16:26.635576 master-0 kubenswrapper[30278]: I0318 18:16:26.635487 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-64cc6d45b7-7xs4c" Mar 18 18:17:11.298311 master-0 kubenswrapper[30278]: I0318 18:17:11.298158 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55994974c5-l544m"] Mar 18 18:17:11.315304 master-0 kubenswrapper[30278]: I0318 18:17:11.300490 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55994974c5-l544m" Mar 18 18:17:11.315304 master-0 kubenswrapper[30278]: I0318 18:17:11.303107 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 18 18:17:11.315304 master-0 kubenswrapper[30278]: I0318 18:17:11.304616 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 18 18:17:11.333230 master-0 kubenswrapper[30278]: I0318 18:17:11.326310 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 18 18:17:11.339465 master-0 kubenswrapper[30278]: I0318 18:17:11.338966 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55994974c5-l544m"] Mar 18 18:17:11.373075 master-0 kubenswrapper[30278]: I0318 18:17:11.373020 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de6412a1-7511-4f9b-a1e6-bb1735327597-config\") pod \"dnsmasq-dns-55994974c5-l544m\" (UID: \"de6412a1-7511-4f9b-a1e6-bb1735327597\") " pod="openstack/dnsmasq-dns-55994974c5-l544m" Mar 18 18:17:11.373463 master-0 kubenswrapper[30278]: I0318 18:17:11.373087 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qsp8\" (UniqueName: \"kubernetes.io/projected/de6412a1-7511-4f9b-a1e6-bb1735327597-kube-api-access-4qsp8\") pod \"dnsmasq-dns-55994974c5-l544m\" (UID: \"de6412a1-7511-4f9b-a1e6-bb1735327597\") " pod="openstack/dnsmasq-dns-55994974c5-l544m" Mar 18 18:17:11.378730 master-0 kubenswrapper[30278]: I0318 18:17:11.378674 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d859fb5df-r468z"] Mar 18 18:17:11.393482 master-0 kubenswrapper[30278]: I0318 18:17:11.393386 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d859fb5df-r468z" Mar 18 18:17:11.396315 master-0 kubenswrapper[30278]: I0318 18:17:11.396251 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 18 18:17:11.433288 master-0 kubenswrapper[30278]: I0318 18:17:11.433209 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d859fb5df-r468z"] Mar 18 18:17:11.476444 master-0 kubenswrapper[30278]: I0318 18:17:11.476382 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qsp8\" (UniqueName: \"kubernetes.io/projected/de6412a1-7511-4f9b-a1e6-bb1735327597-kube-api-access-4qsp8\") pod \"dnsmasq-dns-55994974c5-l544m\" (UID: \"de6412a1-7511-4f9b-a1e6-bb1735327597\") " pod="openstack/dnsmasq-dns-55994974c5-l544m" Mar 18 18:17:11.477290 master-0 kubenswrapper[30278]: I0318 18:17:11.477251 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86671053-9c92-43bd-b6e2-3655bc6d3e3f-dns-svc\") pod \"dnsmasq-dns-5d859fb5df-r468z\" (UID: \"86671053-9c92-43bd-b6e2-3655bc6d3e3f\") " pod="openstack/dnsmasq-dns-5d859fb5df-r468z" Mar 18 18:17:11.477591 master-0 kubenswrapper[30278]: I0318 18:17:11.477576 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86671053-9c92-43bd-b6e2-3655bc6d3e3f-config\") pod \"dnsmasq-dns-5d859fb5df-r468z\" (UID: \"86671053-9c92-43bd-b6e2-3655bc6d3e3f\") " pod="openstack/dnsmasq-dns-5d859fb5df-r468z" Mar 18 18:17:11.477872 master-0 kubenswrapper[30278]: I0318 18:17:11.477851 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwvwm\" (UniqueName: \"kubernetes.io/projected/86671053-9c92-43bd-b6e2-3655bc6d3e3f-kube-api-access-dwvwm\") pod \"dnsmasq-dns-5d859fb5df-r468z\" (UID: 
\"86671053-9c92-43bd-b6e2-3655bc6d3e3f\") " pod="openstack/dnsmasq-dns-5d859fb5df-r468z" Mar 18 18:17:11.478043 master-0 kubenswrapper[30278]: I0318 18:17:11.478028 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de6412a1-7511-4f9b-a1e6-bb1735327597-config\") pod \"dnsmasq-dns-55994974c5-l544m\" (UID: \"de6412a1-7511-4f9b-a1e6-bb1735327597\") " pod="openstack/dnsmasq-dns-55994974c5-l544m" Mar 18 18:17:11.482312 master-0 kubenswrapper[30278]: I0318 18:17:11.482234 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de6412a1-7511-4f9b-a1e6-bb1735327597-config\") pod \"dnsmasq-dns-55994974c5-l544m\" (UID: \"de6412a1-7511-4f9b-a1e6-bb1735327597\") " pod="openstack/dnsmasq-dns-55994974c5-l544m" Mar 18 18:17:11.499607 master-0 kubenswrapper[30278]: I0318 18:17:11.499582 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qsp8\" (UniqueName: \"kubernetes.io/projected/de6412a1-7511-4f9b-a1e6-bb1735327597-kube-api-access-4qsp8\") pod \"dnsmasq-dns-55994974c5-l544m\" (UID: \"de6412a1-7511-4f9b-a1e6-bb1735327597\") " pod="openstack/dnsmasq-dns-55994974c5-l544m" Mar 18 18:17:11.582158 master-0 kubenswrapper[30278]: I0318 18:17:11.580066 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86671053-9c92-43bd-b6e2-3655bc6d3e3f-config\") pod \"dnsmasq-dns-5d859fb5df-r468z\" (UID: \"86671053-9c92-43bd-b6e2-3655bc6d3e3f\") " pod="openstack/dnsmasq-dns-5d859fb5df-r468z" Mar 18 18:17:11.582158 master-0 kubenswrapper[30278]: I0318 18:17:11.580151 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwvwm\" (UniqueName: \"kubernetes.io/projected/86671053-9c92-43bd-b6e2-3655bc6d3e3f-kube-api-access-dwvwm\") pod \"dnsmasq-dns-5d859fb5df-r468z\" (UID: 
\"86671053-9c92-43bd-b6e2-3655bc6d3e3f\") " pod="openstack/dnsmasq-dns-5d859fb5df-r468z" Mar 18 18:17:11.582158 master-0 kubenswrapper[30278]: I0318 18:17:11.580217 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86671053-9c92-43bd-b6e2-3655bc6d3e3f-dns-svc\") pod \"dnsmasq-dns-5d859fb5df-r468z\" (UID: \"86671053-9c92-43bd-b6e2-3655bc6d3e3f\") " pod="openstack/dnsmasq-dns-5d859fb5df-r468z" Mar 18 18:17:11.582158 master-0 kubenswrapper[30278]: I0318 18:17:11.581753 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86671053-9c92-43bd-b6e2-3655bc6d3e3f-dns-svc\") pod \"dnsmasq-dns-5d859fb5df-r468z\" (UID: \"86671053-9c92-43bd-b6e2-3655bc6d3e3f\") " pod="openstack/dnsmasq-dns-5d859fb5df-r468z" Mar 18 18:17:11.582963 master-0 kubenswrapper[30278]: I0318 18:17:11.582922 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86671053-9c92-43bd-b6e2-3655bc6d3e3f-config\") pod \"dnsmasq-dns-5d859fb5df-r468z\" (UID: \"86671053-9c92-43bd-b6e2-3655bc6d3e3f\") " pod="openstack/dnsmasq-dns-5d859fb5df-r468z" Mar 18 18:17:11.606940 master-0 kubenswrapper[30278]: I0318 18:17:11.606289 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwvwm\" (UniqueName: \"kubernetes.io/projected/86671053-9c92-43bd-b6e2-3655bc6d3e3f-kube-api-access-dwvwm\") pod \"dnsmasq-dns-5d859fb5df-r468z\" (UID: \"86671053-9c92-43bd-b6e2-3655bc6d3e3f\") " pod="openstack/dnsmasq-dns-5d859fb5df-r468z" Mar 18 18:17:11.683047 master-0 kubenswrapper[30278]: I0318 18:17:11.682951 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55994974c5-l544m" Mar 18 18:17:11.727027 master-0 kubenswrapper[30278]: I0318 18:17:11.726836 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d859fb5df-r468z" Mar 18 18:17:12.172156 master-0 kubenswrapper[30278]: I0318 18:17:12.170784 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55994974c5-l544m"] Mar 18 18:17:12.394975 master-0 kubenswrapper[30278]: I0318 18:17:12.394912 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d859fb5df-r468z"] Mar 18 18:17:12.566651 master-0 kubenswrapper[30278]: I0318 18:17:12.566246 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d859fb5df-r468z" event={"ID":"86671053-9c92-43bd-b6e2-3655bc6d3e3f","Type":"ContainerStarted","Data":"cd61cd5e180010d2a4daabae171646f4c170c86823c64a2eabbe1d7bd2f7679b"} Mar 18 18:17:12.568530 master-0 kubenswrapper[30278]: I0318 18:17:12.567809 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55994974c5-l544m" event={"ID":"de6412a1-7511-4f9b-a1e6-bb1735327597","Type":"ContainerStarted","Data":"46387d93a78c255a10532a6accc4c949e405452319187f74ea1d7795eafe455e"} Mar 18 18:17:13.982306 master-0 kubenswrapper[30278]: I0318 18:17:13.978650 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55994974c5-l544m"] Mar 18 18:17:14.144364 master-0 kubenswrapper[30278]: I0318 18:17:14.130606 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6877bbfb4f-tg9rw"] Mar 18 18:17:14.144364 master-0 kubenswrapper[30278]: I0318 18:17:14.132908 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" Mar 18 18:17:14.183991 master-0 kubenswrapper[30278]: I0318 18:17:14.180880 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6877bbfb4f-tg9rw"] Mar 18 18:17:14.312023 master-0 kubenswrapper[30278]: I0318 18:17:14.310717 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lrhn\" (UniqueName: \"kubernetes.io/projected/b558c2d8-aed9-4381-9a37-c753f736e7f2-kube-api-access-9lrhn\") pod \"dnsmasq-dns-6877bbfb4f-tg9rw\" (UID: \"b558c2d8-aed9-4381-9a37-c753f736e7f2\") " pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" Mar 18 18:17:14.312023 master-0 kubenswrapper[30278]: I0318 18:17:14.310839 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b558c2d8-aed9-4381-9a37-c753f736e7f2-dns-svc\") pod \"dnsmasq-dns-6877bbfb4f-tg9rw\" (UID: \"b558c2d8-aed9-4381-9a37-c753f736e7f2\") " pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" Mar 18 18:17:14.312023 master-0 kubenswrapper[30278]: I0318 18:17:14.311012 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b558c2d8-aed9-4381-9a37-c753f736e7f2-config\") pod \"dnsmasq-dns-6877bbfb4f-tg9rw\" (UID: \"b558c2d8-aed9-4381-9a37-c753f736e7f2\") " pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" Mar 18 18:17:14.415920 master-0 kubenswrapper[30278]: I0318 18:17:14.415761 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lrhn\" (UniqueName: \"kubernetes.io/projected/b558c2d8-aed9-4381-9a37-c753f736e7f2-kube-api-access-9lrhn\") pod \"dnsmasq-dns-6877bbfb4f-tg9rw\" (UID: \"b558c2d8-aed9-4381-9a37-c753f736e7f2\") " pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" Mar 18 18:17:14.415920 master-0 kubenswrapper[30278]: I0318 18:17:14.415857 30278 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b558c2d8-aed9-4381-9a37-c753f736e7f2-dns-svc\") pod \"dnsmasq-dns-6877bbfb4f-tg9rw\" (UID: \"b558c2d8-aed9-4381-9a37-c753f736e7f2\") " pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" Mar 18 18:17:14.416261 master-0 kubenswrapper[30278]: I0318 18:17:14.415967 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b558c2d8-aed9-4381-9a37-c753f736e7f2-config\") pod \"dnsmasq-dns-6877bbfb4f-tg9rw\" (UID: \"b558c2d8-aed9-4381-9a37-c753f736e7f2\") " pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" Mar 18 18:17:14.417024 master-0 kubenswrapper[30278]: I0318 18:17:14.416990 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b558c2d8-aed9-4381-9a37-c753f736e7f2-config\") pod \"dnsmasq-dns-6877bbfb4f-tg9rw\" (UID: \"b558c2d8-aed9-4381-9a37-c753f736e7f2\") " pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" Mar 18 18:17:14.422588 master-0 kubenswrapper[30278]: I0318 18:17:14.417994 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b558c2d8-aed9-4381-9a37-c753f736e7f2-dns-svc\") pod \"dnsmasq-dns-6877bbfb4f-tg9rw\" (UID: \"b558c2d8-aed9-4381-9a37-c753f736e7f2\") " pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" Mar 18 18:17:14.480657 master-0 kubenswrapper[30278]: I0318 18:17:14.476877 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lrhn\" (UniqueName: \"kubernetes.io/projected/b558c2d8-aed9-4381-9a37-c753f736e7f2-kube-api-access-9lrhn\") pod \"dnsmasq-dns-6877bbfb4f-tg9rw\" (UID: \"b558c2d8-aed9-4381-9a37-c753f736e7f2\") " pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" Mar 18 18:17:14.531057 master-0 kubenswrapper[30278]: I0318 18:17:14.521616 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" Mar 18 18:17:14.822606 master-0 kubenswrapper[30278]: I0318 18:17:14.822290 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d859fb5df-r468z"] Mar 18 18:17:14.864031 master-0 kubenswrapper[30278]: I0318 18:17:14.858142 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f75dd7cd9-cwrjw"] Mar 18 18:17:14.871235 master-0 kubenswrapper[30278]: I0318 18:17:14.867682 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" Mar 18 18:17:14.897768 master-0 kubenswrapper[30278]: I0318 18:17:14.897648 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f75dd7cd9-cwrjw"] Mar 18 18:17:15.013093 master-0 kubenswrapper[30278]: I0318 18:17:15.013020 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a622380-55da-4d69-a65a-5db6c07eb3d7-dns-svc\") pod \"dnsmasq-dns-6f75dd7cd9-cwrjw\" (UID: \"2a622380-55da-4d69-a65a-5db6c07eb3d7\") " pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" Mar 18 18:17:15.013847 master-0 kubenswrapper[30278]: I0318 18:17:15.013605 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a622380-55da-4d69-a65a-5db6c07eb3d7-config\") pod \"dnsmasq-dns-6f75dd7cd9-cwrjw\" (UID: \"2a622380-55da-4d69-a65a-5db6c07eb3d7\") " pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" Mar 18 18:17:15.013847 master-0 kubenswrapper[30278]: I0318 18:17:15.013738 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldhnf\" (UniqueName: \"kubernetes.io/projected/2a622380-55da-4d69-a65a-5db6c07eb3d7-kube-api-access-ldhnf\") pod \"dnsmasq-dns-6f75dd7cd9-cwrjw\" (UID: 
\"2a622380-55da-4d69-a65a-5db6c07eb3d7\") " pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" Mar 18 18:17:15.126399 master-0 kubenswrapper[30278]: I0318 18:17:15.119803 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a622380-55da-4d69-a65a-5db6c07eb3d7-config\") pod \"dnsmasq-dns-6f75dd7cd9-cwrjw\" (UID: \"2a622380-55da-4d69-a65a-5db6c07eb3d7\") " pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" Mar 18 18:17:15.126399 master-0 kubenswrapper[30278]: I0318 18:17:15.119961 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldhnf\" (UniqueName: \"kubernetes.io/projected/2a622380-55da-4d69-a65a-5db6c07eb3d7-kube-api-access-ldhnf\") pod \"dnsmasq-dns-6f75dd7cd9-cwrjw\" (UID: \"2a622380-55da-4d69-a65a-5db6c07eb3d7\") " pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" Mar 18 18:17:15.126399 master-0 kubenswrapper[30278]: I0318 18:17:15.120042 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a622380-55da-4d69-a65a-5db6c07eb3d7-dns-svc\") pod \"dnsmasq-dns-6f75dd7cd9-cwrjw\" (UID: \"2a622380-55da-4d69-a65a-5db6c07eb3d7\") " pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" Mar 18 18:17:15.126399 master-0 kubenswrapper[30278]: I0318 18:17:15.124585 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a622380-55da-4d69-a65a-5db6c07eb3d7-dns-svc\") pod \"dnsmasq-dns-6f75dd7cd9-cwrjw\" (UID: \"2a622380-55da-4d69-a65a-5db6c07eb3d7\") " pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" Mar 18 18:17:15.135558 master-0 kubenswrapper[30278]: I0318 18:17:15.128433 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a622380-55da-4d69-a65a-5db6c07eb3d7-config\") pod \"dnsmasq-dns-6f75dd7cd9-cwrjw\" (UID: \"2a622380-55da-4d69-a65a-5db6c07eb3d7\") " 
pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" Mar 18 18:17:15.161030 master-0 kubenswrapper[30278]: I0318 18:17:15.160953 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldhnf\" (UniqueName: \"kubernetes.io/projected/2a622380-55da-4d69-a65a-5db6c07eb3d7-kube-api-access-ldhnf\") pod \"dnsmasq-dns-6f75dd7cd9-cwrjw\" (UID: \"2a622380-55da-4d69-a65a-5db6c07eb3d7\") " pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" Mar 18 18:17:15.258759 master-0 kubenswrapper[30278]: I0318 18:17:15.258484 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" Mar 18 18:17:15.292151 master-0 kubenswrapper[30278]: I0318 18:17:15.292063 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6877bbfb4f-tg9rw"] Mar 18 18:17:15.349661 master-0 kubenswrapper[30278]: W0318 18:17:15.349593 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb558c2d8_aed9_4381_9a37_c753f736e7f2.slice/crio-0a8362c1e41667b292febc047a750f33f858d4c7a34ebee20941e6d523802ffd WatchSource:0}: Error finding container 0a8362c1e41667b292febc047a750f33f858d4c7a34ebee20941e6d523802ffd: Status 404 returned error can't find the container with id 0a8362c1e41667b292febc047a750f33f858d4c7a34ebee20941e6d523802ffd Mar 18 18:17:15.699024 master-0 kubenswrapper[30278]: I0318 18:17:15.697378 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" event={"ID":"b558c2d8-aed9-4381-9a37-c753f736e7f2","Type":"ContainerStarted","Data":"0a8362c1e41667b292febc047a750f33f858d4c7a34ebee20941e6d523802ffd"} Mar 18 18:17:15.945850 master-0 kubenswrapper[30278]: I0318 18:17:15.945788 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f75dd7cd9-cwrjw"] Mar 18 18:17:16.762739 master-0 kubenswrapper[30278]: I0318 18:17:16.761752 30278 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" event={"ID":"2a622380-55da-4d69-a65a-5db6c07eb3d7","Type":"ContainerStarted","Data":"9c8c85e966a4ae85504ed577d9344e74d5710e8fac922e2b0697d449410aeeda"} Mar 18 18:17:18.374359 master-0 kubenswrapper[30278]: I0318 18:17:18.372706 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 18 18:17:18.375100 master-0 kubenswrapper[30278]: I0318 18:17:18.375013 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.386259 master-0 kubenswrapper[30278]: I0318 18:17:18.386168 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 18 18:17:18.386639 master-0 kubenswrapper[30278]: I0318 18:17:18.386534 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 18 18:17:18.387007 master-0 kubenswrapper[30278]: I0318 18:17:18.386743 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 18 18:17:18.387007 master-0 kubenswrapper[30278]: I0318 18:17:18.386750 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 18 18:17:18.387456 master-0 kubenswrapper[30278]: I0318 18:17:18.387178 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 18 18:17:18.387456 master-0 kubenswrapper[30278]: I0318 18:17:18.387351 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 18 18:17:18.412303 master-0 kubenswrapper[30278]: I0318 18:17:18.407457 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 18 18:17:18.608341 master-0 kubenswrapper[30278]: I0318 18:17:18.601760 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a24f1688-7c02-4ac5-af8a-0a5c3847755a-config-data\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.608341 master-0 kubenswrapper[30278]: I0318 18:17:18.601825 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a24f1688-7c02-4ac5-af8a-0a5c3847755a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.608341 master-0 kubenswrapper[30278]: I0318 18:17:18.601861 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a24f1688-7c02-4ac5-af8a-0a5c3847755a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.608341 master-0 kubenswrapper[30278]: I0318 18:17:18.601910 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a24f1688-7c02-4ac5-af8a-0a5c3847755a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.608341 master-0 kubenswrapper[30278]: I0318 18:17:18.601943 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a24f1688-7c02-4ac5-af8a-0a5c3847755a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.608341 master-0 kubenswrapper[30278]: I0318 18:17:18.601964 30278 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a24f1688-7c02-4ac5-af8a-0a5c3847755a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.608341 master-0 kubenswrapper[30278]: I0318 18:17:18.601982 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a24f1688-7c02-4ac5-af8a-0a5c3847755a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.608341 master-0 kubenswrapper[30278]: I0318 18:17:18.602033 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtm8p\" (UniqueName: \"kubernetes.io/projected/a24f1688-7c02-4ac5-af8a-0a5c3847755a-kube-api-access-vtm8p\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.608341 master-0 kubenswrapper[30278]: I0318 18:17:18.602060 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c8ce544a-ee75-42cb-9e84-ec48cf2706b9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7899ca0d-506d-408d-a7de-f4bbe4704a46\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.608341 master-0 kubenswrapper[30278]: I0318 18:17:18.602093 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a24f1688-7c02-4ac5-af8a-0a5c3847755a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.608341 master-0 kubenswrapper[30278]: I0318 18:17:18.602121 30278 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a24f1688-7c02-4ac5-af8a-0a5c3847755a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.706133 master-0 kubenswrapper[30278]: I0318 18:17:18.705776 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a24f1688-7c02-4ac5-af8a-0a5c3847755a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.709680 master-0 kubenswrapper[30278]: I0318 18:17:18.709613 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a24f1688-7c02-4ac5-af8a-0a5c3847755a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.709776 master-0 kubenswrapper[30278]: I0318 18:17:18.709710 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a24f1688-7c02-4ac5-af8a-0a5c3847755a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.709776 master-0 kubenswrapper[30278]: I0318 18:17:18.709760 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a24f1688-7c02-4ac5-af8a-0a5c3847755a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.709998 master-0 kubenswrapper[30278]: I0318 18:17:18.709973 30278 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-vtm8p\" (UniqueName: \"kubernetes.io/projected/a24f1688-7c02-4ac5-af8a-0a5c3847755a-kube-api-access-vtm8p\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.710056 master-0 kubenswrapper[30278]: I0318 18:17:18.710023 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c8ce544a-ee75-42cb-9e84-ec48cf2706b9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7899ca0d-506d-408d-a7de-f4bbe4704a46\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.710125 master-0 kubenswrapper[30278]: I0318 18:17:18.710103 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a24f1688-7c02-4ac5-af8a-0a5c3847755a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.710186 master-0 kubenswrapper[30278]: I0318 18:17:18.710175 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a24f1688-7c02-4ac5-af8a-0a5c3847755a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.710908 master-0 kubenswrapper[30278]: I0318 18:17:18.710884 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a24f1688-7c02-4ac5-af8a-0a5c3847755a-config-data\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.710969 master-0 kubenswrapper[30278]: I0318 18:17:18.710917 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/a24f1688-7c02-4ac5-af8a-0a5c3847755a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.711012 master-0 kubenswrapper[30278]: I0318 18:17:18.710977 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a24f1688-7c02-4ac5-af8a-0a5c3847755a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.720998 master-0 kubenswrapper[30278]: I0318 18:17:18.720791 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a24f1688-7c02-4ac5-af8a-0a5c3847755a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.729963 master-0 kubenswrapper[30278]: I0318 18:17:18.729908 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a24f1688-7c02-4ac5-af8a-0a5c3847755a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.731047 master-0 kubenswrapper[30278]: I0318 18:17:18.730999 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a24f1688-7c02-4ac5-af8a-0a5c3847755a-config-data\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.731347 master-0 kubenswrapper[30278]: I0318 18:17:18.731297 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a24f1688-7c02-4ac5-af8a-0a5c3847755a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.731712 master-0 kubenswrapper[30278]: I0318 18:17:18.731682 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a24f1688-7c02-4ac5-af8a-0a5c3847755a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.731913 master-0 kubenswrapper[30278]: I0318 18:17:18.731885 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a24f1688-7c02-4ac5-af8a-0a5c3847755a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.734400 master-0 kubenswrapper[30278]: I0318 18:17:18.734033 30278 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 18:17:18.734400 master-0 kubenswrapper[30278]: I0318 18:17:18.734110 30278 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c8ce544a-ee75-42cb-9e84-ec48cf2706b9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7899ca0d-506d-408d-a7de-f4bbe4704a46\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/590e641af827bef5c9d188c3d2739391abb44f031a230f410995a9bf66984c9e/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.734545 master-0 kubenswrapper[30278]: I0318 18:17:18.734388 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a24f1688-7c02-4ac5-af8a-0a5c3847755a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.736257 master-0 kubenswrapper[30278]: I0318 18:17:18.736210 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a24f1688-7c02-4ac5-af8a-0a5c3847755a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.749384 master-0 kubenswrapper[30278]: I0318 18:17:18.749214 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a24f1688-7c02-4ac5-af8a-0a5c3847755a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:18.781099 master-0 kubenswrapper[30278]: I0318 18:17:18.778931 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtm8p\" (UniqueName: \"kubernetes.io/projected/a24f1688-7c02-4ac5-af8a-0a5c3847755a-kube-api-access-vtm8p\") pod \"rabbitmq-server-0\" (UID: 
\"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0" Mar 18 18:17:19.197820 master-0 kubenswrapper[30278]: I0318 18:17:19.197677 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 18 18:17:19.216730 master-0 kubenswrapper[30278]: I0318 18:17:19.216538 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 18 18:17:19.217419 master-0 kubenswrapper[30278]: I0318 18:17:19.217392 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 18 18:17:19.248112 master-0 kubenswrapper[30278]: I0318 18:17:19.248024 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 18 18:17:19.275585 master-0 kubenswrapper[30278]: I0318 18:17:19.275477 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 18 18:17:19.291966 master-0 kubenswrapper[30278]: I0318 18:17:19.291833 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 18 18:17:19.403344 master-0 kubenswrapper[30278]: I0318 18:17:19.403166 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7cbbe035-fa50-48c9-84ca-845e93085070-config-data\") pod \"memcached-0\" (UID: \"7cbbe035-fa50-48c9-84ca-845e93085070\") " pod="openstack/memcached-0" Mar 18 18:17:19.403344 master-0 kubenswrapper[30278]: I0318 18:17:19.403255 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7cbbe035-fa50-48c9-84ca-845e93085070-kolla-config\") pod \"memcached-0\" (UID: \"7cbbe035-fa50-48c9-84ca-845e93085070\") " pod="openstack/memcached-0" Mar 18 18:17:19.403344 master-0 kubenswrapper[30278]: I0318 18:17:19.403365 30278 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbbe035-fa50-48c9-84ca-845e93085070-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7cbbe035-fa50-48c9-84ca-845e93085070\") " pod="openstack/memcached-0" Mar 18 18:17:19.403344 master-0 kubenswrapper[30278]: I0318 18:17:19.403599 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cbbe035-fa50-48c9-84ca-845e93085070-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7cbbe035-fa50-48c9-84ca-845e93085070\") " pod="openstack/memcached-0" Mar 18 18:17:19.403344 master-0 kubenswrapper[30278]: I0318 18:17:19.403738 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xj89\" (UniqueName: \"kubernetes.io/projected/7cbbe035-fa50-48c9-84ca-845e93085070-kube-api-access-5xj89\") pod \"memcached-0\" (UID: \"7cbbe035-fa50-48c9-84ca-845e93085070\") " pod="openstack/memcached-0" Mar 18 18:17:19.519821 master-0 kubenswrapper[30278]: I0318 18:17:19.506193 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7cbbe035-fa50-48c9-84ca-845e93085070-config-data\") pod \"memcached-0\" (UID: \"7cbbe035-fa50-48c9-84ca-845e93085070\") " pod="openstack/memcached-0" Mar 18 18:17:19.519821 master-0 kubenswrapper[30278]: I0318 18:17:19.506304 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7cbbe035-fa50-48c9-84ca-845e93085070-kolla-config\") pod \"memcached-0\" (UID: \"7cbbe035-fa50-48c9-84ca-845e93085070\") " pod="openstack/memcached-0" Mar 18 18:17:19.519821 master-0 kubenswrapper[30278]: I0318 18:17:19.506459 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/7cbbe035-fa50-48c9-84ca-845e93085070-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7cbbe035-fa50-48c9-84ca-845e93085070\") " pod="openstack/memcached-0" Mar 18 18:17:19.519821 master-0 kubenswrapper[30278]: I0318 18:17:19.506524 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cbbe035-fa50-48c9-84ca-845e93085070-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7cbbe035-fa50-48c9-84ca-845e93085070\") " pod="openstack/memcached-0" Mar 18 18:17:19.519821 master-0 kubenswrapper[30278]: I0318 18:17:19.506562 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xj89\" (UniqueName: \"kubernetes.io/projected/7cbbe035-fa50-48c9-84ca-845e93085070-kube-api-access-5xj89\") pod \"memcached-0\" (UID: \"7cbbe035-fa50-48c9-84ca-845e93085070\") " pod="openstack/memcached-0" Mar 18 18:17:19.519821 master-0 kubenswrapper[30278]: I0318 18:17:19.508207 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7cbbe035-fa50-48c9-84ca-845e93085070-kolla-config\") pod \"memcached-0\" (UID: \"7cbbe035-fa50-48c9-84ca-845e93085070\") " pod="openstack/memcached-0" Mar 18 18:17:19.519821 master-0 kubenswrapper[30278]: I0318 18:17:19.509738 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7cbbe035-fa50-48c9-84ca-845e93085070-config-data\") pod \"memcached-0\" (UID: \"7cbbe035-fa50-48c9-84ca-845e93085070\") " pod="openstack/memcached-0" Mar 18 18:17:19.533340 master-0 kubenswrapper[30278]: I0318 18:17:19.533285 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cbbe035-fa50-48c9-84ca-845e93085070-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"7cbbe035-fa50-48c9-84ca-845e93085070\") " pod="openstack/memcached-0" Mar 18 18:17:19.534883 master-0 kubenswrapper[30278]: I0318 18:17:19.534842 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xj89\" (UniqueName: \"kubernetes.io/projected/7cbbe035-fa50-48c9-84ca-845e93085070-kube-api-access-5xj89\") pod \"memcached-0\" (UID: \"7cbbe035-fa50-48c9-84ca-845e93085070\") " pod="openstack/memcached-0" Mar 18 18:17:19.540505 master-0 kubenswrapper[30278]: I0318 18:17:19.540440 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cbbe035-fa50-48c9-84ca-845e93085070-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7cbbe035-fa50-48c9-84ca-845e93085070\") " pod="openstack/memcached-0" Mar 18 18:17:19.599926 master-0 kubenswrapper[30278]: I0318 18:17:19.599854 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 18 18:17:20.006308 master-0 kubenswrapper[30278]: I0318 18:17:20.005962 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 18 18:17:20.017622 master-0 kubenswrapper[30278]: I0318 18:17:20.008166 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.017622 master-0 kubenswrapper[30278]: I0318 18:17:20.013883 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 18 18:17:20.017622 master-0 kubenswrapper[30278]: I0318 18:17:20.014216 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 18 18:17:20.017622 master-0 kubenswrapper[30278]: I0318 18:17:20.014367 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 18 18:17:20.017622 master-0 kubenswrapper[30278]: I0318 18:17:20.015040 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 18 18:17:20.017622 master-0 kubenswrapper[30278]: I0318 18:17:20.015152 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 18 18:17:20.031468 master-0 kubenswrapper[30278]: I0318 18:17:20.027495 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 18 18:17:20.156901 master-0 kubenswrapper[30278]: I0318 18:17:20.154958 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1ec57481-0836-4458-a2bc-e7ce64175f3a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.156901 master-0 kubenswrapper[30278]: I0318 18:17:20.155029 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1ec57481-0836-4458-a2bc-e7ce64175f3a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.156901 master-0 kubenswrapper[30278]: I0318 18:17:20.155068 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djt6h\" (UniqueName: \"kubernetes.io/projected/1ec57481-0836-4458-a2bc-e7ce64175f3a-kube-api-access-djt6h\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.156901 master-0 kubenswrapper[30278]: I0318 18:17:20.155121 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1ec57481-0836-4458-a2bc-e7ce64175f3a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.156901 master-0 kubenswrapper[30278]: I0318 18:17:20.155157 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1ec57481-0836-4458-a2bc-e7ce64175f3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.156901 master-0 kubenswrapper[30278]: I0318 18:17:20.155191 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1ec57481-0836-4458-a2bc-e7ce64175f3a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.156901 master-0 kubenswrapper[30278]: I0318 18:17:20.155223 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b8615c89-47bf-46e6-9065-c631d23ede51\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^5b1fd229-31f2-4cf1-8e32-6f39ee23a936\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.156901 master-0 kubenswrapper[30278]: I0318 18:17:20.155294 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1ec57481-0836-4458-a2bc-e7ce64175f3a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.156901 master-0 kubenswrapper[30278]: I0318 18:17:20.155342 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1ec57481-0836-4458-a2bc-e7ce64175f3a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.156901 master-0 kubenswrapper[30278]: I0318 18:17:20.155365 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1ec57481-0836-4458-a2bc-e7ce64175f3a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.156901 master-0 kubenswrapper[30278]: I0318 18:17:20.155397 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ec57481-0836-4458-a2bc-e7ce64175f3a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.156901 master-0 kubenswrapper[30278]: I0318 18:17:20.156455 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 
18 18:17:20.281659 master-0 kubenswrapper[30278]: I0318 18:17:20.281588 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djt6h\" (UniqueName: \"kubernetes.io/projected/1ec57481-0836-4458-a2bc-e7ce64175f3a-kube-api-access-djt6h\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.281767 master-0 kubenswrapper[30278]: I0318 18:17:20.281695 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1ec57481-0836-4458-a2bc-e7ce64175f3a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.281767 master-0 kubenswrapper[30278]: I0318 18:17:20.281729 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1ec57481-0836-4458-a2bc-e7ce64175f3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.281767 master-0 kubenswrapper[30278]: I0318 18:17:20.281759 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1ec57481-0836-4458-a2bc-e7ce64175f3a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.281870 master-0 kubenswrapper[30278]: I0318 18:17:20.281793 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b8615c89-47bf-46e6-9065-c631d23ede51\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5b1fd229-31f2-4cf1-8e32-6f39ee23a936\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.281870 master-0 kubenswrapper[30278]: I0318 18:17:20.281830 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1ec57481-0836-4458-a2bc-e7ce64175f3a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.281870 master-0 kubenswrapper[30278]: I0318 18:17:20.281863 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1ec57481-0836-4458-a2bc-e7ce64175f3a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.281963 master-0 kubenswrapper[30278]: I0318 18:17:20.281885 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1ec57481-0836-4458-a2bc-e7ce64175f3a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.281963 master-0 kubenswrapper[30278]: I0318 18:17:20.281907 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ec57481-0836-4458-a2bc-e7ce64175f3a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.288844 master-0 kubenswrapper[30278]: I0318 18:17:20.285199 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1ec57481-0836-4458-a2bc-e7ce64175f3a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.294366 
master-0 kubenswrapper[30278]: I0318 18:17:20.292511 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1ec57481-0836-4458-a2bc-e7ce64175f3a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.294366 master-0 kubenswrapper[30278]: I0318 18:17:20.292570 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1ec57481-0836-4458-a2bc-e7ce64175f3a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.294366 master-0 kubenswrapper[30278]: I0318 18:17:20.293159 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1ec57481-0836-4458-a2bc-e7ce64175f3a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.310991 master-0 kubenswrapper[30278]: I0318 18:17:20.310775 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ec57481-0836-4458-a2bc-e7ce64175f3a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.317130 master-0 kubenswrapper[30278]: I0318 18:17:20.316790 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1ec57481-0836-4458-a2bc-e7ce64175f3a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.335637 master-0 kubenswrapper[30278]: I0318 
18:17:20.335318 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1ec57481-0836-4458-a2bc-e7ce64175f3a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.370048 master-0 kubenswrapper[30278]: I0318 18:17:20.369129 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1ec57481-0836-4458-a2bc-e7ce64175f3a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.370048 master-0 kubenswrapper[30278]: I0318 18:17:20.369196 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1ec57481-0836-4458-a2bc-e7ce64175f3a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:17:20.370855 master-0 kubenswrapper[30278]: I0318 18:17:20.370771 30278 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 18:17:20.370950 master-0 kubenswrapper[30278]: I0318 18:17:20.370913 30278 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b8615c89-47bf-46e6-9065-c631d23ede51\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5b1fd229-31f2-4cf1-8e32-6f39ee23a936\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/196d787d09348cca6dc590f1348c004e13049483f5b938211930d04570d64d0e/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Mar 18 18:17:20.371013 master-0 kubenswrapper[30278]: I0318 18:17:20.370965 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1ec57481-0836-4458-a2bc-e7ce64175f3a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 18 18:17:20.384582 master-0 kubenswrapper[30278]: I0318 18:17:20.384519 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djt6h\" (UniqueName: \"kubernetes.io/projected/1ec57481-0836-4458-a2bc-e7ce64175f3a-kube-api-access-djt6h\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 18 18:17:20.436320 master-0 kubenswrapper[30278]: I0318 18:17:20.433511 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c8ce544a-ee75-42cb-9e84-ec48cf2706b9\" (UniqueName: \"kubernetes.io/csi/topolvm.io^7899ca0d-506d-408d-a7de-f4bbe4704a46\") pod \"rabbitmq-server-0\" (UID: \"a24f1688-7c02-4ac5-af8a-0a5c3847755a\") " pod="openstack/rabbitmq-server-0"
Mar 18 18:17:20.451939 master-0 kubenswrapper[30278]: I0318 18:17:20.449039 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1ec57481-0836-4458-a2bc-e7ce64175f3a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 18 18:17:20.553312 master-0 kubenswrapper[30278]: I0318 18:17:20.550882 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 18 18:17:20.572357 master-0 kubenswrapper[30278]: I0318 18:17:20.570619 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Mar 18 18:17:20.575435 master-0 kubenswrapper[30278]: I0318 18:17:20.574191 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 18 18:17:20.586335 master-0 kubenswrapper[30278]: I0318 18:17:20.584659 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Mar 18 18:17:20.586335 master-0 kubenswrapper[30278]: I0318 18:17:20.584934 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Mar 18 18:17:20.586335 master-0 kubenswrapper[30278]: I0318 18:17:20.585146 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Mar 18 18:17:20.641340 master-0 kubenswrapper[30278]: I0318 18:17:20.640967 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 18 18:17:20.694150 master-0 kubenswrapper[30278]: I0318 18:17:20.694044 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Mar 18 18:17:20.722486 master-0 kubenswrapper[30278]: I0318 18:17:20.720566 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1bd5e562-8afd-40be-a340-38b540cff718\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ab8aa365-a151-4277-b88a-36a592a72e15\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.722486 master-0 kubenswrapper[30278]: I0318 18:17:20.720839 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl8mp\" (UniqueName: \"kubernetes.io/projected/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-kube-api-access-fl8mp\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.722486 master-0 kubenswrapper[30278]: I0318 18:17:20.720951 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.722486 master-0 kubenswrapper[30278]: I0318 18:17:20.721701 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-kolla-config\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.722486 master-0 kubenswrapper[30278]: I0318 18:17:20.721774 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.722486 master-0 kubenswrapper[30278]: I0318 18:17:20.721897 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.722486 master-0 kubenswrapper[30278]: I0318 18:17:20.721974 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.722486 master-0 kubenswrapper[30278]: I0318 18:17:20.722116 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-config-data-default\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.831594 master-0 kubenswrapper[30278]: I0318 18:17:20.831378 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-kolla-config\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.832001 master-0 kubenswrapper[30278]: I0318 18:17:20.831967 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.834905 master-0 kubenswrapper[30278]: I0318 18:17:20.833912 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.834905 master-0 kubenswrapper[30278]: I0318 18:17:20.833978 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.834905 master-0 kubenswrapper[30278]: I0318 18:17:20.834134 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-config-data-default\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.834905 master-0 kubenswrapper[30278]: I0318 18:17:20.834178 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1bd5e562-8afd-40be-a340-38b540cff718\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ab8aa365-a151-4277-b88a-36a592a72e15\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.834905 master-0 kubenswrapper[30278]: I0318 18:17:20.834221 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl8mp\" (UniqueName: \"kubernetes.io/projected/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-kube-api-access-fl8mp\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.834905 master-0 kubenswrapper[30278]: I0318 18:17:20.834252 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.841903 master-0 kubenswrapper[30278]: I0318 18:17:20.838905 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-kolla-config\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.844984 master-0 kubenswrapper[30278]: I0318 18:17:20.844248 30278 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 18 18:17:20.844984 master-0 kubenswrapper[30278]: I0318 18:17:20.844330 30278 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1bd5e562-8afd-40be-a340-38b540cff718\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ab8aa365-a151-4277-b88a-36a592a72e15\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/b63b149005ffaeb47040c98cb3040342f08a419ec5482c0c6037ee5c64424b67/globalmount\"" pod="openstack/openstack-galera-0"
Mar 18 18:17:20.851852 master-0 kubenswrapper[30278]: I0318 18:17:20.851805 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-config-data-default\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.854145 master-0 kubenswrapper[30278]: I0318 18:17:20.853980 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.854800 master-0 kubenswrapper[30278]: I0318 18:17:20.854756 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.855775 master-0 kubenswrapper[30278]: I0318 18:17:20.855713 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl8mp\" (UniqueName: \"kubernetes.io/projected/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-kube-api-access-fl8mp\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.865021 master-0 kubenswrapper[30278]: I0318 18:17:20.864960 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.872385 master-0 kubenswrapper[30278]: I0318 18:17:20.872353 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a06b9e0-a605-44e2-b6e2-63b15a5bb700-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:20.994733 master-0 kubenswrapper[30278]: I0318 18:17:20.994605 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7cbbe035-fa50-48c9-84ca-845e93085070","Type":"ContainerStarted","Data":"c95bc5dc16b075f3b1268abb274451a03df28c2d6761e5623a63ee27ab2a007a"}
Mar 18 18:17:21.583213 master-0 kubenswrapper[30278]: I0318 18:17:21.581673 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 18 18:17:21.993971 master-0 kubenswrapper[30278]: I0318 18:17:21.993905 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b8615c89-47bf-46e6-9065-c631d23ede51\" (UniqueName: \"kubernetes.io/csi/topolvm.io^5b1fd229-31f2-4cf1-8e32-6f39ee23a936\") pod \"rabbitmq-cell1-server-0\" (UID: \"1ec57481-0836-4458-a2bc-e7ce64175f3a\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 18 18:17:22.020158 master-0 kubenswrapper[30278]: I0318 18:17:22.020090 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a24f1688-7c02-4ac5-af8a-0a5c3847755a","Type":"ContainerStarted","Data":"49790a1410e5a12365a256f1163148112d077a2a88a4242552064165a5376047"}
Mar 18 18:17:22.169028 master-0 kubenswrapper[30278]: I0318 18:17:22.167918 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 18 18:17:22.324332 master-0 kubenswrapper[30278]: I0318 18:17:22.313247 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 18 18:17:22.332257 master-0 kubenswrapper[30278]: I0318 18:17:22.330077 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.337372 master-0 kubenswrapper[30278]: I0318 18:17:22.335980 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Mar 18 18:17:22.339920 master-0 kubenswrapper[30278]: I0318 18:17:22.339670 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Mar 18 18:17:22.346266 master-0 kubenswrapper[30278]: I0318 18:17:22.345561 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Mar 18 18:17:22.376913 master-0 kubenswrapper[30278]: I0318 18:17:22.376394 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 18 18:17:22.473716 master-0 kubenswrapper[30278]: I0318 18:17:22.473548 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/df68dba7-dacb-48bb-9433-12ad79aba028-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.473716 master-0 kubenswrapper[30278]: I0318 18:17:22.473676 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wscdv\" (UniqueName: \"kubernetes.io/projected/df68dba7-dacb-48bb-9433-12ad79aba028-kube-api-access-wscdv\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.474012 master-0 kubenswrapper[30278]: I0318 18:17:22.473734 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a57b6e2d-f2e6-4288-b29d-62e564a6f476\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a9dec071-845b-45b1-9003-54530975b2f4\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.474012 master-0 kubenswrapper[30278]: I0318 18:17:22.473770 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/df68dba7-dacb-48bb-9433-12ad79aba028-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.474012 master-0 kubenswrapper[30278]: I0318 18:17:22.473827 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df68dba7-dacb-48bb-9433-12ad79aba028-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.475028 master-0 kubenswrapper[30278]: I0318 18:17:22.474522 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/df68dba7-dacb-48bb-9433-12ad79aba028-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.475106 master-0 kubenswrapper[30278]: I0318 18:17:22.475054 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/df68dba7-dacb-48bb-9433-12ad79aba028-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.475106 master-0 kubenswrapper[30278]: I0318 18:17:22.475079 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df68dba7-dacb-48bb-9433-12ad79aba028-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.579307 master-0 kubenswrapper[30278]: I0318 18:17:22.579224 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/df68dba7-dacb-48bb-9433-12ad79aba028-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.579995 master-0 kubenswrapper[30278]: I0318 18:17:22.579372 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df68dba7-dacb-48bb-9433-12ad79aba028-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.579995 master-0 kubenswrapper[30278]: I0318 18:17:22.579555 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/df68dba7-dacb-48bb-9433-12ad79aba028-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.579995 master-0 kubenswrapper[30278]: I0318 18:17:22.579588 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/df68dba7-dacb-48bb-9433-12ad79aba028-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.579995 master-0 kubenswrapper[30278]: I0318 18:17:22.579624 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df68dba7-dacb-48bb-9433-12ad79aba028-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.579995 master-0 kubenswrapper[30278]: I0318 18:17:22.579723 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/df68dba7-dacb-48bb-9433-12ad79aba028-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.579995 master-0 kubenswrapper[30278]: I0318 18:17:22.579769 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wscdv\" (UniqueName: \"kubernetes.io/projected/df68dba7-dacb-48bb-9433-12ad79aba028-kube-api-access-wscdv\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.579995 master-0 kubenswrapper[30278]: I0318 18:17:22.579815 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a57b6e2d-f2e6-4288-b29d-62e564a6f476\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a9dec071-845b-45b1-9003-54530975b2f4\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.584770 master-0 kubenswrapper[30278]: I0318 18:17:22.583632 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/df68dba7-dacb-48bb-9433-12ad79aba028-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.589752 master-0 kubenswrapper[30278]: I0318 18:17:22.585398 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/df68dba7-dacb-48bb-9433-12ad79aba028-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.589752 master-0 kubenswrapper[30278]: I0318 18:17:22.585816 30278 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 18 18:17:22.589752 master-0 kubenswrapper[30278]: I0318 18:17:22.589156 30278 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a57b6e2d-f2e6-4288-b29d-62e564a6f476\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a9dec071-845b-45b1-9003-54530975b2f4\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/2adb76af1bdfc86377e5ceb10275aea5afbe3692597138f58c496d8c241a4d76/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.589752 master-0 kubenswrapper[30278]: I0318 18:17:22.589164 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/df68dba7-dacb-48bb-9433-12ad79aba028-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.592672 master-0 kubenswrapper[30278]: I0318 18:17:22.591442 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df68dba7-dacb-48bb-9433-12ad79aba028-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.614528 master-0 kubenswrapper[30278]: I0318 18:17:22.614465 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df68dba7-dacb-48bb-9433-12ad79aba028-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.620647 master-0 kubenswrapper[30278]: I0318 18:17:22.619469 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/df68dba7-dacb-48bb-9433-12ad79aba028-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:22.648861 master-0 kubenswrapper[30278]: I0318 18:17:22.646009 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wscdv\" (UniqueName: \"kubernetes.io/projected/df68dba7-dacb-48bb-9433-12ad79aba028-kube-api-access-wscdv\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0"
Mar 18 18:17:23.164505 master-0 kubenswrapper[30278]: I0318 18:17:23.164429 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1bd5e562-8afd-40be-a340-38b540cff718\" (UniqueName: \"kubernetes.io/csi/topolvm.io^ab8aa365-a151-4277-b88a-36a592a72e15\") pod \"openstack-galera-0\" (UID: \"3a06b9e0-a605-44e2-b6e2-63b15a5bb700\") " pod="openstack/openstack-galera-0"
Mar 18 18:17:23.383481 master-0 kubenswrapper[30278]: I0318 18:17:23.383387 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 18 18:17:23.516263 master-0 kubenswrapper[30278]: I0318 18:17:23.515884 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xntzs"]
Mar 18 18:17:23.522618 master-0 kubenswrapper[30278]: I0318 18:17:23.518240 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.532066 master-0 kubenswrapper[30278]: I0318 18:17:23.525026 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Mar 18 18:17:23.532066 master-0 kubenswrapper[30278]: I0318 18:17:23.525357 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Mar 18 18:17:23.560750 master-0 kubenswrapper[30278]: I0318 18:17:23.546113 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-9qq6l"]
Mar 18 18:17:23.560750 master-0 kubenswrapper[30278]: I0318 18:17:23.551002 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.560750 master-0 kubenswrapper[30278]: I0318 18:17:23.557545 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xntzs"]
Mar 18 18:17:23.612254 master-0 kubenswrapper[30278]: I0318 18:17:23.611815 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9qq6l"]
Mar 18 18:17:23.673549 master-0 kubenswrapper[30278]: I0318 18:17:23.669686 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bb722697-8531-46a1-a93f-babc070522f4-etc-ovs\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.674023 master-0 kubenswrapper[30278]: I0318 18:17:23.673960 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bb722697-8531-46a1-a93f-babc070522f4-var-log\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.674114 master-0 kubenswrapper[30278]: I0318 18:17:23.674049 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-scripts\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.674161 master-0 kubenswrapper[30278]: I0318 18:17:23.674129 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng92v\" (UniqueName: \"kubernetes.io/projected/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-kube-api-access-ng92v\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.674199 master-0 kubenswrapper[30278]: I0318 18:17:23.674191 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb722697-8531-46a1-a93f-babc070522f4-var-run\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.674352 master-0 kubenswrapper[30278]: I0318 18:17:23.674288 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb722697-8531-46a1-a93f-babc070522f4-scripts\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.674352 master-0 kubenswrapper[30278]: I0318 18:17:23.674326 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bb722697-8531-46a1-a93f-babc070522f4-var-lib\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.674443 master-0 kubenswrapper[30278]: I0318 18:17:23.674401 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-ovn-controller-tls-certs\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.677062 master-0 kubenswrapper[30278]: I0318 18:17:23.677008 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvf5z\" (UniqueName: \"kubernetes.io/projected/bb722697-8531-46a1-a93f-babc070522f4-kube-api-access-hvf5z\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.677178 master-0 kubenswrapper[30278]: I0318 18:17:23.677154 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-var-log-ovn\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.677219 master-0 kubenswrapper[30278]: I0318 18:17:23.677199 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-combined-ca-bundle\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.677305 master-0 kubenswrapper[30278]: I0318 18:17:23.677252 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-var-run\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.678367 master-0 kubenswrapper[30278]: I0318 18:17:23.678071 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-var-run-ovn\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.786178 master-0 kubenswrapper[30278]: I0318 18:17:23.784796 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bb722697-8531-46a1-a93f-babc070522f4-var-log\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.786178 master-0 kubenswrapper[30278]: I0318 18:17:23.784902 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-scripts\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.786178 master-0 kubenswrapper[30278]: I0318 18:17:23.784973 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng92v\" (UniqueName: \"kubernetes.io/projected/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-kube-api-access-ng92v\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.790841 master-0 kubenswrapper[30278]: I0318 18:17:23.787661 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb722697-8531-46a1-a93f-babc070522f4-var-run\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.790841 master-0 kubenswrapper[30278]: I0318 18:17:23.787856 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb722697-8531-46a1-a93f-babc070522f4-scripts\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.790841 master-0 kubenswrapper[30278]: I0318 18:17:23.787898 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bb722697-8531-46a1-a93f-babc070522f4-var-lib\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.790841 master-0 kubenswrapper[30278]: I0318 18:17:23.788040 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-ovn-controller-tls-certs\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.790841 master-0 kubenswrapper[30278]: I0318 18:17:23.788111 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvf5z\" (UniqueName: \"kubernetes.io/projected/bb722697-8531-46a1-a93f-babc070522f4-kube-api-access-hvf5z\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.790841 master-0 kubenswrapper[30278]: I0318 18:17:23.788173 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-var-log-ovn\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.790841 master-0 kubenswrapper[30278]: I0318 18:17:23.788215 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-combined-ca-bundle\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.790841 master-0 kubenswrapper[30278]: I0318 18:17:23.789807 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-var-run\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.792537 master-0 kubenswrapper[30278]: I0318 18:17:23.792472 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-scripts\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.799837 master-0 kubenswrapper[30278]: I0318 18:17:23.799790 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-var-run-ovn\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.800031 master-0 kubenswrapper[30278]: I0318 18:17:23.799882 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bb722697-8531-46a1-a93f-babc070522f4-etc-ovs\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.800296 master-0 kubenswrapper[30278]: I0318 18:17:23.800229 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/bb722697-8531-46a1-a93f-babc070522f4-etc-ovs\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.801781 master-0 kubenswrapper[30278]: I0318 18:17:23.801540 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/bb722697-8531-46a1-a93f-babc070522f4-var-log\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.801781 master-0 kubenswrapper[30278]: I0318 18:17:23.800682 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-var-run-ovn\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.802711 master-0 kubenswrapper[30278]: I0318 18:17:23.802635 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-var-run\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs"
Mar 18 18:17:23.803035 master-0 kubenswrapper[30278]: I0318 18:17:23.802740 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/bb722697-8531-46a1-a93f-babc070522f4-var-lib\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l"
Mar 18 18:17:23.803035 master-0 kubenswrapper[30278]: I0318 18:17:23.802871 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb722697-8531-46a1-a93f-babc070522f4-var-run\") pod \"ovn-controller-ovs-9qq6l\" (UID:
\"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l" Mar 18 18:17:23.803035 master-0 kubenswrapper[30278]: I0318 18:17:23.803006 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-var-log-ovn\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs" Mar 18 18:17:23.805655 master-0 kubenswrapper[30278]: I0318 18:17:23.805036 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-ovn-controller-tls-certs\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs" Mar 18 18:17:23.805655 master-0 kubenswrapper[30278]: I0318 18:17:23.805176 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb722697-8531-46a1-a93f-babc070522f4-scripts\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " pod="openstack/ovn-controller-ovs-9qq6l" Mar 18 18:17:23.825635 master-0 kubenswrapper[30278]: I0318 18:17:23.825530 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng92v\" (UniqueName: \"kubernetes.io/projected/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-kube-api-access-ng92v\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs" Mar 18 18:17:23.835870 master-0 kubenswrapper[30278]: I0318 18:17:23.831800 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvf5z\" (UniqueName: \"kubernetes.io/projected/bb722697-8531-46a1-a93f-babc070522f4-kube-api-access-hvf5z\") pod \"ovn-controller-ovs-9qq6l\" (UID: \"bb722697-8531-46a1-a93f-babc070522f4\") " 
pod="openstack/ovn-controller-ovs-9qq6l" Mar 18 18:17:23.856089 master-0 kubenswrapper[30278]: I0318 18:17:23.855948 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e01e85f2-9a8b-4862-ad33-959e38bfbc7c-combined-ca-bundle\") pod \"ovn-controller-xntzs\" (UID: \"e01e85f2-9a8b-4862-ad33-959e38bfbc7c\") " pod="openstack/ovn-controller-xntzs" Mar 18 18:17:23.913391 master-0 kubenswrapper[30278]: I0318 18:17:23.912158 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xntzs" Mar 18 18:17:23.947533 master-0 kubenswrapper[30278]: I0318 18:17:23.946156 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9qq6l" Mar 18 18:17:24.240306 master-0 kubenswrapper[30278]: I0318 18:17:24.240115 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a57b6e2d-f2e6-4288-b29d-62e564a6f476\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a9dec071-845b-45b1-9003-54530975b2f4\") pod \"openstack-cell1-galera-0\" (UID: \"df68dba7-dacb-48bb-9433-12ad79aba028\") " pod="openstack/openstack-cell1-galera-0" Mar 18 18:17:24.476922 master-0 kubenswrapper[30278]: I0318 18:17:24.476855 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 18 18:17:26.915335 master-0 kubenswrapper[30278]: I0318 18:17:26.915108 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 18 18:17:26.917880 master-0 kubenswrapper[30278]: I0318 18:17:26.917824 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:26.922195 master-0 kubenswrapper[30278]: I0318 18:17:26.922120 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 18 18:17:26.936516 master-0 kubenswrapper[30278]: I0318 18:17:26.931436 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 18 18:17:26.936516 master-0 kubenswrapper[30278]: I0318 18:17:26.931654 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 18 18:17:26.936516 master-0 kubenswrapper[30278]: I0318 18:17:26.931818 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 18 18:17:26.956452 master-0 kubenswrapper[30278]: I0318 18:17:26.939744 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 18 18:17:27.111077 master-0 kubenswrapper[30278]: I0318 18:17:27.110724 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxnsn\" (UniqueName: \"kubernetes.io/projected/4047014a-de6e-447d-983b-973a84e7478b-kube-api-access-wxnsn\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.111077 master-0 kubenswrapper[30278]: I0318 18:17:27.110830 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4047014a-de6e-447d-983b-973a84e7478b-config\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.111474 master-0 kubenswrapper[30278]: I0318 18:17:27.111095 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/4047014a-de6e-447d-983b-973a84e7478b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.111583 master-0 kubenswrapper[30278]: I0318 18:17:27.111237 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8e8216a7-c67e-4791-8f0f-f50de466fb2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3baf9ff5-27d8-42a8-be3b-a753bfcf8d29\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.111682 master-0 kubenswrapper[30278]: I0318 18:17:27.111660 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4047014a-de6e-447d-983b-973a84e7478b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.111847 master-0 kubenswrapper[30278]: I0318 18:17:27.111814 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4047014a-de6e-447d-983b-973a84e7478b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.112018 master-0 kubenswrapper[30278]: I0318 18:17:27.111994 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4047014a-de6e-447d-983b-973a84e7478b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.112131 master-0 kubenswrapper[30278]: I0318 18:17:27.112109 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4047014a-de6e-447d-983b-973a84e7478b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.218435 master-0 kubenswrapper[30278]: I0318 18:17:27.216158 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4047014a-de6e-447d-983b-973a84e7478b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.218435 master-0 kubenswrapper[30278]: I0318 18:17:27.216325 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxnsn\" (UniqueName: \"kubernetes.io/projected/4047014a-de6e-447d-983b-973a84e7478b-kube-api-access-wxnsn\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.218435 master-0 kubenswrapper[30278]: I0318 18:17:27.216367 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4047014a-de6e-447d-983b-973a84e7478b-config\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.218435 master-0 kubenswrapper[30278]: I0318 18:17:27.216413 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4047014a-de6e-447d-983b-973a84e7478b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.218435 master-0 kubenswrapper[30278]: I0318 18:17:27.216443 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8e8216a7-c67e-4791-8f0f-f50de466fb2f\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^3baf9ff5-27d8-42a8-be3b-a753bfcf8d29\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.218435 master-0 kubenswrapper[30278]: I0318 18:17:27.216471 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4047014a-de6e-447d-983b-973a84e7478b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.218435 master-0 kubenswrapper[30278]: I0318 18:17:27.216506 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4047014a-de6e-447d-983b-973a84e7478b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.218435 master-0 kubenswrapper[30278]: I0318 18:17:27.216543 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4047014a-de6e-447d-983b-973a84e7478b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.227321 master-0 kubenswrapper[30278]: I0318 18:17:27.223437 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4047014a-de6e-447d-983b-973a84e7478b-config\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.227321 master-0 kubenswrapper[30278]: I0318 18:17:27.223842 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4047014a-de6e-447d-983b-973a84e7478b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: 
\"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.227321 master-0 kubenswrapper[30278]: I0318 18:17:27.225245 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4047014a-de6e-447d-983b-973a84e7478b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.232319 master-0 kubenswrapper[30278]: I0318 18:17:27.232158 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4047014a-de6e-447d-983b-973a84e7478b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.240326 master-0 kubenswrapper[30278]: I0318 18:17:27.233173 30278 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
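The entries above trace the kubelet's per-volume lifecycle for each pod: `operationExecutor.VerifyControllerAttachedVolume started`, then `operationExecutor.MountVolume started`, then `MountVolume.SetUp succeeded` (and, for CSI volumes whose driver does not advertise the STAGE_UNSTAGE_VOLUME capability, the `MountDevice` step is skipped, as the `csi_attacher.go` line shows). A minimal sketch of extracting these events from such a capture — assuming the klog wrapping shown here, where inner quotes appear escaped as `\"` in the raw journald text:

```python
import re

# Matches the kubelet reconciler / operation-generator volume messages seen above,
# e.g.: ... "MountVolume.SetUp succeeded for volume \"scripts\" ... " pod="openstack/ovsdbserver-nb-0"
# Inner quotes are the literal two characters backslash + quote in the raw log text.
VOL_RE = re.compile(
    r'"(?P<op>operationExecutor\.\w+ started|MountVolume\.\w+ succeeded)'
    r' for volume \\"(?P<volume>[^\\"]+)\\"'
    r'.*?pod="(?P<pod>[^"]+)"'
)

def mount_events(lines):
    """Yield (pod, volume, operation) for each volume-lifecycle log line."""
    for line in lines:
        m = VOL_RE.search(line)
        if m:
            yield m.group("pod"), m.group("volume"), m.group("op")
```

Grouping the yielded tuples by `(pod, volume)` makes it easy to spot volumes that started mounting but never logged `SetUp succeeded`.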
Mar 18 18:17:27.240326 master-0 kubenswrapper[30278]: I0318 18:17:27.233205 30278 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8e8216a7-c67e-4791-8f0f-f50de466fb2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3baf9ff5-27d8-42a8-be3b-a753bfcf8d29\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/eb2847c8cfda5ec44a89e14bdc92323def29e68db2d3a6df1ccfcc1cd2a50837/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.240326 master-0 kubenswrapper[30278]: I0318 18:17:27.233821 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4047014a-de6e-447d-983b-973a84e7478b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.240326 master-0 kubenswrapper[30278]: I0318 18:17:27.236249 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4047014a-de6e-447d-983b-973a84e7478b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.249675 master-0 kubenswrapper[30278]: I0318 18:17:27.249629 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxnsn\" (UniqueName: \"kubernetes.io/projected/4047014a-de6e-447d-983b-973a84e7478b-kube-api-access-wxnsn\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:27.988349 master-0 kubenswrapper[30278]: I0318 18:17:27.981043 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 18 18:17:27.988349 master-0 kubenswrapper[30278]: I0318 18:17:27.984597 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:27.990545 master-0 kubenswrapper[30278]: I0318 18:17:27.990495 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 18 18:17:27.990973 master-0 kubenswrapper[30278]: I0318 18:17:27.990954 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 18 18:17:27.991388 master-0 kubenswrapper[30278]: I0318 18:17:27.991365 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 18 18:17:28.041553 master-0 kubenswrapper[30278]: I0318 18:17:28.040556 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 18 18:17:28.154421 master-0 kubenswrapper[30278]: I0318 18:17:28.154348 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j87g9\" (UniqueName: \"kubernetes.io/projected/cbc42adf-4d99-42bb-b262-0f4163e358b8-kube-api-access-j87g9\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.154880 master-0 kubenswrapper[30278]: I0318 18:17:28.154475 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbc42adf-4d99-42bb-b262-0f4163e358b8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.154880 master-0 kubenswrapper[30278]: I0318 18:17:28.154510 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbc42adf-4d99-42bb-b262-0f4163e358b8-config\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 
18:17:28.155031 master-0 kubenswrapper[30278]: I0318 18:17:28.154995 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cbc42adf-4d99-42bb-b262-0f4163e358b8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.155912 master-0 kubenswrapper[30278]: I0318 18:17:28.155858 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a9585120-5867-4805-bd6b-205ce19607bb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^99283db5-f71b-45e0-aded-a3dd3b44bfbd\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.156454 master-0 kubenswrapper[30278]: I0318 18:17:28.156016 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc42adf-4d99-42bb-b262-0f4163e358b8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.156454 master-0 kubenswrapper[30278]: I0318 18:17:28.156150 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbc42adf-4d99-42bb-b262-0f4163e358b8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.156454 master-0 kubenswrapper[30278]: I0318 18:17:28.156235 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbc42adf-4d99-42bb-b262-0f4163e358b8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " 
pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.258690 master-0 kubenswrapper[30278]: I0318 18:17:28.258505 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j87g9\" (UniqueName: \"kubernetes.io/projected/cbc42adf-4d99-42bb-b262-0f4163e358b8-kube-api-access-j87g9\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.258690 master-0 kubenswrapper[30278]: I0318 18:17:28.258598 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbc42adf-4d99-42bb-b262-0f4163e358b8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.258690 master-0 kubenswrapper[30278]: I0318 18:17:28.258621 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbc42adf-4d99-42bb-b262-0f4163e358b8-config\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.258690 master-0 kubenswrapper[30278]: I0318 18:17:28.258653 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cbc42adf-4d99-42bb-b262-0f4163e358b8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.259101 master-0 kubenswrapper[30278]: I0318 18:17:28.258705 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a9585120-5867-4805-bd6b-205ce19607bb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^99283db5-f71b-45e0-aded-a3dd3b44bfbd\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.259101 master-0 
kubenswrapper[30278]: I0318 18:17:28.258736 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc42adf-4d99-42bb-b262-0f4163e358b8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.259101 master-0 kubenswrapper[30278]: I0318 18:17:28.258767 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cbc42adf-4d99-42bb-b262-0f4163e358b8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.259101 master-0 kubenswrapper[30278]: I0318 18:17:28.258789 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbc42adf-4d99-42bb-b262-0f4163e358b8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.259890 master-0 kubenswrapper[30278]: I0318 18:17:28.259857 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cbc42adf-4d99-42bb-b262-0f4163e358b8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.260850 master-0 kubenswrapper[30278]: I0318 18:17:28.260630 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbc42adf-4d99-42bb-b262-0f4163e358b8-config\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.261423 master-0 kubenswrapper[30278]: I0318 18:17:28.261391 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/cbc42adf-4d99-42bb-b262-0f4163e358b8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.263097 master-0 kubenswrapper[30278]: I0318 18:17:28.263071 30278 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 18:17:28.263170 master-0 kubenswrapper[30278]: I0318 18:17:28.263111 30278 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a9585120-5867-4805-bd6b-205ce19607bb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^99283db5-f71b-45e0-aded-a3dd3b44bfbd\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/40e55ba9635b9ba5cff8a0596bae7f321966da44279caae236cc97eb1afc43c5/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.273572 master-0 kubenswrapper[30278]: I0318 18:17:28.266115 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbc42adf-4d99-42bb-b262-0f4163e358b8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.274541 master-0 kubenswrapper[30278]: I0318 18:17:28.274424 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbc42adf-4d99-42bb-b262-0f4163e358b8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.275817 master-0 kubenswrapper[30278]: I0318 18:17:28.275799 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc42adf-4d99-42bb-b262-0f4163e358b8-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.287301 master-0 kubenswrapper[30278]: I0318 18:17:28.287264 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j87g9\" (UniqueName: \"kubernetes.io/projected/cbc42adf-4d99-42bb-b262-0f4163e358b8-kube-api-access-j87g9\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:28.695035 master-0 kubenswrapper[30278]: I0318 18:17:28.694969 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8e8216a7-c67e-4791-8f0f-f50de466fb2f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3baf9ff5-27d8-42a8-be3b-a753bfcf8d29\") pod \"ovsdbserver-nb-0\" (UID: \"4047014a-de6e-447d-983b-973a84e7478b\") " pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:28.761718 master-0 kubenswrapper[30278]: I0318 18:17:28.761614 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 18 18:17:30.015637 master-0 kubenswrapper[30278]: I0318 18:17:30.015496 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a9585120-5867-4805-bd6b-205ce19607bb\" (UniqueName: \"kubernetes.io/csi/topolvm.io^99283db5-f71b-45e0-aded-a3dd3b44bfbd\") pod \"ovsdbserver-sb-0\" (UID: \"cbc42adf-4d99-42bb-b262-0f4163e358b8\") " pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:30.112389 master-0 kubenswrapper[30278]: I0318 18:17:30.112320 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 18 18:17:41.883172 master-0 kubenswrapper[30278]: I0318 18:17:41.883007 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 18 18:17:42.262656 master-0 kubenswrapper[30278]: I0318 18:17:42.262607 30278 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 18:17:42.351773 master-0 kubenswrapper[30278]: I0318 18:17:42.351690 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d859fb5df-r468z" event={"ID":"86671053-9c92-43bd-b6e2-3655bc6d3e3f","Type":"ContainerStarted","Data":"9042ac6c67965b227abf2b737d59144bcf0eef33e0953a68b38e78a83fd310d2"} Mar 18 18:17:42.351999 master-0 kubenswrapper[30278]: I0318 18:17:42.351768 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d859fb5df-r468z" podUID="86671053-9c92-43bd-b6e2-3655bc6d3e3f" containerName="init" containerID="cri-o://9042ac6c67965b227abf2b737d59144bcf0eef33e0953a68b38e78a83fd310d2" gracePeriod=10 Mar 18 18:17:42.353191 master-0 kubenswrapper[30278]: I0318 18:17:42.353124 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"df68dba7-dacb-48bb-9433-12ad79aba028","Type":"ContainerStarted","Data":"fa9d30d787091e0a8a13a26ffa0d93355f5c0c76df772a8eee5e1059aef60e3d"} Mar 18 18:17:42.550193 master-0 kubenswrapper[30278]: I0318 18:17:42.550131 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xntzs"] Mar 18 18:17:42.556542 master-0 kubenswrapper[30278]: I0318 18:17:42.556472 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 18 18:17:42.611309 master-0 kubenswrapper[30278]: W0318 18:17:42.610769 30278 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode01e85f2_9a8b_4862_ad33_959e38bfbc7c.slice/crio-ae35eb8e4e9a1e993ec0757c9ae5a5ad6bf6c083e343b6084c553a55161f150b WatchSource:0}: Error finding container ae35eb8e4e9a1e993ec0757c9ae5a5ad6bf6c083e343b6084c553a55161f150b: Status 404 returned error can't find the container with id ae35eb8e4e9a1e993ec0757c9ae5a5ad6bf6c083e343b6084c553a55161f150b Mar 18 18:17:42.743865 master-0 kubenswrapper[30278]: I0318 18:17:42.743803 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 18 18:17:43.388455 master-0 kubenswrapper[30278]: I0318 18:17:43.385967 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xntzs" event={"ID":"e01e85f2-9a8b-4862-ad33-959e38bfbc7c","Type":"ContainerStarted","Data":"ae35eb8e4e9a1e993ec0757c9ae5a5ad6bf6c083e343b6084c553a55161f150b"} Mar 18 18:17:43.389090 master-0 kubenswrapper[30278]: I0318 18:17:43.388524 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7cbbe035-fa50-48c9-84ca-845e93085070","Type":"ContainerStarted","Data":"297490fae0921f8cc30578bed048ff3b7c712dcf04cb5fa852fc3cb303bc611a"} Mar 18 18:17:43.392885 master-0 kubenswrapper[30278]: I0318 18:17:43.391101 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 18 18:17:43.396525 master-0 kubenswrapper[30278]: I0318 18:17:43.396257 30278 generic.go:334] "Generic (PLEG): container finished" podID="de6412a1-7511-4f9b-a1e6-bb1735327597" containerID="bca7d601b4d7c54916bba98d103405666aa057525d910d1df044ab0c8c2d3746" exitCode=0 Mar 18 18:17:43.396525 master-0 kubenswrapper[30278]: I0318 18:17:43.396482 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55994974c5-l544m" event={"ID":"de6412a1-7511-4f9b-a1e6-bb1735327597","Type":"ContainerDied","Data":"bca7d601b4d7c54916bba98d103405666aa057525d910d1df044ab0c8c2d3746"} Mar 18 
18:17:43.399863 master-0 kubenswrapper[30278]: I0318 18:17:43.399814 30278 generic.go:334] "Generic (PLEG): container finished" podID="2a622380-55da-4d69-a65a-5db6c07eb3d7" containerID="af4c4c967d7c2e202a859d7ecff1a53ec7a2db913da686c6826ab91764856c68" exitCode=0 Mar 18 18:17:43.399941 master-0 kubenswrapper[30278]: I0318 18:17:43.399903 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" event={"ID":"2a622380-55da-4d69-a65a-5db6c07eb3d7","Type":"ContainerDied","Data":"af4c4c967d7c2e202a859d7ecff1a53ec7a2db913da686c6826ab91764856c68"} Mar 18 18:17:43.402471 master-0 kubenswrapper[30278]: I0318 18:17:43.402437 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1ec57481-0836-4458-a2bc-e7ce64175f3a","Type":"ContainerStarted","Data":"daaa66037ae3478bcfa66e6123182244bb03001f9a25c053ce7328f718ba2125"} Mar 18 18:17:43.406425 master-0 kubenswrapper[30278]: I0318 18:17:43.405102 30278 generic.go:334] "Generic (PLEG): container finished" podID="86671053-9c92-43bd-b6e2-3655bc6d3e3f" containerID="9042ac6c67965b227abf2b737d59144bcf0eef33e0953a68b38e78a83fd310d2" exitCode=0 Mar 18 18:17:43.406425 master-0 kubenswrapper[30278]: I0318 18:17:43.405157 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d859fb5df-r468z" event={"ID":"86671053-9c92-43bd-b6e2-3655bc6d3e3f","Type":"ContainerDied","Data":"9042ac6c67965b227abf2b737d59144bcf0eef33e0953a68b38e78a83fd310d2"} Mar 18 18:17:43.406425 master-0 kubenswrapper[30278]: I0318 18:17:43.406324 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3a06b9e0-a605-44e2-b6e2-63b15a5bb700","Type":"ContainerStarted","Data":"d66c06b4e92bc4dde926efc87e8da38372d0db5d287ee2bc5a1266d1791332e2"} Mar 18 18:17:43.409629 master-0 kubenswrapper[30278]: I0318 18:17:43.408011 30278 generic.go:334] "Generic (PLEG): container finished" 
podID="b558c2d8-aed9-4381-9a37-c753f736e7f2" containerID="9af7957f55fdef8c5432d1bd3562795df453b96d233be52398beb8a10d026b78" exitCode=0 Mar 18 18:17:43.409629 master-0 kubenswrapper[30278]: I0318 18:17:43.408048 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" event={"ID":"b558c2d8-aed9-4381-9a37-c753f736e7f2","Type":"ContainerDied","Data":"9af7957f55fdef8c5432d1bd3562795df453b96d233be52398beb8a10d026b78"} Mar 18 18:17:43.494542 master-0 kubenswrapper[30278]: I0318 18:17:43.493559 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d859fb5df-r468z" Mar 18 18:17:43.530256 master-0 kubenswrapper[30278]: I0318 18:17:43.529943 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.600576927 podStartE2EDuration="24.529911171s" podCreationTimestamp="2026-03-18 18:17:19 +0000 UTC" firstStartedPulling="2026-03-18 18:17:20.778985363 +0000 UTC m=+1009.946169958" lastFinishedPulling="2026-03-18 18:17:41.708319607 +0000 UTC m=+1030.875504202" observedRunningTime="2026-03-18 18:17:43.52282735 +0000 UTC m=+1032.690011935" watchObservedRunningTime="2026-03-18 18:17:43.529911171 +0000 UTC m=+1032.697095776" Mar 18 18:17:43.611684 master-0 kubenswrapper[30278]: I0318 18:17:43.611609 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86671053-9c92-43bd-b6e2-3655bc6d3e3f-config\") pod \"86671053-9c92-43bd-b6e2-3655bc6d3e3f\" (UID: \"86671053-9c92-43bd-b6e2-3655bc6d3e3f\") " Mar 18 18:17:43.611865 master-0 kubenswrapper[30278]: I0318 18:17:43.611809 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86671053-9c92-43bd-b6e2-3655bc6d3e3f-dns-svc\") pod \"86671053-9c92-43bd-b6e2-3655bc6d3e3f\" (UID: \"86671053-9c92-43bd-b6e2-3655bc6d3e3f\") " Mar 
18 18:17:43.611865 master-0 kubenswrapper[30278]: I0318 18:17:43.611860 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwvwm\" (UniqueName: \"kubernetes.io/projected/86671053-9c92-43bd-b6e2-3655bc6d3e3f-kube-api-access-dwvwm\") pod \"86671053-9c92-43bd-b6e2-3655bc6d3e3f\" (UID: \"86671053-9c92-43bd-b6e2-3655bc6d3e3f\") " Mar 18 18:17:43.619084 master-0 kubenswrapper[30278]: I0318 18:17:43.618913 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86671053-9c92-43bd-b6e2-3655bc6d3e3f-kube-api-access-dwvwm" (OuterVolumeSpecName: "kube-api-access-dwvwm") pod "86671053-9c92-43bd-b6e2-3655bc6d3e3f" (UID: "86671053-9c92-43bd-b6e2-3655bc6d3e3f"). InnerVolumeSpecName "kube-api-access-dwvwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:17:43.666552 master-0 kubenswrapper[30278]: I0318 18:17:43.666436 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9qq6l"] Mar 18 18:17:43.689362 master-0 kubenswrapper[30278]: I0318 18:17:43.689264 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86671053-9c92-43bd-b6e2-3655bc6d3e3f-config" (OuterVolumeSpecName: "config") pod "86671053-9c92-43bd-b6e2-3655bc6d3e3f" (UID: "86671053-9c92-43bd-b6e2-3655bc6d3e3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:17:43.716833 master-0 kubenswrapper[30278]: I0318 18:17:43.716775 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwvwm\" (UniqueName: \"kubernetes.io/projected/86671053-9c92-43bd-b6e2-3655bc6d3e3f-kube-api-access-dwvwm\") on node \"master-0\" DevicePath \"\"" Mar 18 18:17:43.716976 master-0 kubenswrapper[30278]: I0318 18:17:43.716873 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86671053-9c92-43bd-b6e2-3655bc6d3e3f-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:17:43.759747 master-0 kubenswrapper[30278]: I0318 18:17:43.753589 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86671053-9c92-43bd-b6e2-3655bc6d3e3f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "86671053-9c92-43bd-b6e2-3655bc6d3e3f" (UID: "86671053-9c92-43bd-b6e2-3655bc6d3e3f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:17:43.777106 master-0 kubenswrapper[30278]: E0318 18:17:43.777031 30278 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Mar 18 18:17:43.777106 master-0 kubenswrapper[30278]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/b558c2d8-aed9-4381-9a37-c753f736e7f2/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 18 18:17:43.777106 master-0 kubenswrapper[30278]: > podSandboxID="0a8362c1e41667b292febc047a750f33f858d4c7a34ebee20941e6d523802ffd" Mar 18 18:17:43.778184 master-0 kubenswrapper[30278]: E0318 18:17:43.777930 30278 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 18 18:17:43.778184 master-0 kubenswrapper[30278]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbchf8h696h5ffh5cdh585hc5hbfh597h58dhfh554h67bh9bh5c9hfch7dh5fbhbbh567h78h669hf8h65dh55dh588h5ddh88h694h669h95h8q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lrhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000800000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6877bbfb4f-tg9rw_openstack(b558c2d8-aed9-4381-9a37-c753f736e7f2): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/b558c2d8-aed9-4381-9a37-c753f736e7f2/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 18 18:17:43.778184 master-0 kubenswrapper[30278]: > logger="UnhandledError" Mar 18 18:17:43.779414 master-0 kubenswrapper[30278]: E0318 18:17:43.779216 30278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/b558c2d8-aed9-4381-9a37-c753f736e7f2/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" podUID="b558c2d8-aed9-4381-9a37-c753f736e7f2" Mar 18 18:17:43.819176 master-0 kubenswrapper[30278]: I0318 18:17:43.819084 30278 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86671053-9c92-43bd-b6e2-3655bc6d3e3f-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 18:17:43.947393 master-0 
kubenswrapper[30278]: I0318 18:17:43.947333 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55994974c5-l544m" Mar 18 18:17:44.023307 master-0 kubenswrapper[30278]: I0318 18:17:44.023207 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de6412a1-7511-4f9b-a1e6-bb1735327597-config\") pod \"de6412a1-7511-4f9b-a1e6-bb1735327597\" (UID: \"de6412a1-7511-4f9b-a1e6-bb1735327597\") " Mar 18 18:17:44.023630 master-0 kubenswrapper[30278]: I0318 18:17:44.023441 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qsp8\" (UniqueName: \"kubernetes.io/projected/de6412a1-7511-4f9b-a1e6-bb1735327597-kube-api-access-4qsp8\") pod \"de6412a1-7511-4f9b-a1e6-bb1735327597\" (UID: \"de6412a1-7511-4f9b-a1e6-bb1735327597\") " Mar 18 18:17:44.039413 master-0 kubenswrapper[30278]: I0318 18:17:44.038944 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de6412a1-7511-4f9b-a1e6-bb1735327597-kube-api-access-4qsp8" (OuterVolumeSpecName: "kube-api-access-4qsp8") pod "de6412a1-7511-4f9b-a1e6-bb1735327597" (UID: "de6412a1-7511-4f9b-a1e6-bb1735327597"). InnerVolumeSpecName "kube-api-access-4qsp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:17:44.069612 master-0 kubenswrapper[30278]: I0318 18:17:44.069490 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de6412a1-7511-4f9b-a1e6-bb1735327597-config" (OuterVolumeSpecName: "config") pod "de6412a1-7511-4f9b-a1e6-bb1735327597" (UID: "de6412a1-7511-4f9b-a1e6-bb1735327597"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:17:44.128509 master-0 kubenswrapper[30278]: I0318 18:17:44.128451 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de6412a1-7511-4f9b-a1e6-bb1735327597-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:17:44.128509 master-0 kubenswrapper[30278]: I0318 18:17:44.128491 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qsp8\" (UniqueName: \"kubernetes.io/projected/de6412a1-7511-4f9b-a1e6-bb1735327597-kube-api-access-4qsp8\") on node \"master-0\" DevicePath \"\"" Mar 18 18:17:44.444978 master-0 kubenswrapper[30278]: I0318 18:17:44.444913 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" event={"ID":"2a622380-55da-4d69-a65a-5db6c07eb3d7","Type":"ContainerStarted","Data":"37c4816589c19e349c6863522e5ddc32a57f3b909d29f52a729a36aaa52d32ff"} Mar 18 18:17:44.445755 master-0 kubenswrapper[30278]: I0318 18:17:44.445741 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" Mar 18 18:17:44.464686 master-0 kubenswrapper[30278]: I0318 18:17:44.464559 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 18 18:17:44.484887 master-0 kubenswrapper[30278]: I0318 18:17:44.482846 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1ec57481-0836-4458-a2bc-e7ce64175f3a","Type":"ContainerStarted","Data":"19ceac3a5fb8ef40f4559bca65c3c8c753c4bfbdc4f9b3a812b1fdc6b37720ae"} Mar 18 18:17:44.488181 master-0 kubenswrapper[30278]: I0318 18:17:44.487006 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d859fb5df-r468z" event={"ID":"86671053-9c92-43bd-b6e2-3655bc6d3e3f","Type":"ContainerDied","Data":"cd61cd5e180010d2a4daabae171646f4c170c86823c64a2eabbe1d7bd2f7679b"} Mar 18 18:17:44.488181 master-0 
kubenswrapper[30278]: I0318 18:17:44.487085 30278 scope.go:117] "RemoveContainer" containerID="9042ac6c67965b227abf2b737d59144bcf0eef33e0953a68b38e78a83fd310d2" Mar 18 18:17:44.488181 master-0 kubenswrapper[30278]: I0318 18:17:44.487247 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d859fb5df-r468z" Mar 18 18:17:44.492495 master-0 kubenswrapper[30278]: I0318 18:17:44.491138 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9qq6l" event={"ID":"bb722697-8531-46a1-a93f-babc070522f4","Type":"ContainerStarted","Data":"264eb6d947624940e65001e2cbc9aa060f718a315b3807358f5f9460457e9ea2"} Mar 18 18:17:44.496128 master-0 kubenswrapper[30278]: I0318 18:17:44.496025 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a24f1688-7c02-4ac5-af8a-0a5c3847755a","Type":"ContainerStarted","Data":"44bbfca57ee3a298dae1a41b1a5d4d8cd7f6849b52b08d9e01ba160c423b8200"} Mar 18 18:17:44.502507 master-0 kubenswrapper[30278]: I0318 18:17:44.502470 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55994974c5-l544m" Mar 18 18:17:44.514219 master-0 kubenswrapper[30278]: I0318 18:17:44.504385 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55994974c5-l544m" event={"ID":"de6412a1-7511-4f9b-a1e6-bb1735327597","Type":"ContainerDied","Data":"46387d93a78c255a10532a6accc4c949e405452319187f74ea1d7795eafe455e"} Mar 18 18:17:44.517872 master-0 kubenswrapper[30278]: I0318 18:17:44.517808 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" podStartSLOduration=4.539779412 podStartE2EDuration="30.517789949s" podCreationTimestamp="2026-03-18 18:17:14 +0000 UTC" firstStartedPulling="2026-03-18 18:17:15.941825487 +0000 UTC m=+1005.109010082" lastFinishedPulling="2026-03-18 18:17:41.919836024 +0000 UTC m=+1031.087020619" observedRunningTime="2026-03-18 18:17:44.515073076 +0000 UTC m=+1033.682257671" watchObservedRunningTime="2026-03-18 18:17:44.517789949 +0000 UTC m=+1033.684974544" Mar 18 18:17:44.581130 master-0 kubenswrapper[30278]: I0318 18:17:44.580743 30278 scope.go:117] "RemoveContainer" containerID="bca7d601b4d7c54916bba98d103405666aa057525d910d1df044ab0c8c2d3746" Mar 18 18:17:44.788206 master-0 kubenswrapper[30278]: I0318 18:17:44.788146 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d859fb5df-r468z"] Mar 18 18:17:44.803769 master-0 kubenswrapper[30278]: I0318 18:17:44.803686 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d859fb5df-r468z"] Mar 18 18:17:44.854650 master-0 kubenswrapper[30278]: I0318 18:17:44.854600 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55994974c5-l544m"] Mar 18 18:17:44.865387 master-0 kubenswrapper[30278]: I0318 18:17:44.865311 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55994974c5-l544m"] Mar 18 18:17:45.077833 master-0 kubenswrapper[30278]: 
I0318 18:17:45.077747 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86671053-9c92-43bd-b6e2-3655bc6d3e3f" path="/var/lib/kubelet/pods/86671053-9c92-43bd-b6e2-3655bc6d3e3f/volumes" Mar 18 18:17:45.078788 master-0 kubenswrapper[30278]: I0318 18:17:45.078765 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de6412a1-7511-4f9b-a1e6-bb1735327597" path="/var/lib/kubelet/pods/de6412a1-7511-4f9b-a1e6-bb1735327597/volumes" Mar 18 18:17:45.215938 master-0 kubenswrapper[30278]: I0318 18:17:45.215851 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 18 18:17:45.516062 master-0 kubenswrapper[30278]: I0318 18:17:45.515866 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4047014a-de6e-447d-983b-973a84e7478b","Type":"ContainerStarted","Data":"bc16cff326b2e9a1570cc03c85392a8d543089f2637536e1e13105115f0b02df"} Mar 18 18:17:45.518782 master-0 kubenswrapper[30278]: I0318 18:17:45.518719 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"cbc42adf-4d99-42bb-b262-0f4163e358b8","Type":"ContainerStarted","Data":"610001a77d8f434068bae85459fc2646309ab4528413f1798c02143a79214fcc"} Mar 18 18:17:45.526658 master-0 kubenswrapper[30278]: I0318 18:17:45.526564 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" event={"ID":"b558c2d8-aed9-4381-9a37-c753f736e7f2","Type":"ContainerStarted","Data":"001f46a4ee094afca4ae3cd2910558d34083188c60a8bd9c1a047eafc77e0feb"} Mar 18 18:17:45.574958 master-0 kubenswrapper[30278]: I0318 18:17:45.574552 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" podStartSLOduration=4.454974836 podStartE2EDuration="31.574525951s" podCreationTimestamp="2026-03-18 18:17:14 +0000 UTC" firstStartedPulling="2026-03-18 18:17:15.389035157 +0000 UTC m=+1004.556219752" 
lastFinishedPulling="2026-03-18 18:17:42.508586282 +0000 UTC m=+1031.675770867" observedRunningTime="2026-03-18 18:17:45.551469141 +0000 UTC m=+1034.718653736" watchObservedRunningTime="2026-03-18 18:17:45.574525951 +0000 UTC m=+1034.741710546" Mar 18 18:17:49.523172 master-0 kubenswrapper[30278]: I0318 18:17:49.523090 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" Mar 18 18:17:49.524913 master-0 kubenswrapper[30278]: I0318 18:17:49.524877 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" Mar 18 18:17:49.606361 master-0 kubenswrapper[30278]: I0318 18:17:49.605486 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 18 18:17:50.264008 master-0 kubenswrapper[30278]: I0318 18:17:50.263779 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" Mar 18 18:17:50.403923 master-0 kubenswrapper[30278]: I0318 18:17:50.403838 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6877bbfb4f-tg9rw"] Mar 18 18:17:50.639644 master-0 kubenswrapper[30278]: I0318 18:17:50.639553 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" podUID="b558c2d8-aed9-4381-9a37-c753f736e7f2" containerName="dnsmasq-dns" containerID="cri-o://001f46a4ee094afca4ae3cd2910558d34083188c60a8bd9c1a047eafc77e0feb" gracePeriod=10 Mar 18 18:17:51.308157 master-0 kubenswrapper[30278]: I0318 18:17:51.302500 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-998757459-j6h5k"] Mar 18 18:17:51.308157 master-0 kubenswrapper[30278]: E0318 18:17:51.303130 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86671053-9c92-43bd-b6e2-3655bc6d3e3f" containerName="init" Mar 18 18:17:51.308157 master-0 kubenswrapper[30278]: I0318 
18:17:51.303146 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="86671053-9c92-43bd-b6e2-3655bc6d3e3f" containerName="init" Mar 18 18:17:51.308157 master-0 kubenswrapper[30278]: E0318 18:17:51.303165 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6412a1-7511-4f9b-a1e6-bb1735327597" containerName="init" Mar 18 18:17:51.308157 master-0 kubenswrapper[30278]: I0318 18:17:51.303172 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6412a1-7511-4f9b-a1e6-bb1735327597" containerName="init" Mar 18 18:17:51.308157 master-0 kubenswrapper[30278]: I0318 18:17:51.303448 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="de6412a1-7511-4f9b-a1e6-bb1735327597" containerName="init" Mar 18 18:17:51.308157 master-0 kubenswrapper[30278]: I0318 18:17:51.303492 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="86671053-9c92-43bd-b6e2-3655bc6d3e3f" containerName="init" Mar 18 18:17:51.308157 master-0 kubenswrapper[30278]: I0318 18:17:51.304749 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-998757459-j6h5k" Mar 18 18:17:51.354352 master-0 kubenswrapper[30278]: I0318 18:17:51.349416 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-998757459-j6h5k"] Mar 18 18:17:51.398920 master-0 kubenswrapper[30278]: I0318 18:17:51.398845 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/845ae1c5-4eca-424e-bca5-94dafe5d0407-dns-svc\") pod \"dnsmasq-dns-998757459-j6h5k\" (UID: \"845ae1c5-4eca-424e-bca5-94dafe5d0407\") " pod="openstack/dnsmasq-dns-998757459-j6h5k" Mar 18 18:17:51.398920 master-0 kubenswrapper[30278]: I0318 18:17:51.398911 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/845ae1c5-4eca-424e-bca5-94dafe5d0407-config\") pod \"dnsmasq-dns-998757459-j6h5k\" (UID: \"845ae1c5-4eca-424e-bca5-94dafe5d0407\") " pod="openstack/dnsmasq-dns-998757459-j6h5k" Mar 18 18:17:51.399268 master-0 kubenswrapper[30278]: I0318 18:17:51.398956 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr69v\" (UniqueName: \"kubernetes.io/projected/845ae1c5-4eca-424e-bca5-94dafe5d0407-kube-api-access-lr69v\") pod \"dnsmasq-dns-998757459-j6h5k\" (UID: \"845ae1c5-4eca-424e-bca5-94dafe5d0407\") " pod="openstack/dnsmasq-dns-998757459-j6h5k" Mar 18 18:17:51.501927 master-0 kubenswrapper[30278]: I0318 18:17:51.501761 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/845ae1c5-4eca-424e-bca5-94dafe5d0407-dns-svc\") pod \"dnsmasq-dns-998757459-j6h5k\" (UID: \"845ae1c5-4eca-424e-bca5-94dafe5d0407\") " pod="openstack/dnsmasq-dns-998757459-j6h5k" Mar 18 18:17:51.502172 master-0 kubenswrapper[30278]: I0318 18:17:51.502080 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/845ae1c5-4eca-424e-bca5-94dafe5d0407-config\") pod \"dnsmasq-dns-998757459-j6h5k\" (UID: \"845ae1c5-4eca-424e-bca5-94dafe5d0407\") " pod="openstack/dnsmasq-dns-998757459-j6h5k" Mar 18 18:17:51.502322 master-0 kubenswrapper[30278]: I0318 18:17:51.502301 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr69v\" (UniqueName: \"kubernetes.io/projected/845ae1c5-4eca-424e-bca5-94dafe5d0407-kube-api-access-lr69v\") pod \"dnsmasq-dns-998757459-j6h5k\" (UID: \"845ae1c5-4eca-424e-bca5-94dafe5d0407\") " pod="openstack/dnsmasq-dns-998757459-j6h5k" Mar 18 18:17:51.502900 master-0 kubenswrapper[30278]: I0318 18:17:51.502858 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/845ae1c5-4eca-424e-bca5-94dafe5d0407-dns-svc\") pod \"dnsmasq-dns-998757459-j6h5k\" (UID: \"845ae1c5-4eca-424e-bca5-94dafe5d0407\") " pod="openstack/dnsmasq-dns-998757459-j6h5k" Mar 18 18:17:51.504930 master-0 kubenswrapper[30278]: I0318 18:17:51.503685 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/845ae1c5-4eca-424e-bca5-94dafe5d0407-config\") pod \"dnsmasq-dns-998757459-j6h5k\" (UID: \"845ae1c5-4eca-424e-bca5-94dafe5d0407\") " pod="openstack/dnsmasq-dns-998757459-j6h5k" Mar 18 18:17:51.522743 master-0 kubenswrapper[30278]: I0318 18:17:51.522694 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr69v\" (UniqueName: \"kubernetes.io/projected/845ae1c5-4eca-424e-bca5-94dafe5d0407-kube-api-access-lr69v\") pod \"dnsmasq-dns-998757459-j6h5k\" (UID: \"845ae1c5-4eca-424e-bca5-94dafe5d0407\") " pod="openstack/dnsmasq-dns-998757459-j6h5k" Mar 18 18:17:51.657885 master-0 kubenswrapper[30278]: I0318 18:17:51.657468 30278 generic.go:334] "Generic (PLEG): container finished" 
podID="b558c2d8-aed9-4381-9a37-c753f736e7f2" containerID="001f46a4ee094afca4ae3cd2910558d34083188c60a8bd9c1a047eafc77e0feb" exitCode=0 Mar 18 18:17:51.657885 master-0 kubenswrapper[30278]: I0318 18:17:51.657531 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" event={"ID":"b558c2d8-aed9-4381-9a37-c753f736e7f2","Type":"ContainerDied","Data":"001f46a4ee094afca4ae3cd2910558d34083188c60a8bd9c1a047eafc77e0feb"} Mar 18 18:17:51.698167 master-0 kubenswrapper[30278]: I0318 18:17:51.697661 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-998757459-j6h5k" Mar 18 18:17:52.675930 master-0 kubenswrapper[30278]: I0318 18:17:52.675816 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" event={"ID":"b558c2d8-aed9-4381-9a37-c753f736e7f2","Type":"ContainerDied","Data":"0a8362c1e41667b292febc047a750f33f858d4c7a34ebee20941e6d523802ffd"} Mar 18 18:17:52.675930 master-0 kubenswrapper[30278]: I0318 18:17:52.675873 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a8362c1e41667b292febc047a750f33f858d4c7a34ebee20941e6d523802ffd" Mar 18 18:17:52.743915 master-0 kubenswrapper[30278]: I0318 18:17:52.724354 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" Mar 18 18:17:52.763898 master-0 kubenswrapper[30278]: I0318 18:17:52.763844 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b558c2d8-aed9-4381-9a37-c753f736e7f2-dns-svc\") pod \"b558c2d8-aed9-4381-9a37-c753f736e7f2\" (UID: \"b558c2d8-aed9-4381-9a37-c753f736e7f2\") " Mar 18 18:17:52.771427 master-0 kubenswrapper[30278]: I0318 18:17:52.770882 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b558c2d8-aed9-4381-9a37-c753f736e7f2-config\") pod \"b558c2d8-aed9-4381-9a37-c753f736e7f2\" (UID: \"b558c2d8-aed9-4381-9a37-c753f736e7f2\") " Mar 18 18:17:52.772902 master-0 kubenswrapper[30278]: I0318 18:17:52.772846 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lrhn\" (UniqueName: \"kubernetes.io/projected/b558c2d8-aed9-4381-9a37-c753f736e7f2-kube-api-access-9lrhn\") pod \"b558c2d8-aed9-4381-9a37-c753f736e7f2\" (UID: \"b558c2d8-aed9-4381-9a37-c753f736e7f2\") " Mar 18 18:17:52.839225 master-0 kubenswrapper[30278]: I0318 18:17:52.837444 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b558c2d8-aed9-4381-9a37-c753f736e7f2-kube-api-access-9lrhn" (OuterVolumeSpecName: "kube-api-access-9lrhn") pod "b558c2d8-aed9-4381-9a37-c753f736e7f2" (UID: "b558c2d8-aed9-4381-9a37-c753f736e7f2"). InnerVolumeSpecName "kube-api-access-9lrhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:17:52.892252 master-0 kubenswrapper[30278]: I0318 18:17:52.891806 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lrhn\" (UniqueName: \"kubernetes.io/projected/b558c2d8-aed9-4381-9a37-c753f736e7f2-kube-api-access-9lrhn\") on node \"master-0\" DevicePath \"\"" Mar 18 18:17:52.922542 master-0 kubenswrapper[30278]: I0318 18:17:52.922351 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-998757459-j6h5k"] Mar 18 18:17:53.136217 master-0 kubenswrapper[30278]: I0318 18:17:53.136153 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b558c2d8-aed9-4381-9a37-c753f736e7f2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b558c2d8-aed9-4381-9a37-c753f736e7f2" (UID: "b558c2d8-aed9-4381-9a37-c753f736e7f2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:17:53.201447 master-0 kubenswrapper[30278]: I0318 18:17:53.201407 30278 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b558c2d8-aed9-4381-9a37-c753f736e7f2-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 18:17:53.247299 master-0 kubenswrapper[30278]: I0318 18:17:53.246295 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 18 18:17:53.247299 master-0 kubenswrapper[30278]: E0318 18:17:53.246872 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b558c2d8-aed9-4381-9a37-c753f736e7f2" containerName="init" Mar 18 18:17:53.247299 master-0 kubenswrapper[30278]: I0318 18:17:53.246889 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="b558c2d8-aed9-4381-9a37-c753f736e7f2" containerName="init" Mar 18 18:17:53.247299 master-0 kubenswrapper[30278]: E0318 18:17:53.246906 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b558c2d8-aed9-4381-9a37-c753f736e7f2" 
containerName="dnsmasq-dns" Mar 18 18:17:53.247299 master-0 kubenswrapper[30278]: I0318 18:17:53.246913 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="b558c2d8-aed9-4381-9a37-c753f736e7f2" containerName="dnsmasq-dns" Mar 18 18:17:53.247299 master-0 kubenswrapper[30278]: I0318 18:17:53.247175 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="b558c2d8-aed9-4381-9a37-c753f736e7f2" containerName="dnsmasq-dns" Mar 18 18:17:53.261394 master-0 kubenswrapper[30278]: I0318 18:17:53.259775 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 18 18:17:53.270357 master-0 kubenswrapper[30278]: I0318 18:17:53.266691 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 18 18:17:53.270357 master-0 kubenswrapper[30278]: I0318 18:17:53.266921 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 18 18:17:53.270357 master-0 kubenswrapper[30278]: I0318 18:17:53.267058 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 18 18:17:53.284482 master-0 kubenswrapper[30278]: I0318 18:17:53.284102 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b558c2d8-aed9-4381-9a37-c753f736e7f2-config" (OuterVolumeSpecName: "config") pod "b558c2d8-aed9-4381-9a37-c753f736e7f2" (UID: "b558c2d8-aed9-4381-9a37-c753f736e7f2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:17:53.305140 master-0 kubenswrapper[30278]: I0318 18:17:53.303675 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff27830b-378b-4338-ac41-041a9d78ed62-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:53.305628 master-0 kubenswrapper[30278]: I0318 18:17:53.305607 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ff27830b-378b-4338-ac41-041a9d78ed62-cache\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:53.305930 master-0 kubenswrapper[30278]: I0318 18:17:53.305913 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:53.306147 master-0 kubenswrapper[30278]: I0318 18:17:53.306115 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ff27830b-378b-4338-ac41-041a9d78ed62-lock\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:53.306294 master-0 kubenswrapper[30278]: I0318 18:17:53.306254 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flbzm\" (UniqueName: \"kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-kube-api-access-flbzm\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " 
pod="openstack/swift-storage-0" Mar 18 18:17:53.306495 master-0 kubenswrapper[30278]: I0318 18:17:53.306474 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8fbeaec9-2106-4bb6-a352-cfa95008110d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9fba4734-e221-4edc-b2cd-c10e37525298\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:53.306798 master-0 kubenswrapper[30278]: I0318 18:17:53.306783 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b558c2d8-aed9-4381-9a37-c753f736e7f2-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:17:53.313387 master-0 kubenswrapper[30278]: I0318 18:17:53.313329 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 18 18:17:53.409071 master-0 kubenswrapper[30278]: I0318 18:17:53.408371 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff27830b-378b-4338-ac41-041a9d78ed62-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:53.409071 master-0 kubenswrapper[30278]: I0318 18:17:53.408441 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ff27830b-378b-4338-ac41-041a9d78ed62-cache\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:53.409071 master-0 kubenswrapper[30278]: I0318 18:17:53.408987 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 
18:17:53.409071 master-0 kubenswrapper[30278]: I0318 18:17:53.409038 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ff27830b-378b-4338-ac41-041a9d78ed62-lock\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:53.409071 master-0 kubenswrapper[30278]: I0318 18:17:53.409064 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flbzm\" (UniqueName: \"kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-kube-api-access-flbzm\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:53.410216 master-0 kubenswrapper[30278]: E0318 18:17:53.409793 30278 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 18 18:17:53.410216 master-0 kubenswrapper[30278]: E0318 18:17:53.409853 30278 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 18 18:17:53.410216 master-0 kubenswrapper[30278]: E0318 18:17:53.409939 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift podName:ff27830b-378b-4338-ac41-041a9d78ed62 nodeName:}" failed. No retries permitted until 2026-03-18 18:17:53.909914235 +0000 UTC m=+1043.077098820 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift") pod "swift-storage-0" (UID: "ff27830b-378b-4338-ac41-041a9d78ed62") : configmap "swift-ring-files" not found Mar 18 18:17:53.410216 master-0 kubenswrapper[30278]: I0318 18:17:53.409964 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ff27830b-378b-4338-ac41-041a9d78ed62-lock\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:53.410216 master-0 kubenswrapper[30278]: I0318 18:17:53.410152 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ff27830b-378b-4338-ac41-041a9d78ed62-cache\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:53.412891 master-0 kubenswrapper[30278]: I0318 18:17:53.412849 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff27830b-378b-4338-ac41-041a9d78ed62-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:54.036988 master-0 kubenswrapper[30278]: I0318 18:17:53.691011 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"cbc42adf-4d99-42bb-b262-0f4163e358b8","Type":"ContainerStarted","Data":"9a637d9f6f8086594661879b04b3aaa73a2f7947fa31070d0c36164e4d115935"} Mar 18 18:17:54.036988 master-0 kubenswrapper[30278]: I0318 18:17:53.692698 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3a06b9e0-a605-44e2-b6e2-63b15a5bb700","Type":"ContainerStarted","Data":"93e2b00dd98467e5666851d0325ac0c8623df6905802ef65ad490947ba2930c9"} Mar 18 18:17:54.036988 master-0 
kubenswrapper[30278]: I0318 18:17:53.696489 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"df68dba7-dacb-48bb-9433-12ad79aba028","Type":"ContainerStarted","Data":"4a751c06f600ff6b691a0eb4a0486fbf59eb5fb3adfff1ffa59f6a691b65dcf5"} Mar 18 18:17:54.036988 master-0 kubenswrapper[30278]: I0318 18:17:53.697958 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9qq6l" event={"ID":"bb722697-8531-46a1-a93f-babc070522f4","Type":"ContainerStarted","Data":"a1d8c4270a1bf70004c4c7f6ff9a3357198078738bde73c595fc67729504d2b9"} Mar 18 18:17:54.036988 master-0 kubenswrapper[30278]: I0318 18:17:53.700182 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-998757459-j6h5k" event={"ID":"845ae1c5-4eca-424e-bca5-94dafe5d0407","Type":"ContainerStarted","Data":"82bec908e2769f1dd571f7a27666fc622ee2ca20287497164ceab2fce821df09"} Mar 18 18:17:54.036988 master-0 kubenswrapper[30278]: I0318 18:17:53.700233 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6877bbfb4f-tg9rw" Mar 18 18:17:54.051319 master-0 kubenswrapper[30278]: I0318 18:17:54.050459 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:54.051319 master-0 kubenswrapper[30278]: E0318 18:17:54.050862 30278 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 18 18:17:54.051319 master-0 kubenswrapper[30278]: E0318 18:17:54.050894 30278 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 18 18:17:54.051319 master-0 kubenswrapper[30278]: E0318 18:17:54.050980 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift podName:ff27830b-378b-4338-ac41-041a9d78ed62 nodeName:}" failed. No retries permitted until 2026-03-18 18:17:55.050950841 +0000 UTC m=+1044.218135476 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift") pod "swift-storage-0" (UID: "ff27830b-378b-4338-ac41-041a9d78ed62") : configmap "swift-ring-files" not found Mar 18 18:17:54.115788 master-0 kubenswrapper[30278]: I0318 18:17:54.115719 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flbzm\" (UniqueName: \"kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-kube-api-access-flbzm\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:54.154976 master-0 kubenswrapper[30278]: I0318 18:17:54.154934 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8fbeaec9-2106-4bb6-a352-cfa95008110d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9fba4734-e221-4edc-b2cd-c10e37525298\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:54.161603 master-0 kubenswrapper[30278]: I0318 18:17:54.161236 30278 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 18:17:54.161603 master-0 kubenswrapper[30278]: I0318 18:17:54.161426 30278 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8fbeaec9-2106-4bb6-a352-cfa95008110d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9fba4734-e221-4edc-b2cd-c10e37525298\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/fc80343cb7bf012b61aeca24d76bbe7b8273e135b5b448b6d252b44c9fa53592/globalmount\"" pod="openstack/swift-storage-0" Mar 18 18:17:54.537023 master-0 kubenswrapper[30278]: I0318 18:17:54.534093 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-qsrjq"] Mar 18 18:17:54.537023 master-0 kubenswrapper[30278]: I0318 18:17:54.536140 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.556489 master-0 kubenswrapper[30278]: I0318 18:17:54.554098 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 18 18:17:54.575218 master-0 kubenswrapper[30278]: I0318 18:17:54.569405 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 18 18:17:54.575218 master-0 kubenswrapper[30278]: I0318 18:17:54.569722 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 18 18:17:54.646834 master-0 kubenswrapper[30278]: I0318 18:17:54.646564 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6877bbfb4f-tg9rw"] Mar 18 18:17:54.726298 master-0 kubenswrapper[30278]: I0318 18:17:54.721970 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b076dc06-c082-4a5e-a049-9f98858a80ff-scripts\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " 
pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.726298 master-0 kubenswrapper[30278]: I0318 18:17:54.722184 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-dispersionconf\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.726298 master-0 kubenswrapper[30278]: I0318 18:17:54.723869 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-combined-ca-bundle\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.726298 master-0 kubenswrapper[30278]: I0318 18:17:54.725182 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b076dc06-c082-4a5e-a049-9f98858a80ff-etc-swift\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.737995 master-0 kubenswrapper[30278]: I0318 18:17:54.728305 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b076dc06-c082-4a5e-a049-9f98858a80ff-ring-data-devices\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.737995 master-0 kubenswrapper[30278]: I0318 18:17:54.728393 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fgrg\" (UniqueName: 
\"kubernetes.io/projected/b076dc06-c082-4a5e-a049-9f98858a80ff-kube-api-access-5fgrg\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.737995 master-0 kubenswrapper[30278]: I0318 18:17:54.728438 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-swiftconf\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.737995 master-0 kubenswrapper[30278]: I0318 18:17:54.729688 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qsrjq"] Mar 18 18:17:54.759349 master-0 kubenswrapper[30278]: I0318 18:17:54.759255 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6877bbfb4f-tg9rw"] Mar 18 18:17:54.764392 master-0 kubenswrapper[30278]: I0318 18:17:54.764332 30278 generic.go:334] "Generic (PLEG): container finished" podID="845ae1c5-4eca-424e-bca5-94dafe5d0407" containerID="84ca01803a4660e271b507b465e03990950dc75b95fa15960a95f3ca378866c3" exitCode=0 Mar 18 18:17:54.770927 master-0 kubenswrapper[30278]: I0318 18:17:54.765095 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-998757459-j6h5k" event={"ID":"845ae1c5-4eca-424e-bca5-94dafe5d0407","Type":"ContainerDied","Data":"84ca01803a4660e271b507b465e03990950dc75b95fa15960a95f3ca378866c3"} Mar 18 18:17:54.781043 master-0 kubenswrapper[30278]: I0318 18:17:54.775106 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"4047014a-de6e-447d-983b-973a84e7478b","Type":"ContainerStarted","Data":"5b9656099b5a08b5b3b450f1b0e125191c5796faeff0ce6273d0b41faf77aed1"} Mar 18 18:17:54.798332 master-0 kubenswrapper[30278]: I0318 18:17:54.786714 30278 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-controller-xntzs" event={"ID":"e01e85f2-9a8b-4862-ad33-959e38bfbc7c","Type":"ContainerStarted","Data":"1159d7f980b4e542738ccb2e21aff581f2f119db213e64d2c85dd0a7d025173e"} Mar 18 18:17:54.798332 master-0 kubenswrapper[30278]: I0318 18:17:54.790154 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-xntzs" Mar 18 18:17:54.807348 master-0 kubenswrapper[30278]: I0318 18:17:54.803057 30278 generic.go:334] "Generic (PLEG): container finished" podID="bb722697-8531-46a1-a93f-babc070522f4" containerID="a1d8c4270a1bf70004c4c7f6ff9a3357198078738bde73c595fc67729504d2b9" exitCode=0 Mar 18 18:17:54.807348 master-0 kubenswrapper[30278]: I0318 18:17:54.804551 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9qq6l" event={"ID":"bb722697-8531-46a1-a93f-babc070522f4","Type":"ContainerDied","Data":"a1d8c4270a1bf70004c4c7f6ff9a3357198078738bde73c595fc67729504d2b9"} Mar 18 18:17:54.832429 master-0 kubenswrapper[30278]: I0318 18:17:54.832342 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b076dc06-c082-4a5e-a049-9f98858a80ff-ring-data-devices\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.832537 master-0 kubenswrapper[30278]: I0318 18:17:54.832435 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fgrg\" (UniqueName: \"kubernetes.io/projected/b076dc06-c082-4a5e-a049-9f98858a80ff-kube-api-access-5fgrg\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.832537 master-0 kubenswrapper[30278]: I0318 18:17:54.832460 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-swiftconf\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.832633 master-0 kubenswrapper[30278]: I0318 18:17:54.832537 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b076dc06-c082-4a5e-a049-9f98858a80ff-scripts\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.832633 master-0 kubenswrapper[30278]: I0318 18:17:54.832600 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-dispersionconf\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.832633 master-0 kubenswrapper[30278]: I0318 18:17:54.832621 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-combined-ca-bundle\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.832771 master-0 kubenswrapper[30278]: I0318 18:17:54.832668 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b076dc06-c082-4a5e-a049-9f98858a80ff-etc-swift\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.833570 master-0 kubenswrapper[30278]: I0318 18:17:54.833535 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/b076dc06-c082-4a5e-a049-9f98858a80ff-etc-swift\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.834546 master-0 kubenswrapper[30278]: I0318 18:17:54.834511 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b076dc06-c082-4a5e-a049-9f98858a80ff-ring-data-devices\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.843663 master-0 kubenswrapper[30278]: I0318 18:17:54.837444 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-swiftconf\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.843663 master-0 kubenswrapper[30278]: I0318 18:17:54.837924 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b076dc06-c082-4a5e-a049-9f98858a80ff-scripts\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.847258 master-0 kubenswrapper[30278]: I0318 18:17:54.847225 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-dispersionconf\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.856593 master-0 kubenswrapper[30278]: I0318 18:17:54.850500 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-combined-ca-bundle\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:54.864758 master-0 kubenswrapper[30278]: I0318 18:17:54.864252 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xntzs" podStartSLOduration=22.075037609 podStartE2EDuration="31.864233457s" podCreationTimestamp="2026-03-18 18:17:23 +0000 UTC" firstStartedPulling="2026-03-18 18:17:42.654341038 +0000 UTC m=+1031.821525633" lastFinishedPulling="2026-03-18 18:17:52.443536886 +0000 UTC m=+1041.610721481" observedRunningTime="2026-03-18 18:17:54.810604542 +0000 UTC m=+1043.977789137" watchObservedRunningTime="2026-03-18 18:17:54.864233457 +0000 UTC m=+1044.031418052" Mar 18 18:17:54.872151 master-0 kubenswrapper[30278]: I0318 18:17:54.871376 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fgrg\" (UniqueName: \"kubernetes.io/projected/b076dc06-c082-4a5e-a049-9f98858a80ff-kube-api-access-5fgrg\") pod \"swift-ring-rebalance-qsrjq\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") " pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:55.046060 master-0 kubenswrapper[30278]: I0318 18:17:55.042376 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:17:55.076086 master-0 kubenswrapper[30278]: I0318 18:17:55.076025 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b558c2d8-aed9-4381-9a37-c753f736e7f2" path="/var/lib/kubelet/pods/b558c2d8-aed9-4381-9a37-c753f736e7f2/volumes" Mar 18 18:17:55.142928 master-0 kubenswrapper[30278]: I0318 18:17:55.142861 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:55.145148 master-0 kubenswrapper[30278]: E0318 18:17:55.144837 30278 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 18 18:17:55.145148 master-0 kubenswrapper[30278]: E0318 18:17:55.144867 30278 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 18 18:17:55.145148 master-0 kubenswrapper[30278]: E0318 18:17:55.144911 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift podName:ff27830b-378b-4338-ac41-041a9d78ed62 nodeName:}" failed. No retries permitted until 2026-03-18 18:17:57.144895127 +0000 UTC m=+1046.312079722 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift") pod "swift-storage-0" (UID: "ff27830b-378b-4338-ac41-041a9d78ed62") : configmap "swift-ring-files" not found Mar 18 18:17:55.745289 master-0 kubenswrapper[30278]: I0318 18:17:55.744665 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qsrjq"] Mar 18 18:17:55.868388 master-0 kubenswrapper[30278]: I0318 18:17:55.868233 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9qq6l" event={"ID":"bb722697-8531-46a1-a93f-babc070522f4","Type":"ContainerStarted","Data":"50a247c36be88f31d405da9a8fc9284baf87725c821bdaa79870f3546e3347df"} Mar 18 18:17:55.878460 master-0 kubenswrapper[30278]: I0318 18:17:55.878409 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-998757459-j6h5k" event={"ID":"845ae1c5-4eca-424e-bca5-94dafe5d0407","Type":"ContainerStarted","Data":"7a87c0d7e6ebddf357e964e81963905b6ca0a7fd0f3262fc74e874e07dc22b6f"} Mar 18 18:17:55.878695 master-0 kubenswrapper[30278]: I0318 18:17:55.878655 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-998757459-j6h5k" Mar 18 18:17:55.885732 master-0 kubenswrapper[30278]: I0318 18:17:55.885680 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qsrjq" event={"ID":"b076dc06-c082-4a5e-a049-9f98858a80ff","Type":"ContainerStarted","Data":"08cdc1705ae6c3569dadbb795116e0dbab515d7ce663f82e9fd6dcfb6a1d9a5f"} Mar 18 18:17:55.911984 master-0 kubenswrapper[30278]: I0318 18:17:55.911306 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-998757459-j6h5k" podStartSLOduration=4.911268398 podStartE2EDuration="4.911268398s" podCreationTimestamp="2026-03-18 18:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:17:55.902355598 +0000 UTC m=+1045.069540193" watchObservedRunningTime="2026-03-18 18:17:55.911268398 +0000 UTC m=+1045.078452993" Mar 18 18:17:56.587324 master-0 kubenswrapper[30278]: I0318 18:17:56.587240 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8fbeaec9-2106-4bb6-a352-cfa95008110d\" (UniqueName: \"kubernetes.io/csi/topolvm.io^9fba4734-e221-4edc-b2cd-c10e37525298\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:56.901675 master-0 kubenswrapper[30278]: I0318 18:17:56.901565 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9qq6l" event={"ID":"bb722697-8531-46a1-a93f-babc070522f4","Type":"ContainerStarted","Data":"478eb1539e52e318611e93693456304b19e4ec992d5924eab2f29b7e0cda91a5"} Mar 18 18:17:56.934070 master-0 kubenswrapper[30278]: I0318 18:17:56.933953 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-9qq6l" podStartSLOduration=25.208919138 podStartE2EDuration="33.933928593s" podCreationTimestamp="2026-03-18 18:17:23 +0000 UTC" firstStartedPulling="2026-03-18 18:17:43.719497117 +0000 UTC m=+1032.886681712" lastFinishedPulling="2026-03-18 18:17:52.444506572 +0000 UTC m=+1041.611691167" observedRunningTime="2026-03-18 18:17:56.93382601 +0000 UTC m=+1046.101010645" watchObservedRunningTime="2026-03-18 18:17:56.933928593 +0000 UTC m=+1046.101113198" Mar 18 18:17:57.146608 master-0 kubenswrapper[30278]: I0318 18:17:57.146345 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:17:57.150625 master-0 kubenswrapper[30278]: E0318 18:17:57.149391 30278 projected.go:288] 
Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 18 18:17:57.150625 master-0 kubenswrapper[30278]: E0318 18:17:57.149435 30278 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 18 18:17:57.150625 master-0 kubenswrapper[30278]: E0318 18:17:57.149495 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift podName:ff27830b-378b-4338-ac41-041a9d78ed62 nodeName:}" failed. No retries permitted until 2026-03-18 18:18:01.149473708 +0000 UTC m=+1050.316658303 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift") pod "swift-storage-0" (UID: "ff27830b-378b-4338-ac41-041a9d78ed62") : configmap "swift-ring-files" not found Mar 18 18:17:57.913291 master-0 kubenswrapper[30278]: I0318 18:17:57.913230 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9qq6l" Mar 18 18:17:57.913291 master-0 kubenswrapper[30278]: I0318 18:17:57.913297 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9qq6l" Mar 18 18:18:00.970811 master-0 kubenswrapper[30278]: I0318 18:18:00.970124 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"df68dba7-dacb-48bb-9433-12ad79aba028","Type":"ContainerDied","Data":"4a751c06f600ff6b691a0eb4a0486fbf59eb5fb3adfff1ffa59f6a691b65dcf5"} Mar 18 18:18:00.970811 master-0 kubenswrapper[30278]: I0318 18:18:00.970169 30278 generic.go:334] "Generic (PLEG): container finished" podID="df68dba7-dacb-48bb-9433-12ad79aba028" containerID="4a751c06f600ff6b691a0eb4a0486fbf59eb5fb3adfff1ffa59f6a691b65dcf5" exitCode=0 Mar 18 18:18:00.978531 master-0 kubenswrapper[30278]: I0318 18:18:00.978240 
30278 generic.go:334] "Generic (PLEG): container finished" podID="3a06b9e0-a605-44e2-b6e2-63b15a5bb700" containerID="93e2b00dd98467e5666851d0325ac0c8623df6905802ef65ad490947ba2930c9" exitCode=0 Mar 18 18:18:00.978531 master-0 kubenswrapper[30278]: I0318 18:18:00.978317 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3a06b9e0-a605-44e2-b6e2-63b15a5bb700","Type":"ContainerDied","Data":"93e2b00dd98467e5666851d0325ac0c8623df6905802ef65ad490947ba2930c9"} Mar 18 18:18:01.168442 master-0 kubenswrapper[30278]: I0318 18:18:01.168236 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:18:01.168752 master-0 kubenswrapper[30278]: E0318 18:18:01.168716 30278 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 18 18:18:01.168752 master-0 kubenswrapper[30278]: E0318 18:18:01.168736 30278 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 18 18:18:01.168839 master-0 kubenswrapper[30278]: E0318 18:18:01.168812 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift podName:ff27830b-378b-4338-ac41-041a9d78ed62 nodeName:}" failed. No retries permitted until 2026-03-18 18:18:09.168790968 +0000 UTC m=+1058.335975563 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift") pod "swift-storage-0" (UID: "ff27830b-378b-4338-ac41-041a9d78ed62") : configmap "swift-ring-files" not found Mar 18 18:18:01.700567 master-0 kubenswrapper[30278]: I0318 18:18:01.700504 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-998757459-j6h5k" Mar 18 18:18:02.410769 master-0 kubenswrapper[30278]: I0318 18:18:02.410361 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f75dd7cd9-cwrjw"] Mar 18 18:18:02.410769 master-0 kubenswrapper[30278]: I0318 18:18:02.410682 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" podUID="2a622380-55da-4d69-a65a-5db6c07eb3d7" containerName="dnsmasq-dns" containerID="cri-o://37c4816589c19e349c6863522e5ddc32a57f3b909d29f52a729a36aaa52d32ff" gracePeriod=10 Mar 18 18:18:03.025721 master-0 kubenswrapper[30278]: I0318 18:18:03.025676 30278 generic.go:334] "Generic (PLEG): container finished" podID="2a622380-55da-4d69-a65a-5db6c07eb3d7" containerID="37c4816589c19e349c6863522e5ddc32a57f3b909d29f52a729a36aaa52d32ff" exitCode=0 Mar 18 18:18:03.026542 master-0 kubenswrapper[30278]: I0318 18:18:03.026464 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" event={"ID":"2a622380-55da-4d69-a65a-5db6c07eb3d7","Type":"ContainerDied","Data":"37c4816589c19e349c6863522e5ddc32a57f3b909d29f52a729a36aaa52d32ff"} Mar 18 18:18:03.032831 master-0 kubenswrapper[30278]: I0318 18:18:03.032769 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"df68dba7-dacb-48bb-9433-12ad79aba028","Type":"ContainerStarted","Data":"6fae5e8c5299299b6955a3e0eac03f5007ee7b63b9d53f22c91ca5e08d07c949"} Mar 18 18:18:03.670999 master-0 kubenswrapper[30278]: I0318 18:18:03.670950 30278 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" Mar 18 18:18:03.764628 master-0 kubenswrapper[30278]: I0318 18:18:03.763489 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a622380-55da-4d69-a65a-5db6c07eb3d7-config\") pod \"2a622380-55da-4d69-a65a-5db6c07eb3d7\" (UID: \"2a622380-55da-4d69-a65a-5db6c07eb3d7\") " Mar 18 18:18:03.764628 master-0 kubenswrapper[30278]: I0318 18:18:03.763771 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a622380-55da-4d69-a65a-5db6c07eb3d7-dns-svc\") pod \"2a622380-55da-4d69-a65a-5db6c07eb3d7\" (UID: \"2a622380-55da-4d69-a65a-5db6c07eb3d7\") " Mar 18 18:18:03.766416 master-0 kubenswrapper[30278]: I0318 18:18:03.764856 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldhnf\" (UniqueName: \"kubernetes.io/projected/2a622380-55da-4d69-a65a-5db6c07eb3d7-kube-api-access-ldhnf\") pod \"2a622380-55da-4d69-a65a-5db6c07eb3d7\" (UID: \"2a622380-55da-4d69-a65a-5db6c07eb3d7\") " Mar 18 18:18:03.771549 master-0 kubenswrapper[30278]: I0318 18:18:03.771489 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a622380-55da-4d69-a65a-5db6c07eb3d7-kube-api-access-ldhnf" (OuterVolumeSpecName: "kube-api-access-ldhnf") pod "2a622380-55da-4d69-a65a-5db6c07eb3d7" (UID: "2a622380-55da-4d69-a65a-5db6c07eb3d7"). InnerVolumeSpecName "kube-api-access-ldhnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:03.824187 master-0 kubenswrapper[30278]: I0318 18:18:03.824128 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a622380-55da-4d69-a65a-5db6c07eb3d7-config" (OuterVolumeSpecName: "config") pod "2a622380-55da-4d69-a65a-5db6c07eb3d7" (UID: "2a622380-55da-4d69-a65a-5db6c07eb3d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:03.839619 master-0 kubenswrapper[30278]: I0318 18:18:03.839553 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a622380-55da-4d69-a65a-5db6c07eb3d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a622380-55da-4d69-a65a-5db6c07eb3d7" (UID: "2a622380-55da-4d69-a65a-5db6c07eb3d7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:03.872000 master-0 kubenswrapper[30278]: I0318 18:18:03.869720 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a622380-55da-4d69-a65a-5db6c07eb3d7-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:03.872000 master-0 kubenswrapper[30278]: I0318 18:18:03.869765 30278 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a622380-55da-4d69-a65a-5db6c07eb3d7-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:03.872000 master-0 kubenswrapper[30278]: I0318 18:18:03.869781 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldhnf\" (UniqueName: \"kubernetes.io/projected/2a622380-55da-4d69-a65a-5db6c07eb3d7-kube-api-access-ldhnf\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:04.056767 master-0 kubenswrapper[30278]: I0318 18:18:04.056699 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"4047014a-de6e-447d-983b-973a84e7478b","Type":"ContainerStarted","Data":"f329484130e62b999a6157c13ba60dd7b41ac8512782ea98a65b366842799932"} Mar 18 18:18:04.059089 master-0 kubenswrapper[30278]: I0318 18:18:04.059039 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" Mar 18 18:18:04.059235 master-0 kubenswrapper[30278]: I0318 18:18:04.059039 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f75dd7cd9-cwrjw" event={"ID":"2a622380-55da-4d69-a65a-5db6c07eb3d7","Type":"ContainerDied","Data":"9c8c85e966a4ae85504ed577d9344e74d5710e8fac922e2b0697d449410aeeda"} Mar 18 18:18:04.059401 master-0 kubenswrapper[30278]: I0318 18:18:04.059368 30278 scope.go:117] "RemoveContainer" containerID="37c4816589c19e349c6863522e5ddc32a57f3b909d29f52a729a36aaa52d32ff" Mar 18 18:18:04.061982 master-0 kubenswrapper[30278]: I0318 18:18:04.061935 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"cbc42adf-4d99-42bb-b262-0f4163e358b8","Type":"ContainerStarted","Data":"29e0ca964cd3852c550b14e7b4471ab44bb5e0da216ec3f7fdee57f1e059f175"} Mar 18 18:18:04.064086 master-0 kubenswrapper[30278]: I0318 18:18:04.064040 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3a06b9e0-a605-44e2-b6e2-63b15a5bb700","Type":"ContainerStarted","Data":"306db4ef55669eb9ff2840849294a72a1ade5177cc7690f2301e56c8f46378ed"} Mar 18 18:18:04.067441 master-0 kubenswrapper[30278]: I0318 18:18:04.067370 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qsrjq" event={"ID":"b076dc06-c082-4a5e-a049-9f98858a80ff","Type":"ContainerStarted","Data":"7a5967b24efde10a1141ef7a4df7be6aa95755506c535b1fed44c00b193506f2"} Mar 18 18:18:04.085100 master-0 kubenswrapper[30278]: I0318 18:18:04.085050 30278 scope.go:117] "RemoveContainer" 
containerID="af4c4c967d7c2e202a859d7ecff1a53ec7a2db913da686c6826ab91764856c68" Mar 18 18:18:04.147392 master-0 kubenswrapper[30278]: I0318 18:18:04.143055 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=22.967729337 podStartE2EDuration="40.143027808s" podCreationTimestamp="2026-03-18 18:17:24 +0000 UTC" firstStartedPulling="2026-03-18 18:17:45.229120878 +0000 UTC m=+1034.396305473" lastFinishedPulling="2026-03-18 18:18:02.404419339 +0000 UTC m=+1051.571603944" observedRunningTime="2026-03-18 18:18:04.12532708 +0000 UTC m=+1053.292511695" watchObservedRunningTime="2026-03-18 18:18:04.143027808 +0000 UTC m=+1053.310212403" Mar 18 18:18:04.174712 master-0 kubenswrapper[30278]: I0318 18:18:04.174603 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-qsrjq" podStartSLOduration=2.400377272 podStartE2EDuration="10.174577227s" podCreationTimestamp="2026-03-18 18:17:54 +0000 UTC" firstStartedPulling="2026-03-18 18:17:55.784412271 +0000 UTC m=+1044.951596866" lastFinishedPulling="2026-03-18 18:18:03.558612186 +0000 UTC m=+1052.725796821" observedRunningTime="2026-03-18 18:18:04.145295579 +0000 UTC m=+1053.312480174" watchObservedRunningTime="2026-03-18 18:18:04.174577227 +0000 UTC m=+1053.341761822" Mar 18 18:18:04.184858 master-0 kubenswrapper[30278]: I0318 18:18:04.184760 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=36.953471416 podStartE2EDuration="47.184738321s" podCreationTimestamp="2026-03-18 18:17:17 +0000 UTC" firstStartedPulling="2026-03-18 18:17:42.262106543 +0000 UTC m=+1031.429291138" lastFinishedPulling="2026-03-18 18:17:52.493373458 +0000 UTC m=+1041.660558043" observedRunningTime="2026-03-18 18:18:04.166663574 +0000 UTC m=+1053.333848189" watchObservedRunningTime="2026-03-18 18:18:04.184738321 +0000 UTC m=+1053.351922926" Mar 18 
18:18:04.201099 master-0 kubenswrapper[30278]: I0318 18:18:04.200471 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f75dd7cd9-cwrjw"] Mar 18 18:18:04.213446 master-0 kubenswrapper[30278]: I0318 18:18:04.213343 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f75dd7cd9-cwrjw"] Mar 18 18:18:04.216912 master-0 kubenswrapper[30278]: I0318 18:18:04.215480 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=21.288013081 podStartE2EDuration="39.215453219s" podCreationTimestamp="2026-03-18 18:17:25 +0000 UTC" firstStartedPulling="2026-03-18 18:17:44.493900646 +0000 UTC m=+1033.661085241" lastFinishedPulling="2026-03-18 18:18:02.421340784 +0000 UTC m=+1051.588525379" observedRunningTime="2026-03-18 18:18:04.214566374 +0000 UTC m=+1053.381750979" watchObservedRunningTime="2026-03-18 18:18:04.215453219 +0000 UTC m=+1053.382637814" Mar 18 18:18:04.255614 master-0 kubenswrapper[30278]: I0318 18:18:04.255492 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=39.515520015 podStartE2EDuration="49.255454816s" podCreationTimestamp="2026-03-18 18:17:15 +0000 UTC" firstStartedPulling="2026-03-18 18:17:42.706711369 +0000 UTC m=+1031.873895964" lastFinishedPulling="2026-03-18 18:17:52.44664617 +0000 UTC m=+1041.613830765" observedRunningTime="2026-03-18 18:18:04.237975975 +0000 UTC m=+1053.405160570" watchObservedRunningTime="2026-03-18 18:18:04.255454816 +0000 UTC m=+1053.422639421" Mar 18 18:18:04.478358 master-0 kubenswrapper[30278]: I0318 18:18:04.478145 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 18 18:18:04.479864 master-0 kubenswrapper[30278]: I0318 18:18:04.479786 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 18 
18:18:04.762148 master-0 kubenswrapper[30278]: I0318 18:18:04.762007 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 18 18:18:04.809753 master-0 kubenswrapper[30278]: I0318 18:18:04.807137 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 18 18:18:05.068984 master-0 kubenswrapper[30278]: I0318 18:18:05.068918 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a622380-55da-4d69-a65a-5db6c07eb3d7" path="/var/lib/kubelet/pods/2a622380-55da-4d69-a65a-5db6c07eb3d7/volumes" Mar 18 18:18:05.083536 master-0 kubenswrapper[30278]: I0318 18:18:05.083472 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 18 18:18:05.114133 master-0 kubenswrapper[30278]: I0318 18:18:05.113974 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 18 18:18:05.124583 master-0 kubenswrapper[30278]: I0318 18:18:05.124526 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 18 18:18:05.483742 master-0 kubenswrapper[30278]: I0318 18:18:05.481628 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764dfbc96f-87qgh"] Mar 18 18:18:05.483742 master-0 kubenswrapper[30278]: E0318 18:18:05.482294 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a622380-55da-4d69-a65a-5db6c07eb3d7" containerName="dnsmasq-dns" Mar 18 18:18:05.483742 master-0 kubenswrapper[30278]: I0318 18:18:05.482309 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a622380-55da-4d69-a65a-5db6c07eb3d7" containerName="dnsmasq-dns" Mar 18 18:18:05.483742 master-0 kubenswrapper[30278]: E0318 18:18:05.482330 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a622380-55da-4d69-a65a-5db6c07eb3d7" containerName="init" Mar 18 18:18:05.483742 
master-0 kubenswrapper[30278]: I0318 18:18:05.482337 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a622380-55da-4d69-a65a-5db6c07eb3d7" containerName="init" Mar 18 18:18:05.483742 master-0 kubenswrapper[30278]: I0318 18:18:05.482605 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a622380-55da-4d69-a65a-5db6c07eb3d7" containerName="dnsmasq-dns" Mar 18 18:18:05.484115 master-0 kubenswrapper[30278]: I0318 18:18:05.483808 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" Mar 18 18:18:05.492568 master-0 kubenswrapper[30278]: I0318 18:18:05.491752 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 18 18:18:05.513499 master-0 kubenswrapper[30278]: I0318 18:18:05.513442 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764dfbc96f-87qgh"] Mar 18 18:18:05.534408 master-0 kubenswrapper[30278]: I0318 18:18:05.530446 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-xz9c7"] Mar 18 18:18:05.534408 master-0 kubenswrapper[30278]: I0318 18:18:05.532076 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.534408 master-0 kubenswrapper[30278]: I0318 18:18:05.534226 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 18 18:18:05.545267 master-0 kubenswrapper[30278]: I0318 18:18:05.541926 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8c299186-30d6-4dd9-9490-5c843f940e6d-ovs-rundir\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.545267 master-0 kubenswrapper[30278]: I0318 18:18:05.542038 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-config\") pod \"dnsmasq-dns-764dfbc96f-87qgh\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") " pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" Mar 18 18:18:05.545267 master-0 kubenswrapper[30278]: I0318 18:18:05.542066 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c299186-30d6-4dd9-9490-5c843f940e6d-combined-ca-bundle\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.545267 master-0 kubenswrapper[30278]: I0318 18:18:05.542094 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-ovsdbserver-nb\") pod \"dnsmasq-dns-764dfbc96f-87qgh\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") " pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" Mar 18 18:18:05.545267 master-0 kubenswrapper[30278]: I0318 
18:18:05.542172 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c299186-30d6-4dd9-9490-5c843f940e6d-config\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.545267 master-0 kubenswrapper[30278]: I0318 18:18:05.542205 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k84g\" (UniqueName: \"kubernetes.io/projected/46f5672d-7f9b-4257-a9b7-f0e309d943b9-kube-api-access-9k84g\") pod \"dnsmasq-dns-764dfbc96f-87qgh\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") " pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" Mar 18 18:18:05.545267 master-0 kubenswrapper[30278]: I0318 18:18:05.542235 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-dns-svc\") pod \"dnsmasq-dns-764dfbc96f-87qgh\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") " pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" Mar 18 18:18:05.545267 master-0 kubenswrapper[30278]: I0318 18:18:05.542358 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c299186-30d6-4dd9-9490-5c843f940e6d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.545267 master-0 kubenswrapper[30278]: I0318 18:18:05.542620 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crkxs\" (UniqueName: \"kubernetes.io/projected/8c299186-30d6-4dd9-9490-5c843f940e6d-kube-api-access-crkxs\") pod \"ovn-controller-metrics-xz9c7\" (UID: 
\"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.545267 master-0 kubenswrapper[30278]: I0318 18:18:05.542967 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8c299186-30d6-4dd9-9490-5c843f940e6d-ovn-rundir\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.565945 master-0 kubenswrapper[30278]: I0318 18:18:05.564311 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xz9c7"] Mar 18 18:18:05.646485 master-0 kubenswrapper[30278]: I0318 18:18:05.645667 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c299186-30d6-4dd9-9490-5c843f940e6d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.646743 master-0 kubenswrapper[30278]: I0318 18:18:05.646583 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crkxs\" (UniqueName: \"kubernetes.io/projected/8c299186-30d6-4dd9-9490-5c843f940e6d-kube-api-access-crkxs\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.646743 master-0 kubenswrapper[30278]: I0318 18:18:05.646671 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8c299186-30d6-4dd9-9490-5c843f940e6d-ovn-rundir\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.646811 master-0 kubenswrapper[30278]: I0318 18:18:05.646733 30278 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8c299186-30d6-4dd9-9490-5c843f940e6d-ovs-rundir\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.646852 master-0 kubenswrapper[30278]: I0318 18:18:05.646828 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-config\") pod \"dnsmasq-dns-764dfbc96f-87qgh\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") " pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" Mar 18 18:18:05.646890 master-0 kubenswrapper[30278]: I0318 18:18:05.646867 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c299186-30d6-4dd9-9490-5c843f940e6d-combined-ca-bundle\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.646932 master-0 kubenswrapper[30278]: I0318 18:18:05.646899 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-ovsdbserver-nb\") pod \"dnsmasq-dns-764dfbc96f-87qgh\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") " pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" Mar 18 18:18:05.647136 master-0 kubenswrapper[30278]: I0318 18:18:05.647114 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c299186-30d6-4dd9-9490-5c843f940e6d-config\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.647189 master-0 kubenswrapper[30278]: I0318 18:18:05.647169 30278 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k84g\" (UniqueName: \"kubernetes.io/projected/46f5672d-7f9b-4257-a9b7-f0e309d943b9-kube-api-access-9k84g\") pod \"dnsmasq-dns-764dfbc96f-87qgh\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") " pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" Mar 18 18:18:05.647245 master-0 kubenswrapper[30278]: I0318 18:18:05.647226 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-dns-svc\") pod \"dnsmasq-dns-764dfbc96f-87qgh\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") " pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" Mar 18 18:18:05.647338 master-0 kubenswrapper[30278]: I0318 18:18:05.647118 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8c299186-30d6-4dd9-9490-5c843f940e6d-ovs-rundir\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.647757 master-0 kubenswrapper[30278]: I0318 18:18:05.647740 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8c299186-30d6-4dd9-9490-5c843f940e6d-ovn-rundir\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.648052 master-0 kubenswrapper[30278]: I0318 18:18:05.648011 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-config\") pod \"dnsmasq-dns-764dfbc96f-87qgh\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") " pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" Mar 18 18:18:05.648566 master-0 kubenswrapper[30278]: I0318 18:18:05.648548 30278 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c299186-30d6-4dd9-9490-5c843f940e6d-config\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.648937 master-0 kubenswrapper[30278]: I0318 18:18:05.648906 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-ovsdbserver-nb\") pod \"dnsmasq-dns-764dfbc96f-87qgh\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") " pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" Mar 18 18:18:05.649582 master-0 kubenswrapper[30278]: I0318 18:18:05.649381 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-dns-svc\") pod \"dnsmasq-dns-764dfbc96f-87qgh\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") " pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" Mar 18 18:18:05.652207 master-0 kubenswrapper[30278]: I0318 18:18:05.650840 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c299186-30d6-4dd9-9490-5c843f940e6d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.655002 master-0 kubenswrapper[30278]: I0318 18:18:05.654981 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c299186-30d6-4dd9-9490-5c843f940e6d-combined-ca-bundle\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.664788 master-0 kubenswrapper[30278]: I0318 18:18:05.664768 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-9k84g\" (UniqueName: \"kubernetes.io/projected/46f5672d-7f9b-4257-a9b7-f0e309d943b9-kube-api-access-9k84g\") pod \"dnsmasq-dns-764dfbc96f-87qgh\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") " pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" Mar 18 18:18:05.666126 master-0 kubenswrapper[30278]: I0318 18:18:05.666086 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crkxs\" (UniqueName: \"kubernetes.io/projected/8c299186-30d6-4dd9-9490-5c843f940e6d-kube-api-access-crkxs\") pod \"ovn-controller-metrics-xz9c7\" (UID: \"8c299186-30d6-4dd9-9490-5c843f940e6d\") " pod="openstack/ovn-controller-metrics-xz9c7" Mar 18 18:18:05.803871 master-0 kubenswrapper[30278]: I0318 18:18:05.801755 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764dfbc96f-87qgh"] Mar 18 18:18:05.803871 master-0 kubenswrapper[30278]: I0318 18:18:05.802971 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" Mar 18 18:18:05.837071 master-0 kubenswrapper[30278]: I0318 18:18:05.835495 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cd749f44f-tjfmr"] Mar 18 18:18:05.837552 master-0 kubenswrapper[30278]: I0318 18:18:05.837519 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.845917 master-0 kubenswrapper[30278]: I0318 18:18:05.845442 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Mar 18 18:18:05.852261 master-0 kubenswrapper[30278]: I0318 18:18:05.851372 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-ovsdbserver-sb\") pod \"dnsmasq-dns-5cd749f44f-tjfmr\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") " pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.852261 master-0 kubenswrapper[30278]: I0318 18:18:05.851430 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-dns-svc\") pod \"dnsmasq-dns-5cd749f44f-tjfmr\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") " pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.852261 master-0 kubenswrapper[30278]: I0318 18:18:05.851459 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sckpl\" (UniqueName: \"kubernetes.io/projected/111f82f6-d141-4c76-be8f-026f90f1858b-kube-api-access-sckpl\") pod \"dnsmasq-dns-5cd749f44f-tjfmr\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") " pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.852261 master-0 kubenswrapper[30278]: I0318 18:18:05.851523 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-ovsdbserver-nb\") pod \"dnsmasq-dns-5cd749f44f-tjfmr\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") " pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.852261 master-0 kubenswrapper[30278]: I0318 18:18:05.851570 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-config\") pod \"dnsmasq-dns-5cd749f44f-tjfmr\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") " pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.885658 master-0 kubenswrapper[30278]: I0318 18:18:05.884754 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-xz9c7"
Mar 18 18:18:05.892177 master-0 kubenswrapper[30278]: I0318 18:18:05.890735 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd749f44f-tjfmr"]
Mar 18 18:18:05.953362 master-0 kubenswrapper[30278]: I0318 18:18:05.953306 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-ovsdbserver-nb\") pod \"dnsmasq-dns-5cd749f44f-tjfmr\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") " pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.953709 master-0 kubenswrapper[30278]: I0318 18:18:05.953608 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-config\") pod \"dnsmasq-dns-5cd749f44f-tjfmr\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") " pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.953760 master-0 kubenswrapper[30278]: I0318 18:18:05.953736 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-ovsdbserver-sb\") pod \"dnsmasq-dns-5cd749f44f-tjfmr\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") " pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.953857 master-0 kubenswrapper[30278]: I0318 18:18:05.953768 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-dns-svc\") pod \"dnsmasq-dns-5cd749f44f-tjfmr\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") " pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.953857 master-0 kubenswrapper[30278]: I0318 18:18:05.953801 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sckpl\" (UniqueName: \"kubernetes.io/projected/111f82f6-d141-4c76-be8f-026f90f1858b-kube-api-access-sckpl\") pod \"dnsmasq-dns-5cd749f44f-tjfmr\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") " pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.957253 master-0 kubenswrapper[30278]: I0318 18:18:05.955696 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-config\") pod \"dnsmasq-dns-5cd749f44f-tjfmr\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") " pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.957253 master-0 kubenswrapper[30278]: I0318 18:18:05.956625 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-ovsdbserver-sb\") pod \"dnsmasq-dns-5cd749f44f-tjfmr\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") " pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.961006 master-0 kubenswrapper[30278]: I0318 18:18:05.960958 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-dns-svc\") pod \"dnsmasq-dns-5cd749f44f-tjfmr\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") " pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.963229 master-0 kubenswrapper[30278]: I0318 18:18:05.963141 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-ovsdbserver-nb\") pod \"dnsmasq-dns-5cd749f44f-tjfmr\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") " pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.978180 master-0 kubenswrapper[30278]: I0318 18:18:05.978118 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sckpl\" (UniqueName: \"kubernetes.io/projected/111f82f6-d141-4c76-be8f-026f90f1858b-kube-api-access-sckpl\") pod \"dnsmasq-dns-5cd749f44f-tjfmr\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") " pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:05.980210 master-0 kubenswrapper[30278]: I0318 18:18:05.980169 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:06.113329 master-0 kubenswrapper[30278]: I0318 18:18:06.112758 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Mar 18 18:18:06.188228 master-0 kubenswrapper[30278]: I0318 18:18:06.188091 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Mar 18 18:18:06.361632 master-0 kubenswrapper[30278]: I0318 18:18:06.361548 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764dfbc96f-87qgh"]
Mar 18 18:18:06.555336 master-0 kubenswrapper[30278]: I0318 18:18:06.554458 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd749f44f-tjfmr"]
Mar 18 18:18:06.568754 master-0 kubenswrapper[30278]: I0318 18:18:06.568671 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-xz9c7"]
Mar 18 18:18:07.112846 master-0 kubenswrapper[30278]: I0318 18:18:07.112770 30278 generic.go:334] "Generic (PLEG): container finished" podID="46f5672d-7f9b-4257-a9b7-f0e309d943b9" containerID="bef25e3880efcbf1d476e9e7fc9176d923321582d907e77e797319a167abd092" exitCode=0
Mar 18 18:18:07.113852 master-0 kubenswrapper[30278]: I0318 18:18:07.112859 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" event={"ID":"46f5672d-7f9b-4257-a9b7-f0e309d943b9","Type":"ContainerDied","Data":"bef25e3880efcbf1d476e9e7fc9176d923321582d907e77e797319a167abd092"}
Mar 18 18:18:07.113852 master-0 kubenswrapper[30278]: I0318 18:18:07.112891 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" event={"ID":"46f5672d-7f9b-4257-a9b7-f0e309d943b9","Type":"ContainerStarted","Data":"520bd37cece497b63b9fdb19ac97f5847875dc8f83facc21f1cd73fe6d59ded2"}
Mar 18 18:18:07.122415 master-0 kubenswrapper[30278]: I0318 18:18:07.122220 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xz9c7" event={"ID":"8c299186-30d6-4dd9-9490-5c843f940e6d","Type":"ContainerStarted","Data":"5aca74515b5798ef7b42e8d1f18c62dd32c3d41165529a9f275e658b3080a273"}
Mar 18 18:18:07.122415 master-0 kubenswrapper[30278]: I0318 18:18:07.122324 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-xz9c7" event={"ID":"8c299186-30d6-4dd9-9490-5c843f940e6d","Type":"ContainerStarted","Data":"433efbf15648c630932e7f9abcd415b31bc9532ad77078a0036cfe28e7844c19"}
Mar 18 18:18:07.131165 master-0 kubenswrapper[30278]: I0318 18:18:07.131087 30278 generic.go:334] "Generic (PLEG): container finished" podID="111f82f6-d141-4c76-be8f-026f90f1858b" containerID="e11284d89c22726b6ec9f610e4d34419a7e1f0c009fbdf41beafd946f45236cd" exitCode=0
Mar 18 18:18:07.132358 master-0 kubenswrapper[30278]: I0318 18:18:07.131915 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr" event={"ID":"111f82f6-d141-4c76-be8f-026f90f1858b","Type":"ContainerDied","Data":"e11284d89c22726b6ec9f610e4d34419a7e1f0c009fbdf41beafd946f45236cd"}
Mar 18 18:18:07.132358 master-0 kubenswrapper[30278]: I0318 18:18:07.132027 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr" event={"ID":"111f82f6-d141-4c76-be8f-026f90f1858b","Type":"ContainerStarted","Data":"c9c58b4b00ede99b52c5e0e37a2bb083521996bdb6e7dab4349c5e7fa69eab94"}
Mar 18 18:18:07.271042 master-0 kubenswrapper[30278]: I0318 18:18:07.270973 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-xz9c7" podStartSLOduration=2.270945277 podStartE2EDuration="2.270945277s" podCreationTimestamp="2026-03-18 18:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:18:07.238585846 +0000 UTC m=+1056.405770441" watchObservedRunningTime="2026-03-18 18:18:07.270945277 +0000 UTC m=+1056.438129872"
Mar 18 18:18:07.591190 master-0 kubenswrapper[30278]: I0318 18:18:07.591149 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Mar 18 18:18:07.941912 master-0 kubenswrapper[30278]: I0318 18:18:07.941479 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764dfbc96f-87qgh"
Mar 18 18:18:07.978626 master-0 kubenswrapper[30278]: I0318 18:18:07.975536 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k84g\" (UniqueName: \"kubernetes.io/projected/46f5672d-7f9b-4257-a9b7-f0e309d943b9-kube-api-access-9k84g\") pod \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") "
Mar 18 18:18:07.978626 master-0 kubenswrapper[30278]: I0318 18:18:07.975644 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-ovsdbserver-nb\") pod \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") "
Mar 18 18:18:07.978626 master-0 kubenswrapper[30278]: I0318 18:18:07.975846 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-dns-svc\") pod \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") "
Mar 18 18:18:07.978626 master-0 kubenswrapper[30278]: I0318 18:18:07.975915 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-config\") pod \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\" (UID: \"46f5672d-7f9b-4257-a9b7-f0e309d943b9\") "
Mar 18 18:18:07.984113 master-0 kubenswrapper[30278]: I0318 18:18:07.984063 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46f5672d-7f9b-4257-a9b7-f0e309d943b9-kube-api-access-9k84g" (OuterVolumeSpecName: "kube-api-access-9k84g") pod "46f5672d-7f9b-4257-a9b7-f0e309d943b9" (UID: "46f5672d-7f9b-4257-a9b7-f0e309d943b9"). InnerVolumeSpecName "kube-api-access-9k84g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:18:08.012409 master-0 kubenswrapper[30278]: I0318 18:18:08.012130 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-config" (OuterVolumeSpecName: "config") pod "46f5672d-7f9b-4257-a9b7-f0e309d943b9" (UID: "46f5672d-7f9b-4257-a9b7-f0e309d943b9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:18:08.024447 master-0 kubenswrapper[30278]: I0318 18:18:08.023564 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "46f5672d-7f9b-4257-a9b7-f0e309d943b9" (UID: "46f5672d-7f9b-4257-a9b7-f0e309d943b9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:18:08.025889 master-0 kubenswrapper[30278]: I0318 18:18:08.025845 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Mar 18 18:18:08.026585 master-0 kubenswrapper[30278]: E0318 18:18:08.026546 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f5672d-7f9b-4257-a9b7-f0e309d943b9" containerName="init"
Mar 18 18:18:08.026585 master-0 kubenswrapper[30278]: I0318 18:18:08.026567 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f5672d-7f9b-4257-a9b7-f0e309d943b9" containerName="init"
Mar 18 18:18:08.026840 master-0 kubenswrapper[30278]: I0318 18:18:08.026817 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f5672d-7f9b-4257-a9b7-f0e309d943b9" containerName="init"
Mar 18 18:18:08.028152 master-0 kubenswrapper[30278]: I0318 18:18:08.028131 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Mar 18 18:18:08.032194 master-0 kubenswrapper[30278]: I0318 18:18:08.031220 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Mar 18 18:18:08.032194 master-0 kubenswrapper[30278]: I0318 18:18:08.031338 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Mar 18 18:18:08.032194 master-0 kubenswrapper[30278]: I0318 18:18:08.031403 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Mar 18 18:18:08.052321 master-0 kubenswrapper[30278]: I0318 18:18:08.052205 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Mar 18 18:18:08.084939 master-0 kubenswrapper[30278]: I0318 18:18:08.084857 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46f5672d-7f9b-4257-a9b7-f0e309d943b9" (UID: "46f5672d-7f9b-4257-a9b7-f0e309d943b9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:18:08.087863 master-0 kubenswrapper[30278]: I0318 18:18:08.085374 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptgrd\" (UniqueName: \"kubernetes.io/projected/fd9e1dcd-e0d3-401a-b538-90a263db6e88-kube-api-access-ptgrd\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.087863 master-0 kubenswrapper[30278]: I0318 18:18:08.085865 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9e1dcd-e0d3-401a-b538-90a263db6e88-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.087863 master-0 kubenswrapper[30278]: I0318 18:18:08.086038 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd9e1dcd-e0d3-401a-b538-90a263db6e88-scripts\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.087863 master-0 kubenswrapper[30278]: I0318 18:18:08.086105 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9e1dcd-e0d3-401a-b538-90a263db6e88-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.087863 master-0 kubenswrapper[30278]: I0318 18:18:08.086228 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9e1dcd-e0d3-401a-b538-90a263db6e88-config\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.087863 master-0 kubenswrapper[30278]: I0318 18:18:08.086475 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9e1dcd-e0d3-401a-b538-90a263db6e88-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.087863 master-0 kubenswrapper[30278]: I0318 18:18:08.086531 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fd9e1dcd-e0d3-401a-b538-90a263db6e88-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.087863 master-0 kubenswrapper[30278]: I0318 18:18:08.086728 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k84g\" (UniqueName: \"kubernetes.io/projected/46f5672d-7f9b-4257-a9b7-f0e309d943b9-kube-api-access-9k84g\") on node \"master-0\" DevicePath \"\""
Mar 18 18:18:08.087863 master-0 kubenswrapper[30278]: I0318 18:18:08.086745 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 18 18:18:08.087863 master-0 kubenswrapper[30278]: I0318 18:18:08.086766 30278 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 18 18:18:08.087863 master-0 kubenswrapper[30278]: I0318 18:18:08.086778 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46f5672d-7f9b-4257-a9b7-f0e309d943b9-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:18:08.166323 master-0 kubenswrapper[30278]: I0318 18:18:08.166229 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764dfbc96f-87qgh" event={"ID":"46f5672d-7f9b-4257-a9b7-f0e309d943b9","Type":"ContainerDied","Data":"520bd37cece497b63b9fdb19ac97f5847875dc8f83facc21f1cd73fe6d59ded2"}
Mar 18 18:18:08.166998 master-0 kubenswrapper[30278]: I0318 18:18:08.166378 30278 scope.go:117] "RemoveContainer" containerID="bef25e3880efcbf1d476e9e7fc9176d923321582d907e77e797319a167abd092"
Mar 18 18:18:08.166998 master-0 kubenswrapper[30278]: I0318 18:18:08.166601 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764dfbc96f-87qgh"
Mar 18 18:18:08.176161 master-0 kubenswrapper[30278]: I0318 18:18:08.176066 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr" event={"ID":"111f82f6-d141-4c76-be8f-026f90f1858b","Type":"ContainerStarted","Data":"2d4e7c538f3bf356ef1ea6888f439b1ec53892ef7b374ae1e01a22b433dc92cd"}
Mar 18 18:18:08.177802 master-0 kubenswrapper[30278]: I0318 18:18:08.177732 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:18:08.195729 master-0 kubenswrapper[30278]: I0318 18:18:08.195402 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9e1dcd-e0d3-401a-b538-90a263db6e88-config\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.195729 master-0 kubenswrapper[30278]: I0318 18:18:08.195682 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9e1dcd-e0d3-401a-b538-90a263db6e88-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.195729 master-0 kubenswrapper[30278]: I0318 18:18:08.195725 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fd9e1dcd-e0d3-401a-b538-90a263db6e88-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.195927 master-0 kubenswrapper[30278]: I0318 18:18:08.195821 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptgrd\" (UniqueName: \"kubernetes.io/projected/fd9e1dcd-e0d3-401a-b538-90a263db6e88-kube-api-access-ptgrd\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.195970 master-0 kubenswrapper[30278]: I0318 18:18:08.195946 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9e1dcd-e0d3-401a-b538-90a263db6e88-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.200407 master-0 kubenswrapper[30278]: I0318 18:18:08.196598 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd9e1dcd-e0d3-401a-b538-90a263db6e88-scripts\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.200407 master-0 kubenswrapper[30278]: I0318 18:18:08.196731 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9e1dcd-e0d3-401a-b538-90a263db6e88-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.200407 master-0 kubenswrapper[30278]: I0318 18:18:08.197129 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fd9e1dcd-e0d3-401a-b538-90a263db6e88-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.200407 master-0 kubenswrapper[30278]: I0318 18:18:08.197866 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd9e1dcd-e0d3-401a-b538-90a263db6e88-config\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.201686 master-0 kubenswrapper[30278]: I0318 18:18:08.201637 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fd9e1dcd-e0d3-401a-b538-90a263db6e88-scripts\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.212303 master-0 kubenswrapper[30278]: I0318 18:18:08.209015 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9e1dcd-e0d3-401a-b538-90a263db6e88-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.226924 master-0 kubenswrapper[30278]: I0318 18:18:08.226819 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr" podStartSLOduration=3.226783633 podStartE2EDuration="3.226783633s" podCreationTimestamp="2026-03-18 18:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:18:08.214745358 +0000 UTC m=+1057.381929963" watchObservedRunningTime="2026-03-18 18:18:08.226783633 +0000 UTC m=+1057.393968228"
Mar 18 18:18:08.234252 master-0 kubenswrapper[30278]: I0318 18:18:08.234190 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9e1dcd-e0d3-401a-b538-90a263db6e88-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.245272 master-0 kubenswrapper[30278]: I0318 18:18:08.245224 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptgrd\" (UniqueName: \"kubernetes.io/projected/fd9e1dcd-e0d3-401a-b538-90a263db6e88-kube-api-access-ptgrd\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.320466 master-0 kubenswrapper[30278]: I0318 18:18:08.320337 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764dfbc96f-87qgh"]
Mar 18 18:18:08.322911 master-0 kubenswrapper[30278]: I0318 18:18:08.322848 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9e1dcd-e0d3-401a-b538-90a263db6e88-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fd9e1dcd-e0d3-401a-b538-90a263db6e88\") " pod="openstack/ovn-northd-0"
Mar 18 18:18:08.329721 master-0 kubenswrapper[30278]: I0318 18:18:08.329635 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764dfbc96f-87qgh"]
Mar 18 18:18:08.401494 master-0 kubenswrapper[30278]: I0318 18:18:08.401415 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Mar 18 18:18:08.889826 master-0 kubenswrapper[30278]: I0318 18:18:08.889763 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Mar 18 18:18:08.904477 master-0 kubenswrapper[30278]: W0318 18:18:08.896439 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd9e1dcd_e0d3_401a_b538_90a263db6e88.slice/crio-5cdf2e929751abbba904348d1d9005a7aa2f6c270dae18e4574b0ce3b0139241 WatchSource:0}: Error finding container 5cdf2e929751abbba904348d1d9005a7aa2f6c270dae18e4574b0ce3b0139241: Status 404 returned error can't find the container with id 5cdf2e929751abbba904348d1d9005a7aa2f6c270dae18e4574b0ce3b0139241
Mar 18 18:18:09.070326 master-0 kubenswrapper[30278]: I0318 18:18:09.069972 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f5672d-7f9b-4257-a9b7-f0e309d943b9" path="/var/lib/kubelet/pods/46f5672d-7f9b-4257-a9b7-f0e309d943b9/volumes"
Mar 18 18:18:09.192161 master-0 kubenswrapper[30278]: I0318 18:18:09.191961 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fd9e1dcd-e0d3-401a-b538-90a263db6e88","Type":"ContainerStarted","Data":"5cdf2e929751abbba904348d1d9005a7aa2f6c270dae18e4574b0ce3b0139241"}
Mar 18 18:18:09.244458 master-0 kubenswrapper[30278]: E0318 18:18:09.244373 30278 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 18 18:18:09.244458 master-0 kubenswrapper[30278]: E0318 18:18:09.244444 30278 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 18 18:18:09.244823 master-0 kubenswrapper[30278]: E0318 18:18:09.244548 30278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift podName:ff27830b-378b-4338-ac41-041a9d78ed62 nodeName:}" failed. No retries permitted until 2026-03-18 18:18:25.244515644 +0000 UTC m=+1074.411700279 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift") pod "swift-storage-0" (UID: "ff27830b-378b-4338-ac41-041a9d78ed62") : configmap "swift-ring-files" not found
Mar 18 18:18:09.244823 master-0 kubenswrapper[30278]: I0318 18:18:09.243951 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0"
Mar 18 18:18:10.638691 master-0 kubenswrapper[30278]: I0318 18:18:10.638626 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Mar 18 18:18:10.763800 master-0 kubenswrapper[30278]: I0318 18:18:10.763732 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Mar 18 18:18:11.222889 master-0 kubenswrapper[30278]: I0318 18:18:11.222752 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fd9e1dcd-e0d3-401a-b538-90a263db6e88","Type":"ContainerStarted","Data":"558e321d37b89a84c9a55faa8500e8fe85d93ba9e37f90bc49f3e0b5496a1317"}
Mar 18 18:18:11.224597 master-0 kubenswrapper[30278]: I0318 18:18:11.224576 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Mar 18 18:18:11.224787 master-0 kubenswrapper[30278]: I0318 18:18:11.224766 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fd9e1dcd-e0d3-401a-b538-90a263db6e88","Type":"ContainerStarted","Data":"bbd2fc202aa0fa1138734b6d2ddf152deee0391efc610de58aa225ea834d0479"}
Mar 18 18:18:11.250337 master-0 kubenswrapper[30278]: I0318 18:18:11.250213 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.813502749 podStartE2EDuration="4.250182296s" podCreationTimestamp="2026-03-18 18:18:07 +0000 UTC" firstStartedPulling="2026-03-18 18:18:08.901238938 +0000 UTC m=+1058.068423533" lastFinishedPulling="2026-03-18 18:18:10.337918485 +0000 UTC m=+1059.505103080" observedRunningTime="2026-03-18 18:18:11.241548054 +0000 UTC m=+1060.408732649" watchObservedRunningTime="2026-03-18 18:18:11.250182296 +0000 UTC m=+1060.417366891"
Mar 18 18:18:12.233611 master-0 kubenswrapper[30278]: I0318 18:18:12.233518 30278 generic.go:334] "Generic (PLEG): container finished" podID="b076dc06-c082-4a5e-a049-9f98858a80ff" containerID="7a5967b24efde10a1141ef7a4df7be6aa95755506c535b1fed44c00b193506f2" exitCode=0
Mar 18 18:18:12.234411 master-0 kubenswrapper[30278]: I0318 18:18:12.233950 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qsrjq" event={"ID":"b076dc06-c082-4a5e-a049-9f98858a80ff","Type":"ContainerDied","Data":"7a5967b24efde10a1141ef7a4df7be6aa95755506c535b1fed44c00b193506f2"}
Mar 18 18:18:13.383895 master-0 kubenswrapper[30278]: I0318 18:18:13.383839 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Mar 18 18:18:13.394898 master-0 kubenswrapper[30278]: I0318 18:18:13.386564 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Mar 18 18:18:13.491925 master-0 kubenswrapper[30278]: I0318 18:18:13.491766 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Mar 18 18:18:13.812347 master-0 kubenswrapper[30278]: I0318 18:18:13.811564 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qsrjq"
Mar 18 18:18:13.985418 master-0 kubenswrapper[30278]: I0318 18:18:13.985126 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b076dc06-c082-4a5e-a049-9f98858a80ff-scripts\") pod \"b076dc06-c082-4a5e-a049-9f98858a80ff\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") "
Mar 18 18:18:13.985418 master-0 kubenswrapper[30278]: I0318 18:18:13.985395 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-combined-ca-bundle\") pod \"b076dc06-c082-4a5e-a049-9f98858a80ff\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") "
Mar 18 18:18:13.985786 master-0 kubenswrapper[30278]: I0318 18:18:13.985578 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-dispersionconf\") pod \"b076dc06-c082-4a5e-a049-9f98858a80ff\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") "
Mar 18 18:18:13.985786 master-0 kubenswrapper[30278]: I0318 18:18:13.985637 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b076dc06-c082-4a5e-a049-9f98858a80ff-ring-data-devices\") pod \"b076dc06-c082-4a5e-a049-9f98858a80ff\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") "
Mar 18 18:18:13.985786 master-0 kubenswrapper[30278]: I0318 18:18:13.985668 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-swiftconf\") pod \"b076dc06-c082-4a5e-a049-9f98858a80ff\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") "
Mar 18 18:18:13.985950 master-0 kubenswrapper[30278]: I0318 18:18:13.985805 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fgrg\" (UniqueName: \"kubernetes.io/projected/b076dc06-c082-4a5e-a049-9f98858a80ff-kube-api-access-5fgrg\") pod \"b076dc06-c082-4a5e-a049-9f98858a80ff\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") "
Mar 18 18:18:13.985950 master-0 kubenswrapper[30278]: I0318 18:18:13.985925 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b076dc06-c082-4a5e-a049-9f98858a80ff-etc-swift\") pod \"b076dc06-c082-4a5e-a049-9f98858a80ff\" (UID: \"b076dc06-c082-4a5e-a049-9f98858a80ff\") "
Mar 18 18:18:13.987237 master-0 kubenswrapper[30278]: I0318 18:18:13.987104 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b076dc06-c082-4a5e-a049-9f98858a80ff-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b076dc06-c082-4a5e-a049-9f98858a80ff" (UID: "b076dc06-c082-4a5e-a049-9f98858a80ff"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:18:13.992178 master-0 kubenswrapper[30278]: I0318 18:18:13.990113 30278 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b076dc06-c082-4a5e-a049-9f98858a80ff-ring-data-devices\") on node \"master-0\" DevicePath \"\""
Mar 18 18:18:13.992178 master-0 kubenswrapper[30278]: I0318 18:18:13.990868 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b076dc06-c082-4a5e-a049-9f98858a80ff-kube-api-access-5fgrg" (OuterVolumeSpecName: "kube-api-access-5fgrg") pod "b076dc06-c082-4a5e-a049-9f98858a80ff" (UID: "b076dc06-c082-4a5e-a049-9f98858a80ff"). InnerVolumeSpecName "kube-api-access-5fgrg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:18:13.992178 master-0 kubenswrapper[30278]: I0318 18:18:13.991003 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b076dc06-c082-4a5e-a049-9f98858a80ff-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b076dc06-c082-4a5e-a049-9f98858a80ff" (UID: "b076dc06-c082-4a5e-a049-9f98858a80ff"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 18:18:13.994236 master-0 kubenswrapper[30278]: I0318 18:18:13.994205 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b076dc06-c082-4a5e-a049-9f98858a80ff" (UID: "b076dc06-c082-4a5e-a049-9f98858a80ff"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:18:14.020841 master-0 kubenswrapper[30278]: I0318 18:18:14.020765 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b076dc06-c082-4a5e-a049-9f98858a80ff" (UID: "b076dc06-c082-4a5e-a049-9f98858a80ff"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:18:14.020841 master-0 kubenswrapper[30278]: I0318 18:18:14.020803 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b076dc06-c082-4a5e-a049-9f98858a80ff" (UID: "b076dc06-c082-4a5e-a049-9f98858a80ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:18:14.030808 master-0 kubenswrapper[30278]: I0318 18:18:14.030727 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b076dc06-c082-4a5e-a049-9f98858a80ff-scripts" (OuterVolumeSpecName: "scripts") pod "b076dc06-c082-4a5e-a049-9f98858a80ff" (UID: "b076dc06-c082-4a5e-a049-9f98858a80ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:18:14.092829 master-0 kubenswrapper[30278]: I0318 18:18:14.092729 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fgrg\" (UniqueName: \"kubernetes.io/projected/b076dc06-c082-4a5e-a049-9f98858a80ff-kube-api-access-5fgrg\") on node \"master-0\" DevicePath \"\""
Mar 18 18:18:14.092829 master-0 kubenswrapper[30278]: I0318 18:18:14.092786 30278 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b076dc06-c082-4a5e-a049-9f98858a80ff-etc-swift\") on node \"master-0\" DevicePath \"\""
Mar 18 18:18:14.092829 master-0 kubenswrapper[30278]: I0318 18:18:14.092797 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b076dc06-c082-4a5e-a049-9f98858a80ff-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 18:18:14.092829 master-0 kubenswrapper[30278]: I0318 18:18:14.092807 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:18:14.092829 master-0 kubenswrapper[30278]: I0318 18:18:14.092816 30278 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-dispersionconf\") on node \"master-0\" DevicePath \"\""
Mar 18 18:18:14.092829 master-0 kubenswrapper[30278]: I0318 18:18:14.092824 30278
reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b076dc06-c082-4a5e-a049-9f98858a80ff-swiftconf\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:14.257224 master-0 kubenswrapper[30278]: I0318 18:18:14.257053 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qsrjq" event={"ID":"b076dc06-c082-4a5e-a049-9f98858a80ff","Type":"ContainerDied","Data":"08cdc1705ae6c3569dadbb795116e0dbab515d7ce663f82e9fd6dcfb6a1d9a5f"} Mar 18 18:18:14.257224 master-0 kubenswrapper[30278]: I0318 18:18:14.257138 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08cdc1705ae6c3569dadbb795116e0dbab515d7ce663f82e9fd6dcfb6a1d9a5f" Mar 18 18:18:14.257224 master-0 kubenswrapper[30278]: I0318 18:18:14.257146 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qsrjq" Mar 18 18:18:14.403505 master-0 kubenswrapper[30278]: I0318 18:18:14.403434 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 18 18:18:15.595971 master-0 kubenswrapper[30278]: I0318 18:18:15.595908 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hh2hb"] Mar 18 18:18:15.597541 master-0 kubenswrapper[30278]: E0318 18:18:15.597515 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b076dc06-c082-4a5e-a049-9f98858a80ff" containerName="swift-ring-rebalance" Mar 18 18:18:15.597689 master-0 kubenswrapper[30278]: I0318 18:18:15.597670 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="b076dc06-c082-4a5e-a049-9f98858a80ff" containerName="swift-ring-rebalance" Mar 18 18:18:15.598128 master-0 kubenswrapper[30278]: I0318 18:18:15.598104 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="b076dc06-c082-4a5e-a049-9f98858a80ff" containerName="swift-ring-rebalance" Mar 18 18:18:15.599295 
master-0 kubenswrapper[30278]: I0318 18:18:15.599245 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hh2hb" Mar 18 18:18:15.603596 master-0 kubenswrapper[30278]: I0318 18:18:15.603570 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 18 18:18:15.607432 master-0 kubenswrapper[30278]: I0318 18:18:15.607343 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hh2hb"] Mar 18 18:18:15.746293 master-0 kubenswrapper[30278]: I0318 18:18:15.746208 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq5sm\" (UniqueName: \"kubernetes.io/projected/f8dc776d-445d-4a68-97fc-3bcfa2d5b332-kube-api-access-wq5sm\") pod \"root-account-create-update-hh2hb\" (UID: \"f8dc776d-445d-4a68-97fc-3bcfa2d5b332\") " pod="openstack/root-account-create-update-hh2hb" Mar 18 18:18:15.746293 master-0 kubenswrapper[30278]: I0318 18:18:15.746298 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8dc776d-445d-4a68-97fc-3bcfa2d5b332-operator-scripts\") pod \"root-account-create-update-hh2hb\" (UID: \"f8dc776d-445d-4a68-97fc-3bcfa2d5b332\") " pod="openstack/root-account-create-update-hh2hb" Mar 18 18:18:15.848746 master-0 kubenswrapper[30278]: I0318 18:18:15.848596 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq5sm\" (UniqueName: \"kubernetes.io/projected/f8dc776d-445d-4a68-97fc-3bcfa2d5b332-kube-api-access-wq5sm\") pod \"root-account-create-update-hh2hb\" (UID: \"f8dc776d-445d-4a68-97fc-3bcfa2d5b332\") " pod="openstack/root-account-create-update-hh2hb" Mar 18 18:18:15.848746 master-0 kubenswrapper[30278]: I0318 18:18:15.848662 30278 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8dc776d-445d-4a68-97fc-3bcfa2d5b332-operator-scripts\") pod \"root-account-create-update-hh2hb\" (UID: \"f8dc776d-445d-4a68-97fc-3bcfa2d5b332\") " pod="openstack/root-account-create-update-hh2hb" Mar 18 18:18:15.849529 master-0 kubenswrapper[30278]: I0318 18:18:15.849489 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8dc776d-445d-4a68-97fc-3bcfa2d5b332-operator-scripts\") pod \"root-account-create-update-hh2hb\" (UID: \"f8dc776d-445d-4a68-97fc-3bcfa2d5b332\") " pod="openstack/root-account-create-update-hh2hb" Mar 18 18:18:15.865481 master-0 kubenswrapper[30278]: I0318 18:18:15.865422 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq5sm\" (UniqueName: \"kubernetes.io/projected/f8dc776d-445d-4a68-97fc-3bcfa2d5b332-kube-api-access-wq5sm\") pod \"root-account-create-update-hh2hb\" (UID: \"f8dc776d-445d-4a68-97fc-3bcfa2d5b332\") " pod="openstack/root-account-create-update-hh2hb" Mar 18 18:18:15.964346 master-0 kubenswrapper[30278]: I0318 18:18:15.964192 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hh2hb" Mar 18 18:18:15.983640 master-0 kubenswrapper[30278]: I0318 18:18:15.983588 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr" Mar 18 18:18:16.111490 master-0 kubenswrapper[30278]: I0318 18:18:16.109984 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-998757459-j6h5k"] Mar 18 18:18:16.111490 master-0 kubenswrapper[30278]: I0318 18:18:16.110372 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-998757459-j6h5k" podUID="845ae1c5-4eca-424e-bca5-94dafe5d0407" containerName="dnsmasq-dns" containerID="cri-o://7a87c0d7e6ebddf357e964e81963905b6ca0a7fd0f3262fc74e874e07dc22b6f" gracePeriod=10 Mar 18 18:18:16.287629 master-0 kubenswrapper[30278]: I0318 18:18:16.287110 30278 generic.go:334] "Generic (PLEG): container finished" podID="a24f1688-7c02-4ac5-af8a-0a5c3847755a" containerID="44bbfca57ee3a298dae1a41b1a5d4d8cd7f6849b52b08d9e01ba160c423b8200" exitCode=0 Mar 18 18:18:16.287629 master-0 kubenswrapper[30278]: I0318 18:18:16.287206 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a24f1688-7c02-4ac5-af8a-0a5c3847755a","Type":"ContainerDied","Data":"44bbfca57ee3a298dae1a41b1a5d4d8cd7f6849b52b08d9e01ba160c423b8200"} Mar 18 18:18:16.310652 master-0 kubenswrapper[30278]: I0318 18:18:16.310586 30278 generic.go:334] "Generic (PLEG): container finished" podID="1ec57481-0836-4458-a2bc-e7ce64175f3a" containerID="19ceac3a5fb8ef40f4559bca65c3c8c753c4bfbdc4f9b3a812b1fdc6b37720ae" exitCode=0 Mar 18 18:18:16.310963 master-0 kubenswrapper[30278]: I0318 18:18:16.310678 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1ec57481-0836-4458-a2bc-e7ce64175f3a","Type":"ContainerDied","Data":"19ceac3a5fb8ef40f4559bca65c3c8c753c4bfbdc4f9b3a812b1fdc6b37720ae"} Mar 
18 18:18:16.316381 master-0 kubenswrapper[30278]: I0318 18:18:16.315441 30278 generic.go:334] "Generic (PLEG): container finished" podID="845ae1c5-4eca-424e-bca5-94dafe5d0407" containerID="7a87c0d7e6ebddf357e964e81963905b6ca0a7fd0f3262fc74e874e07dc22b6f" exitCode=0 Mar 18 18:18:16.316381 master-0 kubenswrapper[30278]: I0318 18:18:16.315535 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-998757459-j6h5k" event={"ID":"845ae1c5-4eca-424e-bca5-94dafe5d0407","Type":"ContainerDied","Data":"7a87c0d7e6ebddf357e964e81963905b6ca0a7fd0f3262fc74e874e07dc22b6f"} Mar 18 18:18:16.605357 master-0 kubenswrapper[30278]: I0318 18:18:16.605265 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hh2hb"] Mar 18 18:18:16.640655 master-0 kubenswrapper[30278]: W0318 18:18:16.640604 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8dc776d_445d_4a68_97fc_3bcfa2d5b332.slice/crio-b87d0c27d0ceeaa90666e27a037b509721848e52215f0b20ade849261e81546f WatchSource:0}: Error finding container b87d0c27d0ceeaa90666e27a037b509721848e52215f0b20ade849261e81546f: Status 404 returned error can't find the container with id b87d0c27d0ceeaa90666e27a037b509721848e52215f0b20ade849261e81546f Mar 18 18:18:16.942731 master-0 kubenswrapper[30278]: I0318 18:18:16.942658 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-998757459-j6h5k" Mar 18 18:18:17.016334 master-0 kubenswrapper[30278]: I0318 18:18:17.016220 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/845ae1c5-4eca-424e-bca5-94dafe5d0407-dns-svc\") pod \"845ae1c5-4eca-424e-bca5-94dafe5d0407\" (UID: \"845ae1c5-4eca-424e-bca5-94dafe5d0407\") " Mar 18 18:18:17.016785 master-0 kubenswrapper[30278]: I0318 18:18:17.016768 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/845ae1c5-4eca-424e-bca5-94dafe5d0407-config\") pod \"845ae1c5-4eca-424e-bca5-94dafe5d0407\" (UID: \"845ae1c5-4eca-424e-bca5-94dafe5d0407\") " Mar 18 18:18:17.016962 master-0 kubenswrapper[30278]: I0318 18:18:17.016948 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr69v\" (UniqueName: \"kubernetes.io/projected/845ae1c5-4eca-424e-bca5-94dafe5d0407-kube-api-access-lr69v\") pod \"845ae1c5-4eca-424e-bca5-94dafe5d0407\" (UID: \"845ae1c5-4eca-424e-bca5-94dafe5d0407\") " Mar 18 18:18:17.024125 master-0 kubenswrapper[30278]: I0318 18:18:17.024031 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/845ae1c5-4eca-424e-bca5-94dafe5d0407-kube-api-access-lr69v" (OuterVolumeSpecName: "kube-api-access-lr69v") pod "845ae1c5-4eca-424e-bca5-94dafe5d0407" (UID: "845ae1c5-4eca-424e-bca5-94dafe5d0407"). InnerVolumeSpecName "kube-api-access-lr69v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:17.094979 master-0 kubenswrapper[30278]: I0318 18:18:17.094894 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845ae1c5-4eca-424e-bca5-94dafe5d0407-config" (OuterVolumeSpecName: "config") pod "845ae1c5-4eca-424e-bca5-94dafe5d0407" (UID: "845ae1c5-4eca-424e-bca5-94dafe5d0407"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:17.103082 master-0 kubenswrapper[30278]: I0318 18:18:17.103020 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845ae1c5-4eca-424e-bca5-94dafe5d0407-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "845ae1c5-4eca-424e-bca5-94dafe5d0407" (UID: "845ae1c5-4eca-424e-bca5-94dafe5d0407"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:17.122545 master-0 kubenswrapper[30278]: I0318 18:18:17.120725 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr69v\" (UniqueName: \"kubernetes.io/projected/845ae1c5-4eca-424e-bca5-94dafe5d0407-kube-api-access-lr69v\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:17.122545 master-0 kubenswrapper[30278]: I0318 18:18:17.120761 30278 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/845ae1c5-4eca-424e-bca5-94dafe5d0407-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:17.122545 master-0 kubenswrapper[30278]: I0318 18:18:17.120774 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/845ae1c5-4eca-424e-bca5-94dafe5d0407-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:17.331545 master-0 kubenswrapper[30278]: I0318 18:18:17.331494 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1ec57481-0836-4458-a2bc-e7ce64175f3a","Type":"ContainerStarted","Data":"4e6b95484f0c56e12088cf4f29756a82e811098fb97836ad8dad49d2b4fa91c1"} Mar 18 18:18:17.332496 master-0 kubenswrapper[30278]: I0318 18:18:17.332422 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 18 18:18:17.335961 master-0 kubenswrapper[30278]: I0318 18:18:17.335935 30278 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-998757459-j6h5k" Mar 18 18:18:17.336085 master-0 kubenswrapper[30278]: I0318 18:18:17.335912 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-998757459-j6h5k" event={"ID":"845ae1c5-4eca-424e-bca5-94dafe5d0407","Type":"ContainerDied","Data":"82bec908e2769f1dd571f7a27666fc622ee2ca20287497164ceab2fce821df09"} Mar 18 18:18:17.336151 master-0 kubenswrapper[30278]: I0318 18:18:17.336127 30278 scope.go:117] "RemoveContainer" containerID="7a87c0d7e6ebddf357e964e81963905b6ca0a7fd0f3262fc74e874e07dc22b6f" Mar 18 18:18:17.343182 master-0 kubenswrapper[30278]: I0318 18:18:17.342867 30278 generic.go:334] "Generic (PLEG): container finished" podID="f8dc776d-445d-4a68-97fc-3bcfa2d5b332" containerID="e4724c9c85281f21d876aa8d90072b8d727cbfd7b25d5c1cb1f462ce5febb85c" exitCode=0 Mar 18 18:18:17.343182 master-0 kubenswrapper[30278]: I0318 18:18:17.342943 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hh2hb" event={"ID":"f8dc776d-445d-4a68-97fc-3bcfa2d5b332","Type":"ContainerDied","Data":"e4724c9c85281f21d876aa8d90072b8d727cbfd7b25d5c1cb1f462ce5febb85c"} Mar 18 18:18:17.343182 master-0 kubenswrapper[30278]: I0318 18:18:17.343040 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hh2hb" event={"ID":"f8dc776d-445d-4a68-97fc-3bcfa2d5b332","Type":"ContainerStarted","Data":"b87d0c27d0ceeaa90666e27a037b509721848e52215f0b20ade849261e81546f"} Mar 18 18:18:17.351979 master-0 kubenswrapper[30278]: I0318 18:18:17.351935 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a24f1688-7c02-4ac5-af8a-0a5c3847755a","Type":"ContainerStarted","Data":"fe4ff74ccc78cec5d336e1a51e0d09030ba4e0a3ab2f7f788fad698128a9138b"} Mar 18 18:18:17.352482 master-0 kubenswrapper[30278]: I0318 18:18:17.352316 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-server-0" Mar 18 18:18:17.366560 master-0 kubenswrapper[30278]: I0318 18:18:17.365770 30278 scope.go:117] "RemoveContainer" containerID="84ca01803a4660e271b507b465e03990950dc75b95fa15960a95f3ca378866c3" Mar 18 18:18:17.370099 master-0 kubenswrapper[30278]: I0318 18:18:17.370024 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=63.37000686 podStartE2EDuration="1m3.37000686s" podCreationTimestamp="2026-03-18 18:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:18:17.368603793 +0000 UTC m=+1066.535788388" watchObservedRunningTime="2026-03-18 18:18:17.37000686 +0000 UTC m=+1066.537191455" Mar 18 18:18:17.435187 master-0 kubenswrapper[30278]: I0318 18:18:17.434993 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=43.828381416 podStartE2EDuration="1m3.434969021s" podCreationTimestamp="2026-03-18 18:17:14 +0000 UTC" firstStartedPulling="2026-03-18 18:17:21.599612567 +0000 UTC m=+1010.766797162" lastFinishedPulling="2026-03-18 18:17:41.206200172 +0000 UTC m=+1030.373384767" observedRunningTime="2026-03-18 18:18:17.423770539 +0000 UTC m=+1066.590955154" watchObservedRunningTime="2026-03-18 18:18:17.434969021 +0000 UTC m=+1066.602153626" Mar 18 18:18:17.453401 master-0 kubenswrapper[30278]: I0318 18:18:17.453325 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-998757459-j6h5k"] Mar 18 18:18:17.468880 master-0 kubenswrapper[30278]: I0318 18:18:17.468818 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-998757459-j6h5k"] Mar 18 18:18:18.871170 master-0 kubenswrapper[30278]: I0318 18:18:18.871068 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hh2hb" Mar 18 18:18:18.969070 master-0 kubenswrapper[30278]: I0318 18:18:18.967801 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8dc776d-445d-4a68-97fc-3bcfa2d5b332-operator-scripts\") pod \"f8dc776d-445d-4a68-97fc-3bcfa2d5b332\" (UID: \"f8dc776d-445d-4a68-97fc-3bcfa2d5b332\") " Mar 18 18:18:18.970002 master-0 kubenswrapper[30278]: I0318 18:18:18.969247 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8dc776d-445d-4a68-97fc-3bcfa2d5b332-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8dc776d-445d-4a68-97fc-3bcfa2d5b332" (UID: "f8dc776d-445d-4a68-97fc-3bcfa2d5b332"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:18.970002 master-0 kubenswrapper[30278]: I0318 18:18:18.969454 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq5sm\" (UniqueName: \"kubernetes.io/projected/f8dc776d-445d-4a68-97fc-3bcfa2d5b332-kube-api-access-wq5sm\") pod \"f8dc776d-445d-4a68-97fc-3bcfa2d5b332\" (UID: \"f8dc776d-445d-4a68-97fc-3bcfa2d5b332\") " Mar 18 18:18:18.973254 master-0 kubenswrapper[30278]: I0318 18:18:18.973096 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8dc776d-445d-4a68-97fc-3bcfa2d5b332-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:18.977566 master-0 kubenswrapper[30278]: I0318 18:18:18.977494 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8dc776d-445d-4a68-97fc-3bcfa2d5b332-kube-api-access-wq5sm" (OuterVolumeSpecName: "kube-api-access-wq5sm") pod "f8dc776d-445d-4a68-97fc-3bcfa2d5b332" (UID: "f8dc776d-445d-4a68-97fc-3bcfa2d5b332"). 
InnerVolumeSpecName "kube-api-access-wq5sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:19.072911 master-0 kubenswrapper[30278]: I0318 18:18:19.072837 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="845ae1c5-4eca-424e-bca5-94dafe5d0407" path="/var/lib/kubelet/pods/845ae1c5-4eca-424e-bca5-94dafe5d0407/volumes" Mar 18 18:18:19.075787 master-0 kubenswrapper[30278]: I0318 18:18:19.075716 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq5sm\" (UniqueName: \"kubernetes.io/projected/f8dc776d-445d-4a68-97fc-3bcfa2d5b332-kube-api-access-wq5sm\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:19.378160 master-0 kubenswrapper[30278]: I0318 18:18:19.377929 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hh2hb" event={"ID":"f8dc776d-445d-4a68-97fc-3bcfa2d5b332","Type":"ContainerDied","Data":"b87d0c27d0ceeaa90666e27a037b509721848e52215f0b20ade849261e81546f"} Mar 18 18:18:19.378160 master-0 kubenswrapper[30278]: I0318 18:18:19.377993 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b87d0c27d0ceeaa90666e27a037b509721848e52215f0b20ade849261e81546f" Mar 18 18:18:19.378160 master-0 kubenswrapper[30278]: I0318 18:18:19.378059 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hh2hb" Mar 18 18:18:19.821396 master-0 kubenswrapper[30278]: I0318 18:18:19.821321 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-2ftrf"] Mar 18 18:18:19.821823 master-0 kubenswrapper[30278]: E0318 18:18:19.821801 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845ae1c5-4eca-424e-bca5-94dafe5d0407" containerName="init" Mar 18 18:18:19.821823 master-0 kubenswrapper[30278]: I0318 18:18:19.821821 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="845ae1c5-4eca-424e-bca5-94dafe5d0407" containerName="init" Mar 18 18:18:19.821945 master-0 kubenswrapper[30278]: E0318 18:18:19.821841 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845ae1c5-4eca-424e-bca5-94dafe5d0407" containerName="dnsmasq-dns" Mar 18 18:18:19.821945 master-0 kubenswrapper[30278]: I0318 18:18:19.821848 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="845ae1c5-4eca-424e-bca5-94dafe5d0407" containerName="dnsmasq-dns" Mar 18 18:18:19.821945 master-0 kubenswrapper[30278]: E0318 18:18:19.821860 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8dc776d-445d-4a68-97fc-3bcfa2d5b332" containerName="mariadb-account-create-update" Mar 18 18:18:19.821945 master-0 kubenswrapper[30278]: I0318 18:18:19.821867 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8dc776d-445d-4a68-97fc-3bcfa2d5b332" containerName="mariadb-account-create-update" Mar 18 18:18:19.822092 master-0 kubenswrapper[30278]: I0318 18:18:19.822061 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="845ae1c5-4eca-424e-bca5-94dafe5d0407" containerName="dnsmasq-dns" Mar 18 18:18:19.822092 master-0 kubenswrapper[30278]: I0318 18:18:19.822078 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8dc776d-445d-4a68-97fc-3bcfa2d5b332" containerName="mariadb-account-create-update" Mar 18 18:18:19.822842 master-0 
kubenswrapper[30278]: I0318 18:18:19.822816 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2ftrf" Mar 18 18:18:19.846020 master-0 kubenswrapper[30278]: I0318 18:18:19.845957 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2ftrf"] Mar 18 18:18:19.997319 master-0 kubenswrapper[30278]: I0318 18:18:19.997241 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tsgc\" (UniqueName: \"kubernetes.io/projected/d6e214f3-e729-4653-bd99-ed6b6989358f-kube-api-access-9tsgc\") pod \"keystone-db-create-2ftrf\" (UID: \"d6e214f3-e729-4653-bd99-ed6b6989358f\") " pod="openstack/keystone-db-create-2ftrf" Mar 18 18:18:19.998001 master-0 kubenswrapper[30278]: I0318 18:18:19.997976 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6e214f3-e729-4653-bd99-ed6b6989358f-operator-scripts\") pod \"keystone-db-create-2ftrf\" (UID: \"d6e214f3-e729-4653-bd99-ed6b6989358f\") " pod="openstack/keystone-db-create-2ftrf" Mar 18 18:18:20.014875 master-0 kubenswrapper[30278]: I0318 18:18:20.014828 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-9h6hb"] Mar 18 18:18:20.020486 master-0 kubenswrapper[30278]: I0318 18:18:20.019360 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-9h6hb" Mar 18 18:18:20.026624 master-0 kubenswrapper[30278]: I0318 18:18:20.026578 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-9h6hb"] Mar 18 18:18:20.103546 master-0 kubenswrapper[30278]: I0318 18:18:20.103394 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tsgc\" (UniqueName: \"kubernetes.io/projected/d6e214f3-e729-4653-bd99-ed6b6989358f-kube-api-access-9tsgc\") pod \"keystone-db-create-2ftrf\" (UID: \"d6e214f3-e729-4653-bd99-ed6b6989358f\") " pod="openstack/keystone-db-create-2ftrf" Mar 18 18:18:20.103546 master-0 kubenswrapper[30278]: I0318 18:18:20.103512 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6e214f3-e729-4653-bd99-ed6b6989358f-operator-scripts\") pod \"keystone-db-create-2ftrf\" (UID: \"d6e214f3-e729-4653-bd99-ed6b6989358f\") " pod="openstack/keystone-db-create-2ftrf" Mar 18 18:18:20.135876 master-0 kubenswrapper[30278]: I0318 18:18:20.134007 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6e214f3-e729-4653-bd99-ed6b6989358f-operator-scripts\") pod \"keystone-db-create-2ftrf\" (UID: \"d6e214f3-e729-4653-bd99-ed6b6989358f\") " pod="openstack/keystone-db-create-2ftrf" Mar 18 18:18:20.147759 master-0 kubenswrapper[30278]: I0318 18:18:20.147657 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-10af-account-create-update-f6v8x"] Mar 18 18:18:20.151117 master-0 kubenswrapper[30278]: I0318 18:18:20.151072 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-10af-account-create-update-f6v8x" Mar 18 18:18:20.158952 master-0 kubenswrapper[30278]: I0318 18:18:20.158606 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tsgc\" (UniqueName: \"kubernetes.io/projected/d6e214f3-e729-4653-bd99-ed6b6989358f-kube-api-access-9tsgc\") pod \"keystone-db-create-2ftrf\" (UID: \"d6e214f3-e729-4653-bd99-ed6b6989358f\") " pod="openstack/keystone-db-create-2ftrf" Mar 18 18:18:20.161687 master-0 kubenswrapper[30278]: I0318 18:18:20.161309 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 18 18:18:20.196228 master-0 kubenswrapper[30278]: I0318 18:18:20.195826 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-10af-account-create-update-f6v8x"] Mar 18 18:18:20.214756 master-0 kubenswrapper[30278]: I0318 18:18:20.214696 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfpn4\" (UniqueName: \"kubernetes.io/projected/7d866f13-989b-4dea-b811-6fa6df274dea-kube-api-access-sfpn4\") pod \"glance-db-create-9h6hb\" (UID: \"7d866f13-989b-4dea-b811-6fa6df274dea\") " pod="openstack/glance-db-create-9h6hb" Mar 18 18:18:20.215028 master-0 kubenswrapper[30278]: I0318 18:18:20.215011 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64f423f1-722c-4545-b52b-8750dab378a3-operator-scripts\") pod \"keystone-10af-account-create-update-f6v8x\" (UID: \"64f423f1-722c-4545-b52b-8750dab378a3\") " pod="openstack/keystone-10af-account-create-update-f6v8x" Mar 18 18:18:20.215165 master-0 kubenswrapper[30278]: I0318 18:18:20.215149 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7d866f13-989b-4dea-b811-6fa6df274dea-operator-scripts\") pod \"glance-db-create-9h6hb\" (UID: \"7d866f13-989b-4dea-b811-6fa6df274dea\") " pod="openstack/glance-db-create-9h6hb" Mar 18 18:18:20.215525 master-0 kubenswrapper[30278]: I0318 18:18:20.215345 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfqqt\" (UniqueName: \"kubernetes.io/projected/64f423f1-722c-4545-b52b-8750dab378a3-kube-api-access-lfqqt\") pod \"keystone-10af-account-create-update-f6v8x\" (UID: \"64f423f1-722c-4545-b52b-8750dab378a3\") " pod="openstack/keystone-10af-account-create-update-f6v8x" Mar 18 18:18:20.256421 master-0 kubenswrapper[30278]: I0318 18:18:20.256330 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c37d-account-create-update-wtp9f"] Mar 18 18:18:20.258040 master-0 kubenswrapper[30278]: I0318 18:18:20.258002 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c37d-account-create-update-wtp9f" Mar 18 18:18:20.278301 master-0 kubenswrapper[30278]: I0318 18:18:20.276596 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 18 18:18:20.296447 master-0 kubenswrapper[30278]: I0318 18:18:20.294398 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c37d-account-create-update-wtp9f"] Mar 18 18:18:20.324044 master-0 kubenswrapper[30278]: I0318 18:18:20.323972 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfpn4\" (UniqueName: \"kubernetes.io/projected/7d866f13-989b-4dea-b811-6fa6df274dea-kube-api-access-sfpn4\") pod \"glance-db-create-9h6hb\" (UID: \"7d866f13-989b-4dea-b811-6fa6df274dea\") " pod="openstack/glance-db-create-9h6hb" Mar 18 18:18:20.324413 master-0 kubenswrapper[30278]: I0318 18:18:20.324349 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64f423f1-722c-4545-b52b-8750dab378a3-operator-scripts\") pod \"keystone-10af-account-create-update-f6v8x\" (UID: \"64f423f1-722c-4545-b52b-8750dab378a3\") " pod="openstack/keystone-10af-account-create-update-f6v8x" Mar 18 18:18:20.324705 master-0 kubenswrapper[30278]: I0318 18:18:20.324619 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d866f13-989b-4dea-b811-6fa6df274dea-operator-scripts\") pod \"glance-db-create-9h6hb\" (UID: \"7d866f13-989b-4dea-b811-6fa6df274dea\") " pod="openstack/glance-db-create-9h6hb" Mar 18 18:18:20.324960 master-0 kubenswrapper[30278]: I0318 18:18:20.324939 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfqqt\" (UniqueName: \"kubernetes.io/projected/64f423f1-722c-4545-b52b-8750dab378a3-kube-api-access-lfqqt\") pod \"keystone-10af-account-create-update-f6v8x\" (UID: \"64f423f1-722c-4545-b52b-8750dab378a3\") " pod="openstack/keystone-10af-account-create-update-f6v8x" Mar 18 18:18:20.328824 master-0 kubenswrapper[30278]: I0318 18:18:20.328776 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64f423f1-722c-4545-b52b-8750dab378a3-operator-scripts\") pod \"keystone-10af-account-create-update-f6v8x\" (UID: \"64f423f1-722c-4545-b52b-8750dab378a3\") " pod="openstack/keystone-10af-account-create-update-f6v8x" Mar 18 18:18:20.337435 master-0 kubenswrapper[30278]: I0318 18:18:20.337389 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d866f13-989b-4dea-b811-6fa6df274dea-operator-scripts\") pod \"glance-db-create-9h6hb\" (UID: \"7d866f13-989b-4dea-b811-6fa6df274dea\") " pod="openstack/glance-db-create-9h6hb" Mar 18 18:18:20.359662 master-0 kubenswrapper[30278]: I0318 
18:18:20.358952 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfpn4\" (UniqueName: \"kubernetes.io/projected/7d866f13-989b-4dea-b811-6fa6df274dea-kube-api-access-sfpn4\") pod \"glance-db-create-9h6hb\" (UID: \"7d866f13-989b-4dea-b811-6fa6df274dea\") " pod="openstack/glance-db-create-9h6hb" Mar 18 18:18:20.369034 master-0 kubenswrapper[30278]: I0318 18:18:20.363681 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfqqt\" (UniqueName: \"kubernetes.io/projected/64f423f1-722c-4545-b52b-8750dab378a3-kube-api-access-lfqqt\") pod \"keystone-10af-account-create-update-f6v8x\" (UID: \"64f423f1-722c-4545-b52b-8750dab378a3\") " pod="openstack/keystone-10af-account-create-update-f6v8x" Mar 18 18:18:20.375086 master-0 kubenswrapper[30278]: I0318 18:18:20.374997 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-x6mcz"] Mar 18 18:18:20.380980 master-0 kubenswrapper[30278]: I0318 18:18:20.380438 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x6mcz" Mar 18 18:18:20.383433 master-0 kubenswrapper[30278]: I0318 18:18:20.383395 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-9h6hb" Mar 18 18:18:20.427586 master-0 kubenswrapper[30278]: I0318 18:18:20.427517 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44531d8d-219a-4896-94c7-79b37cba4c80-operator-scripts\") pod \"glance-c37d-account-create-update-wtp9f\" (UID: \"44531d8d-219a-4896-94c7-79b37cba4c80\") " pod="openstack/glance-c37d-account-create-update-wtp9f" Mar 18 18:18:20.427800 master-0 kubenswrapper[30278]: I0318 18:18:20.427692 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2gkp\" (UniqueName: \"kubernetes.io/projected/44531d8d-219a-4896-94c7-79b37cba4c80-kube-api-access-p2gkp\") pod \"glance-c37d-account-create-update-wtp9f\" (UID: \"44531d8d-219a-4896-94c7-79b37cba4c80\") " pod="openstack/glance-c37d-account-create-update-wtp9f" Mar 18 18:18:20.441383 master-0 kubenswrapper[30278]: I0318 18:18:20.441308 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2ftrf" Mar 18 18:18:20.470100 master-0 kubenswrapper[30278]: I0318 18:18:20.468550 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x6mcz"] Mar 18 18:18:20.497345 master-0 kubenswrapper[30278]: I0318 18:18:20.496888 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8850-account-create-update-vzxfq"] Mar 18 18:18:20.498684 master-0 kubenswrapper[30278]: I0318 18:18:20.498641 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8850-account-create-update-vzxfq" Mar 18 18:18:20.500574 master-0 kubenswrapper[30278]: I0318 18:18:20.500515 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 18 18:18:20.535862 master-0 kubenswrapper[30278]: I0318 18:18:20.532058 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8850-account-create-update-vzxfq"] Mar 18 18:18:20.535862 master-0 kubenswrapper[30278]: I0318 18:18:20.533202 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee65994b-d421-4f38-8556-5084ef3757e1-operator-scripts\") pod \"placement-db-create-x6mcz\" (UID: \"ee65994b-d421-4f38-8556-5084ef3757e1\") " pod="openstack/placement-db-create-x6mcz" Mar 18 18:18:20.535862 master-0 kubenswrapper[30278]: I0318 18:18:20.533311 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44531d8d-219a-4896-94c7-79b37cba4c80-operator-scripts\") pod \"glance-c37d-account-create-update-wtp9f\" (UID: \"44531d8d-219a-4896-94c7-79b37cba4c80\") " pod="openstack/glance-c37d-account-create-update-wtp9f" Mar 18 18:18:20.535862 master-0 kubenswrapper[30278]: I0318 18:18:20.533380 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m4rb\" (UniqueName: \"kubernetes.io/projected/ee65994b-d421-4f38-8556-5084ef3757e1-kube-api-access-6m4rb\") pod \"placement-db-create-x6mcz\" (UID: \"ee65994b-d421-4f38-8556-5084ef3757e1\") " pod="openstack/placement-db-create-x6mcz" Mar 18 18:18:20.536163 master-0 kubenswrapper[30278]: I0318 18:18:20.536030 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2gkp\" (UniqueName: 
\"kubernetes.io/projected/44531d8d-219a-4896-94c7-79b37cba4c80-kube-api-access-p2gkp\") pod \"glance-c37d-account-create-update-wtp9f\" (UID: \"44531d8d-219a-4896-94c7-79b37cba4c80\") " pod="openstack/glance-c37d-account-create-update-wtp9f" Mar 18 18:18:20.536571 master-0 kubenswrapper[30278]: I0318 18:18:20.536457 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-10af-account-create-update-f6v8x" Mar 18 18:18:20.537423 master-0 kubenswrapper[30278]: I0318 18:18:20.536769 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44531d8d-219a-4896-94c7-79b37cba4c80-operator-scripts\") pod \"glance-c37d-account-create-update-wtp9f\" (UID: \"44531d8d-219a-4896-94c7-79b37cba4c80\") " pod="openstack/glance-c37d-account-create-update-wtp9f" Mar 18 18:18:20.557058 master-0 kubenswrapper[30278]: I0318 18:18:20.553986 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2gkp\" (UniqueName: \"kubernetes.io/projected/44531d8d-219a-4896-94c7-79b37cba4c80-kube-api-access-p2gkp\") pod \"glance-c37d-account-create-update-wtp9f\" (UID: \"44531d8d-219a-4896-94c7-79b37cba4c80\") " pod="openstack/glance-c37d-account-create-update-wtp9f" Mar 18 18:18:20.631014 master-0 kubenswrapper[30278]: I0318 18:18:20.629149 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c37d-account-create-update-wtp9f" Mar 18 18:18:20.639049 master-0 kubenswrapper[30278]: I0318 18:18:20.637364 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee65994b-d421-4f38-8556-5084ef3757e1-operator-scripts\") pod \"placement-db-create-x6mcz\" (UID: \"ee65994b-d421-4f38-8556-5084ef3757e1\") " pod="openstack/placement-db-create-x6mcz" Mar 18 18:18:20.639049 master-0 kubenswrapper[30278]: I0318 18:18:20.637448 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m4rb\" (UniqueName: \"kubernetes.io/projected/ee65994b-d421-4f38-8556-5084ef3757e1-kube-api-access-6m4rb\") pod \"placement-db-create-x6mcz\" (UID: \"ee65994b-d421-4f38-8556-5084ef3757e1\") " pod="openstack/placement-db-create-x6mcz" Mar 18 18:18:20.639049 master-0 kubenswrapper[30278]: I0318 18:18:20.637557 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmj59\" (UniqueName: \"kubernetes.io/projected/5867f7c5-a107-4f30-87d3-bb37abf4b2c1-kube-api-access-cmj59\") pod \"placement-8850-account-create-update-vzxfq\" (UID: \"5867f7c5-a107-4f30-87d3-bb37abf4b2c1\") " pod="openstack/placement-8850-account-create-update-vzxfq" Mar 18 18:18:20.639049 master-0 kubenswrapper[30278]: I0318 18:18:20.637617 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5867f7c5-a107-4f30-87d3-bb37abf4b2c1-operator-scripts\") pod \"placement-8850-account-create-update-vzxfq\" (UID: \"5867f7c5-a107-4f30-87d3-bb37abf4b2c1\") " pod="openstack/placement-8850-account-create-update-vzxfq" Mar 18 18:18:20.639049 master-0 kubenswrapper[30278]: I0318 18:18:20.638432 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ee65994b-d421-4f38-8556-5084ef3757e1-operator-scripts\") pod \"placement-db-create-x6mcz\" (UID: \"ee65994b-d421-4f38-8556-5084ef3757e1\") " pod="openstack/placement-db-create-x6mcz" Mar 18 18:18:20.657963 master-0 kubenswrapper[30278]: I0318 18:18:20.657930 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m4rb\" (UniqueName: \"kubernetes.io/projected/ee65994b-d421-4f38-8556-5084ef3757e1-kube-api-access-6m4rb\") pod \"placement-db-create-x6mcz\" (UID: \"ee65994b-d421-4f38-8556-5084ef3757e1\") " pod="openstack/placement-db-create-x6mcz" Mar 18 18:18:20.739428 master-0 kubenswrapper[30278]: I0318 18:18:20.739362 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmj59\" (UniqueName: \"kubernetes.io/projected/5867f7c5-a107-4f30-87d3-bb37abf4b2c1-kube-api-access-cmj59\") pod \"placement-8850-account-create-update-vzxfq\" (UID: \"5867f7c5-a107-4f30-87d3-bb37abf4b2c1\") " pod="openstack/placement-8850-account-create-update-vzxfq" Mar 18 18:18:20.739428 master-0 kubenswrapper[30278]: I0318 18:18:20.739436 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5867f7c5-a107-4f30-87d3-bb37abf4b2c1-operator-scripts\") pod \"placement-8850-account-create-update-vzxfq\" (UID: \"5867f7c5-a107-4f30-87d3-bb37abf4b2c1\") " pod="openstack/placement-8850-account-create-update-vzxfq" Mar 18 18:18:20.740945 master-0 kubenswrapper[30278]: I0318 18:18:20.740316 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5867f7c5-a107-4f30-87d3-bb37abf4b2c1-operator-scripts\") pod \"placement-8850-account-create-update-vzxfq\" (UID: \"5867f7c5-a107-4f30-87d3-bb37abf4b2c1\") " pod="openstack/placement-8850-account-create-update-vzxfq" Mar 18 18:18:20.813394 master-0 kubenswrapper[30278]: I0318 18:18:20.813248 
30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmj59\" (UniqueName: \"kubernetes.io/projected/5867f7c5-a107-4f30-87d3-bb37abf4b2c1-kube-api-access-cmj59\") pod \"placement-8850-account-create-update-vzxfq\" (UID: \"5867f7c5-a107-4f30-87d3-bb37abf4b2c1\") " pod="openstack/placement-8850-account-create-update-vzxfq" Mar 18 18:18:20.830859 master-0 kubenswrapper[30278]: I0318 18:18:20.830801 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x6mcz" Mar 18 18:18:20.848883 master-0 kubenswrapper[30278]: I0318 18:18:20.848366 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8850-account-create-update-vzxfq" Mar 18 18:18:21.338765 master-0 kubenswrapper[30278]: I0318 18:18:21.338339 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-9h6hb"] Mar 18 18:18:21.358732 master-0 kubenswrapper[30278]: W0318 18:18:21.358168 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64f423f1_722c_4545_b52b_8750dab378a3.slice/crio-ff61ac93a7b529c0c7bee9def7b1d7b23ee91f0270e7828de036643edf6cbec5 WatchSource:0}: Error finding container ff61ac93a7b529c0c7bee9def7b1d7b23ee91f0270e7828de036643edf6cbec5: Status 404 returned error can't find the container with id ff61ac93a7b529c0c7bee9def7b1d7b23ee91f0270e7828de036643edf6cbec5 Mar 18 18:18:21.369841 master-0 kubenswrapper[30278]: W0318 18:18:21.369643 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6e214f3_e729_4653_bd99_ed6b6989358f.slice/crio-fe577146abbeb22d8649f4b76f9a8a895709d3a74e821419c3d01f91a16327c0 WatchSource:0}: Error finding container fe577146abbeb22d8649f4b76f9a8a895709d3a74e821419c3d01f91a16327c0: Status 404 returned error can't find the container with id 
fe577146abbeb22d8649f4b76f9a8a895709d3a74e821419c3d01f91a16327c0 Mar 18 18:18:21.382399 master-0 kubenswrapper[30278]: I0318 18:18:21.382315 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-10af-account-create-update-f6v8x"] Mar 18 18:18:21.393903 master-0 kubenswrapper[30278]: I0318 18:18:21.393141 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-2ftrf"] Mar 18 18:18:21.446535 master-0 kubenswrapper[30278]: I0318 18:18:21.446486 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9h6hb" event={"ID":"7d866f13-989b-4dea-b811-6fa6df274dea","Type":"ContainerStarted","Data":"8e9dffbeec6f2581f57ea2f4652e394d78b73ea66dd28a2ee5fd4d106d55e8b9"} Mar 18 18:18:21.462877 master-0 kubenswrapper[30278]: I0318 18:18:21.462806 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-10af-account-create-update-f6v8x" event={"ID":"64f423f1-722c-4545-b52b-8750dab378a3","Type":"ContainerStarted","Data":"ff61ac93a7b529c0c7bee9def7b1d7b23ee91f0270e7828de036643edf6cbec5"} Mar 18 18:18:21.467786 master-0 kubenswrapper[30278]: I0318 18:18:21.465470 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2ftrf" event={"ID":"d6e214f3-e729-4653-bd99-ed6b6989358f","Type":"ContainerStarted","Data":"fe577146abbeb22d8649f4b76f9a8a895709d3a74e821419c3d01f91a16327c0"} Mar 18 18:18:21.504311 master-0 kubenswrapper[30278]: I0318 18:18:21.504243 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-x6mcz"] Mar 18 18:18:21.523432 master-0 kubenswrapper[30278]: I0318 18:18:21.523235 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8850-account-create-update-vzxfq"] Mar 18 18:18:21.525309 master-0 kubenswrapper[30278]: W0318 18:18:21.525257 30278 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee65994b_d421_4f38_8556_5084ef3757e1.slice/crio-6d36a3aeae2e8d97481dc2d30fbdc71816a5917358508e439296151a1ad27e4d WatchSource:0}: Error finding container 6d36a3aeae2e8d97481dc2d30fbdc71816a5917358508e439296151a1ad27e4d: Status 404 returned error can't find the container with id 6d36a3aeae2e8d97481dc2d30fbdc71816a5917358508e439296151a1ad27e4d Mar 18 18:18:21.588814 master-0 kubenswrapper[30278]: W0318 18:18:21.586598 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5867f7c5_a107_4f30_87d3_bb37abf4b2c1.slice/crio-5527c218dbced3ab60b1cdbc239fff475bfa541d42a763139f39b1bc50488d56 WatchSource:0}: Error finding container 5527c218dbced3ab60b1cdbc239fff475bfa541d42a763139f39b1bc50488d56: Status 404 returned error can't find the container with id 5527c218dbced3ab60b1cdbc239fff475bfa541d42a763139f39b1bc50488d56 Mar 18 18:18:21.671224 master-0 kubenswrapper[30278]: I0318 18:18:21.671168 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c37d-account-create-update-wtp9f"] Mar 18 18:18:21.699544 master-0 kubenswrapper[30278]: I0318 18:18:21.699456 30278 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-998757459-j6h5k" podUID="845ae1c5-4eca-424e-bca5-94dafe5d0407" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.182:5353: i/o timeout" Mar 18 18:18:22.150673 master-0 kubenswrapper[30278]: I0318 18:18:22.150505 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hh2hb"] Mar 18 18:18:22.159872 master-0 kubenswrapper[30278]: I0318 18:18:22.159801 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hh2hb"] Mar 18 18:18:22.476712 master-0 kubenswrapper[30278]: I0318 18:18:22.476574 30278 generic.go:334] "Generic (PLEG): container finished" 
podID="d6e214f3-e729-4653-bd99-ed6b6989358f" containerID="b2e6373f6390c9ebb5f3d8ed5a43e0473d4a0b2c4643b660c0a2fb745832002a" exitCode=0 Mar 18 18:18:22.479857 master-0 kubenswrapper[30278]: I0318 18:18:22.476631 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2ftrf" event={"ID":"d6e214f3-e729-4653-bd99-ed6b6989358f","Type":"ContainerDied","Data":"b2e6373f6390c9ebb5f3d8ed5a43e0473d4a0b2c4643b660c0a2fb745832002a"} Mar 18 18:18:22.482189 master-0 kubenswrapper[30278]: I0318 18:18:22.482161 30278 generic.go:334] "Generic (PLEG): container finished" podID="ee65994b-d421-4f38-8556-5084ef3757e1" containerID="babdbc9e4cc0ba0181e6869743d3ef26cc340286dbb2a2ae8769bfa7b709f2c9" exitCode=0 Mar 18 18:18:22.482412 master-0 kubenswrapper[30278]: I0318 18:18:22.482211 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x6mcz" event={"ID":"ee65994b-d421-4f38-8556-5084ef3757e1","Type":"ContainerDied","Data":"babdbc9e4cc0ba0181e6869743d3ef26cc340286dbb2a2ae8769bfa7b709f2c9"} Mar 18 18:18:22.482412 master-0 kubenswrapper[30278]: I0318 18:18:22.482228 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x6mcz" event={"ID":"ee65994b-d421-4f38-8556-5084ef3757e1","Type":"ContainerStarted","Data":"6d36a3aeae2e8d97481dc2d30fbdc71816a5917358508e439296151a1ad27e4d"} Mar 18 18:18:22.488808 master-0 kubenswrapper[30278]: I0318 18:18:22.483883 30278 generic.go:334] "Generic (PLEG): container finished" podID="7d866f13-989b-4dea-b811-6fa6df274dea" containerID="fcf10484e7ab380ea39f0e74aacb2240dd4a92e87873aea3f132dc4a88cc53cb" exitCode=0 Mar 18 18:18:22.488808 master-0 kubenswrapper[30278]: I0318 18:18:22.483926 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9h6hb" event={"ID":"7d866f13-989b-4dea-b811-6fa6df274dea","Type":"ContainerDied","Data":"fcf10484e7ab380ea39f0e74aacb2240dd4a92e87873aea3f132dc4a88cc53cb"} Mar 18 18:18:22.488808 master-0 
kubenswrapper[30278]: I0318 18:18:22.487506 30278 generic.go:334] "Generic (PLEG): container finished" podID="44531d8d-219a-4896-94c7-79b37cba4c80" containerID="17c455f7a1e0662923d9d419c0c5e1f9cd1590574283722215c2a2385e184da9" exitCode=0 Mar 18 18:18:22.488808 master-0 kubenswrapper[30278]: I0318 18:18:22.487684 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c37d-account-create-update-wtp9f" event={"ID":"44531d8d-219a-4896-94c7-79b37cba4c80","Type":"ContainerDied","Data":"17c455f7a1e0662923d9d419c0c5e1f9cd1590574283722215c2a2385e184da9"} Mar 18 18:18:22.488808 master-0 kubenswrapper[30278]: I0318 18:18:22.487716 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c37d-account-create-update-wtp9f" event={"ID":"44531d8d-219a-4896-94c7-79b37cba4c80","Type":"ContainerStarted","Data":"a1630dc858873ee2013de105babb094ebc07b2398b0333bdfcd8e66fa31deee0"} Mar 18 18:18:22.490596 master-0 kubenswrapper[30278]: I0318 18:18:22.490553 30278 generic.go:334] "Generic (PLEG): container finished" podID="5867f7c5-a107-4f30-87d3-bb37abf4b2c1" containerID="eef969672b105c7c46bbb8999d5265352343a6d3dd55529e0467d35f26275a9c" exitCode=0 Mar 18 18:18:22.490785 master-0 kubenswrapper[30278]: I0318 18:18:22.490625 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8850-account-create-update-vzxfq" event={"ID":"5867f7c5-a107-4f30-87d3-bb37abf4b2c1","Type":"ContainerDied","Data":"eef969672b105c7c46bbb8999d5265352343a6d3dd55529e0467d35f26275a9c"} Mar 18 18:18:22.490785 master-0 kubenswrapper[30278]: I0318 18:18:22.490647 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8850-account-create-update-vzxfq" event={"ID":"5867f7c5-a107-4f30-87d3-bb37abf4b2c1","Type":"ContainerStarted","Data":"5527c218dbced3ab60b1cdbc239fff475bfa541d42a763139f39b1bc50488d56"} Mar 18 18:18:22.497469 master-0 kubenswrapper[30278]: I0318 18:18:22.497331 30278 generic.go:334] "Generic (PLEG): container 
finished" podID="64f423f1-722c-4545-b52b-8750dab378a3" containerID="646a8bdc89e9b0c5db39f69adff41c6e67f51b1fce51c9e3c64b0f5e8940622b" exitCode=0 Mar 18 18:18:22.497469 master-0 kubenswrapper[30278]: I0318 18:18:22.497395 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-10af-account-create-update-f6v8x" event={"ID":"64f423f1-722c-4545-b52b-8750dab378a3","Type":"ContainerDied","Data":"646a8bdc89e9b0c5db39f69adff41c6e67f51b1fce51c9e3c64b0f5e8940622b"} Mar 18 18:18:23.072298 master-0 kubenswrapper[30278]: I0318 18:18:23.072192 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8dc776d-445d-4a68-97fc-3bcfa2d5b332" path="/var/lib/kubelet/pods/f8dc776d-445d-4a68-97fc-3bcfa2d5b332/volumes" Mar 18 18:18:24.165540 master-0 kubenswrapper[30278]: I0318 18:18:24.165462 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-2ftrf" Mar 18 18:18:24.315418 master-0 kubenswrapper[30278]: I0318 18:18:24.315347 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tsgc\" (UniqueName: \"kubernetes.io/projected/d6e214f3-e729-4653-bd99-ed6b6989358f-kube-api-access-9tsgc\") pod \"d6e214f3-e729-4653-bd99-ed6b6989358f\" (UID: \"d6e214f3-e729-4653-bd99-ed6b6989358f\") " Mar 18 18:18:24.315778 master-0 kubenswrapper[30278]: I0318 18:18:24.315678 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6e214f3-e729-4653-bd99-ed6b6989358f-operator-scripts\") pod \"d6e214f3-e729-4653-bd99-ed6b6989358f\" (UID: \"d6e214f3-e729-4653-bd99-ed6b6989358f\") " Mar 18 18:18:24.316642 master-0 kubenswrapper[30278]: I0318 18:18:24.316604 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6e214f3-e729-4653-bd99-ed6b6989358f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"d6e214f3-e729-4653-bd99-ed6b6989358f" (UID: "d6e214f3-e729-4653-bd99-ed6b6989358f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:24.331944 master-0 kubenswrapper[30278]: I0318 18:18:24.331871 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6e214f3-e729-4653-bd99-ed6b6989358f-kube-api-access-9tsgc" (OuterVolumeSpecName: "kube-api-access-9tsgc") pod "d6e214f3-e729-4653-bd99-ed6b6989358f" (UID: "d6e214f3-e729-4653-bd99-ed6b6989358f"). InnerVolumeSpecName "kube-api-access-9tsgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:24.423915 master-0 kubenswrapper[30278]: I0318 18:18:24.422165 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6e214f3-e729-4653-bd99-ed6b6989358f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:24.423915 master-0 kubenswrapper[30278]: I0318 18:18:24.422219 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tsgc\" (UniqueName: \"kubernetes.io/projected/d6e214f3-e729-4653-bd99-ed6b6989358f-kube-api-access-9tsgc\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:24.562044 master-0 kubenswrapper[30278]: I0318 18:18:24.561983 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-2ftrf" event={"ID":"d6e214f3-e729-4653-bd99-ed6b6989358f","Type":"ContainerDied","Data":"fe577146abbeb22d8649f4b76f9a8a895709d3a74e821419c3d01f91a16327c0"} Mar 18 18:18:24.562166 master-0 kubenswrapper[30278]: I0318 18:18:24.562044 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe577146abbeb22d8649f4b76f9a8a895709d3a74e821419c3d01f91a16327c0" Mar 18 18:18:24.562166 master-0 kubenswrapper[30278]: I0318 18:18:24.562114 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-2ftrf" Mar 18 18:18:24.690639 master-0 kubenswrapper[30278]: I0318 18:18:24.690568 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c37d-account-create-update-wtp9f" Mar 18 18:18:24.703288 master-0 kubenswrapper[30278]: I0318 18:18:24.703214 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8850-account-create-update-vzxfq" Mar 18 18:18:24.720669 master-0 kubenswrapper[30278]: I0318 18:18:24.716001 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-10af-account-create-update-f6v8x" Mar 18 18:18:24.729521 master-0 kubenswrapper[30278]: I0318 18:18:24.729440 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44531d8d-219a-4896-94c7-79b37cba4c80-operator-scripts\") pod \"44531d8d-219a-4896-94c7-79b37cba4c80\" (UID: \"44531d8d-219a-4896-94c7-79b37cba4c80\") " Mar 18 18:18:24.729727 master-0 kubenswrapper[30278]: I0318 18:18:24.729691 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2gkp\" (UniqueName: \"kubernetes.io/projected/44531d8d-219a-4896-94c7-79b37cba4c80-kube-api-access-p2gkp\") pod \"44531d8d-219a-4896-94c7-79b37cba4c80\" (UID: \"44531d8d-219a-4896-94c7-79b37cba4c80\") " Mar 18 18:18:24.731627 master-0 kubenswrapper[30278]: I0318 18:18:24.731577 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44531d8d-219a-4896-94c7-79b37cba4c80-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44531d8d-219a-4896-94c7-79b37cba4c80" (UID: "44531d8d-219a-4896-94c7-79b37cba4c80"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:24.737003 master-0 kubenswrapper[30278]: I0318 18:18:24.736910 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44531d8d-219a-4896-94c7-79b37cba4c80-kube-api-access-p2gkp" (OuterVolumeSpecName: "kube-api-access-p2gkp") pod "44531d8d-219a-4896-94c7-79b37cba4c80" (UID: "44531d8d-219a-4896-94c7-79b37cba4c80"). InnerVolumeSpecName "kube-api-access-p2gkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:24.745163 master-0 kubenswrapper[30278]: I0318 18:18:24.745104 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9h6hb" Mar 18 18:18:24.754776 master-0 kubenswrapper[30278]: I0318 18:18:24.754721 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2gkp\" (UniqueName: \"kubernetes.io/projected/44531d8d-219a-4896-94c7-79b37cba4c80-kube-api-access-p2gkp\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:24.754776 master-0 kubenswrapper[30278]: I0318 18:18:24.754769 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44531d8d-219a-4896-94c7-79b37cba4c80-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:24.772132 master-0 kubenswrapper[30278]: I0318 18:18:24.772058 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-x6mcz" Mar 18 18:18:24.865494 master-0 kubenswrapper[30278]: I0318 18:18:24.865166 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d866f13-989b-4dea-b811-6fa6df274dea-operator-scripts\") pod \"7d866f13-989b-4dea-b811-6fa6df274dea\" (UID: \"7d866f13-989b-4dea-b811-6fa6df274dea\") " Mar 18 18:18:24.865494 master-0 kubenswrapper[30278]: I0318 18:18:24.865343 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64f423f1-722c-4545-b52b-8750dab378a3-operator-scripts\") pod \"64f423f1-722c-4545-b52b-8750dab378a3\" (UID: \"64f423f1-722c-4545-b52b-8750dab378a3\") " Mar 18 18:18:24.865494 master-0 kubenswrapper[30278]: I0318 18:18:24.865435 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfpn4\" (UniqueName: \"kubernetes.io/projected/7d866f13-989b-4dea-b811-6fa6df274dea-kube-api-access-sfpn4\") pod \"7d866f13-989b-4dea-b811-6fa6df274dea\" (UID: \"7d866f13-989b-4dea-b811-6fa6df274dea\") " Mar 18 18:18:24.865494 master-0 kubenswrapper[30278]: I0318 18:18:24.865494 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m4rb\" (UniqueName: \"kubernetes.io/projected/ee65994b-d421-4f38-8556-5084ef3757e1-kube-api-access-6m4rb\") pod \"ee65994b-d421-4f38-8556-5084ef3757e1\" (UID: \"ee65994b-d421-4f38-8556-5084ef3757e1\") " Mar 18 18:18:24.865884 master-0 kubenswrapper[30278]: I0318 18:18:24.865531 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee65994b-d421-4f38-8556-5084ef3757e1-operator-scripts\") pod \"ee65994b-d421-4f38-8556-5084ef3757e1\" (UID: \"ee65994b-d421-4f38-8556-5084ef3757e1\") " Mar 18 18:18:24.865884 master-0 
kubenswrapper[30278]: I0318 18:18:24.865576 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5867f7c5-a107-4f30-87d3-bb37abf4b2c1-operator-scripts\") pod \"5867f7c5-a107-4f30-87d3-bb37abf4b2c1\" (UID: \"5867f7c5-a107-4f30-87d3-bb37abf4b2c1\") " Mar 18 18:18:24.865884 master-0 kubenswrapper[30278]: I0318 18:18:24.865643 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmj59\" (UniqueName: \"kubernetes.io/projected/5867f7c5-a107-4f30-87d3-bb37abf4b2c1-kube-api-access-cmj59\") pod \"5867f7c5-a107-4f30-87d3-bb37abf4b2c1\" (UID: \"5867f7c5-a107-4f30-87d3-bb37abf4b2c1\") " Mar 18 18:18:24.865884 master-0 kubenswrapper[30278]: I0318 18:18:24.865686 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfqqt\" (UniqueName: \"kubernetes.io/projected/64f423f1-722c-4545-b52b-8750dab378a3-kube-api-access-lfqqt\") pod \"64f423f1-722c-4545-b52b-8750dab378a3\" (UID: \"64f423f1-722c-4545-b52b-8750dab378a3\") " Mar 18 18:18:24.868224 master-0 kubenswrapper[30278]: I0318 18:18:24.868153 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64f423f1-722c-4545-b52b-8750dab378a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64f423f1-722c-4545-b52b-8750dab378a3" (UID: "64f423f1-722c-4545-b52b-8750dab378a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:24.868339 master-0 kubenswrapper[30278]: I0318 18:18:24.868233 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d866f13-989b-4dea-b811-6fa6df274dea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d866f13-989b-4dea-b811-6fa6df274dea" (UID: "7d866f13-989b-4dea-b811-6fa6df274dea"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:24.870322 master-0 kubenswrapper[30278]: I0318 18:18:24.868816 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee65994b-d421-4f38-8556-5084ef3757e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee65994b-d421-4f38-8556-5084ef3757e1" (UID: "ee65994b-d421-4f38-8556-5084ef3757e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:24.870322 master-0 kubenswrapper[30278]: I0318 18:18:24.869695 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5867f7c5-a107-4f30-87d3-bb37abf4b2c1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5867f7c5-a107-4f30-87d3-bb37abf4b2c1" (UID: "5867f7c5-a107-4f30-87d3-bb37abf4b2c1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:24.875053 master-0 kubenswrapper[30278]: I0318 18:18:24.873517 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5867f7c5-a107-4f30-87d3-bb37abf4b2c1-kube-api-access-cmj59" (OuterVolumeSpecName: "kube-api-access-cmj59") pod "5867f7c5-a107-4f30-87d3-bb37abf4b2c1" (UID: "5867f7c5-a107-4f30-87d3-bb37abf4b2c1"). InnerVolumeSpecName "kube-api-access-cmj59". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:24.876844 master-0 kubenswrapper[30278]: I0318 18:18:24.876437 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d866f13-989b-4dea-b811-6fa6df274dea-kube-api-access-sfpn4" (OuterVolumeSpecName: "kube-api-access-sfpn4") pod "7d866f13-989b-4dea-b811-6fa6df274dea" (UID: "7d866f13-989b-4dea-b811-6fa6df274dea"). InnerVolumeSpecName "kube-api-access-sfpn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:24.877300 master-0 kubenswrapper[30278]: I0318 18:18:24.877232 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee65994b-d421-4f38-8556-5084ef3757e1-kube-api-access-6m4rb" (OuterVolumeSpecName: "kube-api-access-6m4rb") pod "ee65994b-d421-4f38-8556-5084ef3757e1" (UID: "ee65994b-d421-4f38-8556-5084ef3757e1"). InnerVolumeSpecName "kube-api-access-6m4rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:24.879352 master-0 kubenswrapper[30278]: I0318 18:18:24.878063 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64f423f1-722c-4545-b52b-8750dab378a3-kube-api-access-lfqqt" (OuterVolumeSpecName: "kube-api-access-lfqqt") pod "64f423f1-722c-4545-b52b-8750dab378a3" (UID: "64f423f1-722c-4545-b52b-8750dab378a3"). InnerVolumeSpecName "kube-api-access-lfqqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:24.891310 master-0 kubenswrapper[30278]: I0318 18:18:24.889040 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d866f13-989b-4dea-b811-6fa6df274dea-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:24.891310 master-0 kubenswrapper[30278]: I0318 18:18:24.889069 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64f423f1-722c-4545-b52b-8750dab378a3-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:24.891310 master-0 kubenswrapper[30278]: I0318 18:18:24.889080 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfpn4\" (UniqueName: \"kubernetes.io/projected/7d866f13-989b-4dea-b811-6fa6df274dea-kube-api-access-sfpn4\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:24.891310 master-0 kubenswrapper[30278]: I0318 18:18:24.889094 30278 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m4rb\" (UniqueName: \"kubernetes.io/projected/ee65994b-d421-4f38-8556-5084ef3757e1-kube-api-access-6m4rb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:24.891310 master-0 kubenswrapper[30278]: I0318 18:18:24.889108 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee65994b-d421-4f38-8556-5084ef3757e1-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:24.891310 master-0 kubenswrapper[30278]: I0318 18:18:24.889118 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5867f7c5-a107-4f30-87d3-bb37abf4b2c1-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:24.891310 master-0 kubenswrapper[30278]: I0318 18:18:24.889131 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmj59\" (UniqueName: \"kubernetes.io/projected/5867f7c5-a107-4f30-87d3-bb37abf4b2c1-kube-api-access-cmj59\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:24.891310 master-0 kubenswrapper[30278]: I0318 18:18:24.889140 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfqqt\" (UniqueName: \"kubernetes.io/projected/64f423f1-722c-4545-b52b-8750dab378a3-kube-api-access-lfqqt\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:25.299303 master-0 kubenswrapper[30278]: I0318 18:18:25.299189 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:18:25.306764 master-0 kubenswrapper[30278]: I0318 18:18:25.306702 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/ff27830b-378b-4338-ac41-041a9d78ed62-etc-swift\") pod \"swift-storage-0\" (UID: \"ff27830b-378b-4338-ac41-041a9d78ed62\") " pod="openstack/swift-storage-0" Mar 18 18:18:25.420435 master-0 kubenswrapper[30278]: I0318 18:18:25.420204 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 18 18:18:25.587093 master-0 kubenswrapper[30278]: I0318 18:18:25.587014 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-9h6hb" event={"ID":"7d866f13-989b-4dea-b811-6fa6df274dea","Type":"ContainerDied","Data":"8e9dffbeec6f2581f57ea2f4652e394d78b73ea66dd28a2ee5fd4d106d55e8b9"} Mar 18 18:18:25.587093 master-0 kubenswrapper[30278]: I0318 18:18:25.587089 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e9dffbeec6f2581f57ea2f4652e394d78b73ea66dd28a2ee5fd4d106d55e8b9" Mar 18 18:18:25.587526 master-0 kubenswrapper[30278]: I0318 18:18:25.587258 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-9h6hb" Mar 18 18:18:25.597698 master-0 kubenswrapper[30278]: I0318 18:18:25.597623 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c37d-account-create-update-wtp9f" Mar 18 18:18:25.599731 master-0 kubenswrapper[30278]: I0318 18:18:25.599507 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c37d-account-create-update-wtp9f" event={"ID":"44531d8d-219a-4896-94c7-79b37cba4c80","Type":"ContainerDied","Data":"a1630dc858873ee2013de105babb094ebc07b2398b0333bdfcd8e66fa31deee0"} Mar 18 18:18:25.599731 master-0 kubenswrapper[30278]: I0318 18:18:25.599573 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1630dc858873ee2013de105babb094ebc07b2398b0333bdfcd8e66fa31deee0" Mar 18 18:18:25.613057 master-0 kubenswrapper[30278]: I0318 18:18:25.611653 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8850-account-create-update-vzxfq" Mar 18 18:18:25.613057 master-0 kubenswrapper[30278]: I0318 18:18:25.611641 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8850-account-create-update-vzxfq" event={"ID":"5867f7c5-a107-4f30-87d3-bb37abf4b2c1","Type":"ContainerDied","Data":"5527c218dbced3ab60b1cdbc239fff475bfa541d42a763139f39b1bc50488d56"} Mar 18 18:18:25.613057 master-0 kubenswrapper[30278]: I0318 18:18:25.612839 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5527c218dbced3ab60b1cdbc239fff475bfa541d42a763139f39b1bc50488d56" Mar 18 18:18:25.615381 master-0 kubenswrapper[30278]: I0318 18:18:25.615327 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-10af-account-create-update-f6v8x" event={"ID":"64f423f1-722c-4545-b52b-8750dab378a3","Type":"ContainerDied","Data":"ff61ac93a7b529c0c7bee9def7b1d7b23ee91f0270e7828de036643edf6cbec5"} Mar 18 18:18:25.615446 master-0 kubenswrapper[30278]: I0318 18:18:25.615398 30278 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ff61ac93a7b529c0c7bee9def7b1d7b23ee91f0270e7828de036643edf6cbec5" Mar 18 18:18:25.615520 master-0 kubenswrapper[30278]: I0318 18:18:25.615499 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-10af-account-create-update-f6v8x" Mar 18 18:18:25.625071 master-0 kubenswrapper[30278]: I0318 18:18:25.624917 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-x6mcz" event={"ID":"ee65994b-d421-4f38-8556-5084ef3757e1","Type":"ContainerDied","Data":"6d36a3aeae2e8d97481dc2d30fbdc71816a5917358508e439296151a1ad27e4d"} Mar 18 18:18:25.625071 master-0 kubenswrapper[30278]: I0318 18:18:25.624975 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d36a3aeae2e8d97481dc2d30fbdc71816a5917358508e439296151a1ad27e4d" Mar 18 18:18:25.625071 master-0 kubenswrapper[30278]: I0318 18:18:25.625053 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-x6mcz" Mar 18 18:18:25.924918 master-0 kubenswrapper[30278]: I0318 18:18:25.924853 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 18 18:18:25.925548 master-0 kubenswrapper[30278]: W0318 18:18:25.925478 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff27830b_378b_4338_ac41_041a9d78ed62.slice/crio-383f46d1fe3733006371a575621477e791a552cb759a2c15ed380090d2b7d401 WatchSource:0}: Error finding container 383f46d1fe3733006371a575621477e791a552cb759a2c15ed380090d2b7d401: Status 404 returned error can't find the container with id 383f46d1fe3733006371a575621477e791a552cb759a2c15ed380090d2b7d401 Mar 18 18:18:26.644353 master-0 kubenswrapper[30278]: I0318 18:18:26.644302 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"383f46d1fe3733006371a575621477e791a552cb759a2c15ed380090d2b7d401"} Mar 18 18:18:27.429383 master-0 kubenswrapper[30278]: I0318 18:18:27.429304 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sd6rg"] Mar 18 18:18:27.430159 master-0 kubenswrapper[30278]: E0318 18:18:27.430118 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee65994b-d421-4f38-8556-5084ef3757e1" containerName="mariadb-database-create" Mar 18 18:18:27.430159 master-0 kubenswrapper[30278]: I0318 18:18:27.430155 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee65994b-d421-4f38-8556-5084ef3757e1" containerName="mariadb-database-create" Mar 18 18:18:27.430285 master-0 kubenswrapper[30278]: E0318 18:18:27.430206 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6e214f3-e729-4653-bd99-ed6b6989358f" containerName="mariadb-database-create" Mar 18 18:18:27.430413 master-0 kubenswrapper[30278]: I0318 18:18:27.430265 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6e214f3-e729-4653-bd99-ed6b6989358f" containerName="mariadb-database-create" Mar 18 18:18:27.430467 master-0 kubenswrapper[30278]: E0318 18:18:27.430421 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44531d8d-219a-4896-94c7-79b37cba4c80" containerName="mariadb-account-create-update" Mar 18 18:18:27.430467 master-0 kubenswrapper[30278]: I0318 18:18:27.430432 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="44531d8d-219a-4896-94c7-79b37cba4c80" containerName="mariadb-account-create-update" Mar 18 18:18:27.430467 master-0 kubenswrapper[30278]: E0318 18:18:27.430451 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64f423f1-722c-4545-b52b-8750dab378a3" containerName="mariadb-account-create-update" Mar 18 18:18:27.430467 master-0 kubenswrapper[30278]: I0318 18:18:27.430458 30278 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="64f423f1-722c-4545-b52b-8750dab378a3" containerName="mariadb-account-create-update" Mar 18 18:18:27.430743 master-0 kubenswrapper[30278]: E0318 18:18:27.430478 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5867f7c5-a107-4f30-87d3-bb37abf4b2c1" containerName="mariadb-account-create-update" Mar 18 18:18:27.430743 master-0 kubenswrapper[30278]: I0318 18:18:27.430487 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5867f7c5-a107-4f30-87d3-bb37abf4b2c1" containerName="mariadb-account-create-update" Mar 18 18:18:27.430743 master-0 kubenswrapper[30278]: E0318 18:18:27.430500 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d866f13-989b-4dea-b811-6fa6df274dea" containerName="mariadb-database-create" Mar 18 18:18:27.430743 master-0 kubenswrapper[30278]: I0318 18:18:27.430507 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d866f13-989b-4dea-b811-6fa6df274dea" containerName="mariadb-database-create" Mar 18 18:18:27.430897 master-0 kubenswrapper[30278]: I0318 18:18:27.430788 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="5867f7c5-a107-4f30-87d3-bb37abf4b2c1" containerName="mariadb-account-create-update" Mar 18 18:18:27.430897 master-0 kubenswrapper[30278]: I0318 18:18:27.430809 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6e214f3-e729-4653-bd99-ed6b6989358f" containerName="mariadb-database-create" Mar 18 18:18:27.430897 master-0 kubenswrapper[30278]: I0318 18:18:27.430848 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="64f423f1-722c-4545-b52b-8750dab378a3" containerName="mariadb-account-create-update" Mar 18 18:18:27.430897 master-0 kubenswrapper[30278]: I0318 18:18:27.430869 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee65994b-d421-4f38-8556-5084ef3757e1" containerName="mariadb-database-create" Mar 18 18:18:27.430897 master-0 kubenswrapper[30278]: I0318 
18:18:27.430889 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d866f13-989b-4dea-b811-6fa6df274dea" containerName="mariadb-database-create" Mar 18 18:18:27.431659 master-0 kubenswrapper[30278]: I0318 18:18:27.430908 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="44531d8d-219a-4896-94c7-79b37cba4c80" containerName="mariadb-account-create-update" Mar 18 18:18:27.432060 master-0 kubenswrapper[30278]: I0318 18:18:27.431974 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sd6rg" Mar 18 18:18:27.439918 master-0 kubenswrapper[30278]: I0318 18:18:27.439623 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 18 18:18:27.509722 master-0 kubenswrapper[30278]: I0318 18:18:27.509612 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sd6rg"] Mar 18 18:18:27.572123 master-0 kubenswrapper[30278]: I0318 18:18:27.572038 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72dc0432-f429-4dbf-b1ce-d421425d6ca3-operator-scripts\") pod \"root-account-create-update-sd6rg\" (UID: \"72dc0432-f429-4dbf-b1ce-d421425d6ca3\") " pod="openstack/root-account-create-update-sd6rg" Mar 18 18:18:27.572513 master-0 kubenswrapper[30278]: I0318 18:18:27.572224 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6clmb\" (UniqueName: \"kubernetes.io/projected/72dc0432-f429-4dbf-b1ce-d421425d6ca3-kube-api-access-6clmb\") pod \"root-account-create-update-sd6rg\" (UID: \"72dc0432-f429-4dbf-b1ce-d421425d6ca3\") " pod="openstack/root-account-create-update-sd6rg" Mar 18 18:18:27.674026 master-0 kubenswrapper[30278]: I0318 18:18:27.673962 30278 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72dc0432-f429-4dbf-b1ce-d421425d6ca3-operator-scripts\") pod \"root-account-create-update-sd6rg\" (UID: \"72dc0432-f429-4dbf-b1ce-d421425d6ca3\") " pod="openstack/root-account-create-update-sd6rg" Mar 18 18:18:27.674770 master-0 kubenswrapper[30278]: I0318 18:18:27.674117 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6clmb\" (UniqueName: \"kubernetes.io/projected/72dc0432-f429-4dbf-b1ce-d421425d6ca3-kube-api-access-6clmb\") pod \"root-account-create-update-sd6rg\" (UID: \"72dc0432-f429-4dbf-b1ce-d421425d6ca3\") " pod="openstack/root-account-create-update-sd6rg" Mar 18 18:18:27.674890 master-0 kubenswrapper[30278]: I0318 18:18:27.674853 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72dc0432-f429-4dbf-b1ce-d421425d6ca3-operator-scripts\") pod \"root-account-create-update-sd6rg\" (UID: \"72dc0432-f429-4dbf-b1ce-d421425d6ca3\") " pod="openstack/root-account-create-update-sd6rg" Mar 18 18:18:27.701919 master-0 kubenswrapper[30278]: I0318 18:18:27.701750 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6clmb\" (UniqueName: \"kubernetes.io/projected/72dc0432-f429-4dbf-b1ce-d421425d6ca3-kube-api-access-6clmb\") pod \"root-account-create-update-sd6rg\" (UID: \"72dc0432-f429-4dbf-b1ce-d421425d6ca3\") " pod="openstack/root-account-create-update-sd6rg" Mar 18 18:18:27.765656 master-0 kubenswrapper[30278]: I0318 18:18:27.765263 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sd6rg" Mar 18 18:18:28.479043 master-0 kubenswrapper[30278]: I0318 18:18:28.479003 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 18 18:18:28.499524 master-0 kubenswrapper[30278]: I0318 18:18:28.498524 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sd6rg"] Mar 18 18:18:28.518060 master-0 kubenswrapper[30278]: W0318 18:18:28.518010 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72dc0432_f429_4dbf_b1ce_d421425d6ca3.slice/crio-2ad476cbd750fb4997f87dbf292fa4b497ff60ad5b1d15eb128379622f4c33bf WatchSource:0}: Error finding container 2ad476cbd750fb4997f87dbf292fa4b497ff60ad5b1d15eb128379622f4c33bf: Status 404 returned error can't find the container with id 2ad476cbd750fb4997f87dbf292fa4b497ff60ad5b1d15eb128379622f4c33bf Mar 18 18:18:28.686029 master-0 kubenswrapper[30278]: I0318 18:18:28.685937 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sd6rg" event={"ID":"72dc0432-f429-4dbf-b1ce-d421425d6ca3","Type":"ContainerStarted","Data":"2ad476cbd750fb4997f87dbf292fa4b497ff60ad5b1d15eb128379622f4c33bf"} Mar 18 18:18:28.696850 master-0 kubenswrapper[30278]: I0318 18:18:28.696785 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"cd8a7ce2e5db466421fd476c8e3891355ab40cb8dddbe660f3427e2b8100bd05"} Mar 18 18:18:28.696850 master-0 kubenswrapper[30278]: I0318 18:18:28.696813 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"f617be1406844333b25467ff941e47e0633e5fad536ba931d20945efe12d7cf0"} Mar 18 18:18:28.975371 master-0 
kubenswrapper[30278]: I0318 18:18:28.975297 30278 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xntzs" podUID="e01e85f2-9a8b-4862-ad33-959e38bfbc7c" containerName="ovn-controller" probeResult="failure" output=< Mar 18 18:18:28.975371 master-0 kubenswrapper[30278]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 18 18:18:28.975371 master-0 kubenswrapper[30278]: > Mar 18 18:18:29.000682 master-0 kubenswrapper[30278]: I0318 18:18:29.000614 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9qq6l" Mar 18 18:18:29.009137 master-0 kubenswrapper[30278]: I0318 18:18:29.007830 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9qq6l" Mar 18 18:18:29.352253 master-0 kubenswrapper[30278]: I0318 18:18:29.350110 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xntzs-config-6x9fb"] Mar 18 18:18:29.352253 master-0 kubenswrapper[30278]: I0318 18:18:29.351490 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.354096 master-0 kubenswrapper[30278]: I0318 18:18:29.353996 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 18 18:18:29.375240 master-0 kubenswrapper[30278]: I0318 18:18:29.375189 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xntzs-config-6x9fb"] Mar 18 18:18:29.421005 master-0 kubenswrapper[30278]: I0318 18:18:29.420911 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-log-ovn\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.421267 master-0 kubenswrapper[30278]: I0318 18:18:29.421062 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-run-ovn\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.422027 master-0 kubenswrapper[30278]: I0318 18:18:29.421285 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-scripts\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.422227 master-0 kubenswrapper[30278]: I0318 18:18:29.422202 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-additional-scripts\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.422343 master-0 kubenswrapper[30278]: I0318 18:18:29.422306 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2vdp\" (UniqueName: \"kubernetes.io/projected/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-kube-api-access-p2vdp\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.422581 master-0 kubenswrapper[30278]: I0318 18:18:29.422561 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-run\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.525386 master-0 kubenswrapper[30278]: I0318 18:18:29.525136 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-log-ovn\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.525386 master-0 kubenswrapper[30278]: I0318 18:18:29.525228 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-run-ovn\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.525386 master-0 kubenswrapper[30278]: I0318 18:18:29.525312 30278 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-log-ovn\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.525386 master-0 kubenswrapper[30278]: I0318 18:18:29.525321 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-scripts\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.525720 master-0 kubenswrapper[30278]: I0318 18:18:29.525410 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-additional-scripts\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.525720 master-0 kubenswrapper[30278]: I0318 18:18:29.525440 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-run-ovn\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.526090 master-0 kubenswrapper[30278]: I0318 18:18:29.525459 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2vdp\" (UniqueName: \"kubernetes.io/projected/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-kube-api-access-p2vdp\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.526090 master-0 
kubenswrapper[30278]: I0318 18:18:29.525884 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-run\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.526090 master-0 kubenswrapper[30278]: I0318 18:18:29.526035 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-run\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.526230 master-0 kubenswrapper[30278]: I0318 18:18:29.526185 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-additional-scripts\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.527657 master-0 kubenswrapper[30278]: I0318 18:18:29.527633 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-scripts\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:29.541436 master-0 kubenswrapper[30278]: I0318 18:18:29.541395 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2vdp\" (UniqueName: \"kubernetes.io/projected/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-kube-api-access-p2vdp\") pod \"ovn-controller-xntzs-config-6x9fb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 
18:18:29.671436 master-0 kubenswrapper[30278]: I0318 18:18:29.671084 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xntzs-config-6x9fb"
Mar 18 18:18:29.721810 master-0 kubenswrapper[30278]: I0318 18:18:29.721749 30278 generic.go:334] "Generic (PLEG): container finished" podID="72dc0432-f429-4dbf-b1ce-d421425d6ca3" containerID="a33768b295be066f61c1148a3aa71dce17e8db5d09cfaf3fd3f9f3ab27856c59" exitCode=0
Mar 18 18:18:29.722197 master-0 kubenswrapper[30278]: I0318 18:18:29.721858 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sd6rg" event={"ID":"72dc0432-f429-4dbf-b1ce-d421425d6ca3","Type":"ContainerDied","Data":"a33768b295be066f61c1148a3aa71dce17e8db5d09cfaf3fd3f9f3ab27856c59"}
Mar 18 18:18:29.739427 master-0 kubenswrapper[30278]: I0318 18:18:29.739356 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"716e4a4f98606ed12e4cdf4b5a0fa921e29e398808d3c617366b6a63dd70a882"}
Mar 18 18:18:29.739657 master-0 kubenswrapper[30278]: I0318 18:18:29.739437 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"e5956c844c9e05e85ea9a8c660ce18cd7e236f32bcdf4eabfb2c3e61ffa04bc3"}
Mar 18 18:18:30.196001 master-0 kubenswrapper[30278]: I0318 18:18:30.195950 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xntzs-config-6x9fb"]
Mar 18 18:18:30.207538 master-0 kubenswrapper[30278]: W0318 18:18:30.207498 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31ce30e7_4c34_428f_b81d_a6bfaeeb38cb.slice/crio-44d91ac1ab2224044e9d3a303fb996e3c4b0fa19f52486afcb9f10e5787a4cf2 WatchSource:0}: Error finding container 44d91ac1ab2224044e9d3a303fb996e3c4b0fa19f52486afcb9f10e5787a4cf2: Status 404 returned error can't find the container with id 44d91ac1ab2224044e9d3a303fb996e3c4b0fa19f52486afcb9f10e5787a4cf2
Mar 18 18:18:30.402822 master-0 kubenswrapper[30278]: I0318 18:18:30.402694 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-8jvr2"]
Mar 18 18:18:30.405262 master-0 kubenswrapper[30278]: I0318 18:18:30.405224 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:18:30.411099 master-0 kubenswrapper[30278]: I0318 18:18:30.409923 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-824c8-config-data"
Mar 18 18:18:30.422428 master-0 kubenswrapper[30278]: I0318 18:18:30.422362 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8jvr2"]
Mar 18 18:18:30.552330 master-0 kubenswrapper[30278]: I0318 18:18:30.552213 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-config-data\") pod \"glance-db-sync-8jvr2\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") " pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:18:30.552583 master-0 kubenswrapper[30278]: I0318 18:18:30.552418 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-db-sync-config-data\") pod \"glance-db-sync-8jvr2\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") " pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:18:30.552732 master-0 kubenswrapper[30278]: I0318 18:18:30.552667 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-combined-ca-bundle\") pod \"glance-db-sync-8jvr2\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") " pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:18:30.552968 master-0 kubenswrapper[30278]: I0318 18:18:30.552943 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shxgg\" (UniqueName: \"kubernetes.io/projected/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-kube-api-access-shxgg\") pod \"glance-db-sync-8jvr2\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") " pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:18:30.556537 master-0 kubenswrapper[30278]: I0318 18:18:30.556505 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Mar 18 18:18:30.662512 master-0 kubenswrapper[30278]: I0318 18:18:30.657836 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-config-data\") pod \"glance-db-sync-8jvr2\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") " pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:18:30.662512 master-0 kubenswrapper[30278]: I0318 18:18:30.657954 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-db-sync-config-data\") pod \"glance-db-sync-8jvr2\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") " pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:18:30.662512 master-0 kubenswrapper[30278]: I0318 18:18:30.658049 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-combined-ca-bundle\") pod \"glance-db-sync-8jvr2\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") " pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:18:30.662512 master-0 kubenswrapper[30278]: I0318 18:18:30.658116 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shxgg\" (UniqueName: \"kubernetes.io/projected/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-kube-api-access-shxgg\") pod \"glance-db-sync-8jvr2\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") " pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:18:30.697091 master-0 kubenswrapper[30278]: I0318 18:18:30.696349 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-combined-ca-bundle\") pod \"glance-db-sync-8jvr2\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") " pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:18:30.697091 master-0 kubenswrapper[30278]: I0318 18:18:30.696743 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-db-sync-config-data\") pod \"glance-db-sync-8jvr2\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") " pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:18:30.702438 master-0 kubenswrapper[30278]: I0318 18:18:30.700663 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-config-data\") pod \"glance-db-sync-8jvr2\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") " pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:18:30.702438 master-0 kubenswrapper[30278]: I0318 18:18:30.701098 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shxgg\" (UniqueName: \"kubernetes.io/projected/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-kube-api-access-shxgg\") pod \"glance-db-sync-8jvr2\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") " pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:18:30.806647 master-0 kubenswrapper[30278]: I0318 18:18:30.805215 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"801e84ea17056c18174020fdd09929cb1bffa6fed0248e990fde99b019b26245"}
Mar 18 18:18:30.807206 master-0 kubenswrapper[30278]: I0318 18:18:30.806780 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:18:30.823098 master-0 kubenswrapper[30278]: I0318 18:18:30.822562 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xntzs-config-6x9fb" event={"ID":"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb","Type":"ContainerStarted","Data":"345815fa2d75307faa3529bd80deec2d21f6243026e61b5e7c804aefe401e81e"}
Mar 18 18:18:30.823098 master-0 kubenswrapper[30278]: I0318 18:18:30.822638 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xntzs-config-6x9fb" event={"ID":"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb","Type":"ContainerStarted","Data":"44d91ac1ab2224044e9d3a303fb996e3c4b0fa19f52486afcb9f10e5787a4cf2"}
Mar 18 18:18:30.908820 master-0 kubenswrapper[30278]: I0318 18:18:30.904894 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xntzs-config-6x9fb" podStartSLOduration=1.904873407 podStartE2EDuration="1.904873407s" podCreationTimestamp="2026-03-18 18:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:18:30.902668198 +0000 UTC m=+1080.069852793" watchObservedRunningTime="2026-03-18 18:18:30.904873407 +0000 UTC m=+1080.072058002"
Mar 18 18:18:31.720576 master-0 kubenswrapper[30278]: E0318 18:18:31.717624 30278 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31ce30e7_4c34_428f_b81d_a6bfaeeb38cb.slice/crio-345815fa2d75307faa3529bd80deec2d21f6243026e61b5e7c804aefe401e81e.scope\": RecentStats: unable to find data in memory cache]"
Mar 18 18:18:31.720576 master-0 kubenswrapper[30278]: I0318 18:18:31.719741 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-kl89c"]
Mar 18 18:18:31.721902 master-0 kubenswrapper[30278]: I0318 18:18:31.721580 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kl89c"]
Mar 18 18:18:31.721902 master-0 kubenswrapper[30278]: I0318 18:18:31.721621 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-1f97-account-create-update-bc5tw"]
Mar 18 18:18:31.725511 master-0 kubenswrapper[30278]: I0318 18:18:31.724076 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kl89c"
Mar 18 18:18:31.731522 master-0 kubenswrapper[30278]: I0318 18:18:31.730678 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1f97-account-create-update-bc5tw"]
Mar 18 18:18:31.731522 master-0 kubenswrapper[30278]: I0318 18:18:31.730727 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-rgrfw"]
Mar 18 18:18:31.731522 master-0 kubenswrapper[30278]: I0318 18:18:31.730848 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1f97-account-create-update-bc5tw"
Mar 18 18:18:31.737297 master-0 kubenswrapper[30278]: I0318 18:18:31.732687 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Mar 18 18:18:31.751307 master-0 kubenswrapper[30278]: I0318 18:18:31.748152 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rgrfw"]
Mar 18 18:18:31.751307 master-0 kubenswrapper[30278]: I0318 18:18:31.748225 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8ntbw"]
Mar 18 18:18:31.751307 master-0 kubenswrapper[30278]: I0318 18:18:31.748306 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rgrfw"
Mar 18 18:18:31.751307 master-0 kubenswrapper[30278]: I0318 18:18:31.750085 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8ntbw"]
Mar 18 18:18:31.751307 master-0 kubenswrapper[30278]: I0318 18:18:31.750141 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8ntbw"
Mar 18 18:18:31.755297 master-0 kubenswrapper[30278]: I0318 18:18:31.752394 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 18 18:18:31.755297 master-0 kubenswrapper[30278]: I0318 18:18:31.753490 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 18 18:18:31.755516 master-0 kubenswrapper[30278]: I0318 18:18:31.755400 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 18 18:18:31.818515 master-0 kubenswrapper[30278]: I0318 18:18:31.818260 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-984d-account-create-update-tqdfv"]
Mar 18 18:18:31.824401 master-0 kubenswrapper[30278]: I0318 18:18:31.823079 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-984d-account-create-update-tqdfv"
Mar 18 18:18:31.826664 master-0 kubenswrapper[30278]: I0318 18:18:31.825874 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Mar 18 18:18:31.838461 master-0 kubenswrapper[30278]: I0318 18:18:31.838418 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sd6rg"
Mar 18 18:18:31.852543 master-0 kubenswrapper[30278]: I0318 18:18:31.852474 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-984d-account-create-update-tqdfv"]
Mar 18 18:18:31.864301 master-0 kubenswrapper[30278]: I0318 18:18:31.863592 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sd6rg" event={"ID":"72dc0432-f429-4dbf-b1ce-d421425d6ca3","Type":"ContainerDied","Data":"2ad476cbd750fb4997f87dbf292fa4b497ff60ad5b1d15eb128379622f4c33bf"}
Mar 18 18:18:31.864301 master-0 kubenswrapper[30278]: I0318 18:18:31.863638 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ad476cbd750fb4997f87dbf292fa4b497ff60ad5b1d15eb128379622f4c33bf"
Mar 18 18:18:31.864301 master-0 kubenswrapper[30278]: I0318 18:18:31.863697 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sd6rg"
Mar 18 18:18:31.871050 master-0 kubenswrapper[30278]: I0318 18:18:31.870687 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"8cd6d2d59d41046f7573364cf35df32fd77d04241fca630749aaa7caed9caadf"}
Mar 18 18:18:31.873146 master-0 kubenswrapper[30278]: I0318 18:18:31.873116 30278 generic.go:334] "Generic (PLEG): container finished" podID="31ce30e7-4c34-428f-b81d-a6bfaeeb38cb" containerID="345815fa2d75307faa3529bd80deec2d21f6243026e61b5e7c804aefe401e81e" exitCode=0
Mar 18 18:18:31.873242 master-0 kubenswrapper[30278]: I0318 18:18:31.873154 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xntzs-config-6x9fb" event={"ID":"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb","Type":"ContainerDied","Data":"345815fa2d75307faa3529bd80deec2d21f6243026e61b5e7c804aefe401e81e"}
Mar 18 18:18:31.915132 master-0 kubenswrapper[30278]: I0318 18:18:31.910291 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vjlb\" (UniqueName: \"kubernetes.io/projected/594ed543-14e4-4a71-8eb9-3482fa67fc1d-kube-api-access-9vjlb\") pod \"neutron-db-create-rgrfw\" (UID: \"594ed543-14e4-4a71-8eb9-3482fa67fc1d\") " pod="openstack/neutron-db-create-rgrfw"
Mar 18 18:18:31.915132 master-0 kubenswrapper[30278]: I0318 18:18:31.910339 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/677619bc-d70e-475e-a844-b177d2cadbd9-operator-scripts\") pod \"cinder-db-create-kl89c\" (UID: \"677619bc-d70e-475e-a844-b177d2cadbd9\") " pod="openstack/cinder-db-create-kl89c"
Mar 18 18:18:31.915132 master-0 kubenswrapper[30278]: I0318 18:18:31.910388 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5chp5\" (UniqueName: \"kubernetes.io/projected/677619bc-d70e-475e-a844-b177d2cadbd9-kube-api-access-5chp5\") pod \"cinder-db-create-kl89c\" (UID: \"677619bc-d70e-475e-a844-b177d2cadbd9\") " pod="openstack/cinder-db-create-kl89c"
Mar 18 18:18:31.915132 master-0 kubenswrapper[30278]: I0318 18:18:31.910413 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5562q\" (UniqueName: \"kubernetes.io/projected/5edc1dc4-2f2a-4eff-bc50-10382bc71d27-kube-api-access-5562q\") pod \"cinder-1f97-account-create-update-bc5tw\" (UID: \"5edc1dc4-2f2a-4eff-bc50-10382bc71d27\") " pod="openstack/cinder-1f97-account-create-update-bc5tw"
Mar 18 18:18:31.915132 master-0 kubenswrapper[30278]: I0318 18:18:31.910467 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk8lq\" (UniqueName: \"kubernetes.io/projected/e74e301d-4637-4d16-a125-a44a5470a4ac-kube-api-access-pk8lq\") pod \"keystone-db-sync-8ntbw\" (UID: \"e74e301d-4637-4d16-a125-a44a5470a4ac\") " pod="openstack/keystone-db-sync-8ntbw"
Mar 18 18:18:31.915132 master-0 kubenswrapper[30278]: I0318 18:18:31.910509 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e74e301d-4637-4d16-a125-a44a5470a4ac-combined-ca-bundle\") pod \"keystone-db-sync-8ntbw\" (UID: \"e74e301d-4637-4d16-a125-a44a5470a4ac\") " pod="openstack/keystone-db-sync-8ntbw"
Mar 18 18:18:31.915132 master-0 kubenswrapper[30278]: I0318 18:18:31.910530 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edc1dc4-2f2a-4eff-bc50-10382bc71d27-operator-scripts\") pod \"cinder-1f97-account-create-update-bc5tw\" (UID: \"5edc1dc4-2f2a-4eff-bc50-10382bc71d27\") " pod="openstack/cinder-1f97-account-create-update-bc5tw"
Mar 18 18:18:31.915132 master-0 kubenswrapper[30278]: I0318 18:18:31.910558 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/594ed543-14e4-4a71-8eb9-3482fa67fc1d-operator-scripts\") pod \"neutron-db-create-rgrfw\" (UID: \"594ed543-14e4-4a71-8eb9-3482fa67fc1d\") " pod="openstack/neutron-db-create-rgrfw"
Mar 18 18:18:31.915132 master-0 kubenswrapper[30278]: I0318 18:18:31.910648 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e74e301d-4637-4d16-a125-a44a5470a4ac-config-data\") pod \"keystone-db-sync-8ntbw\" (UID: \"e74e301d-4637-4d16-a125-a44a5470a4ac\") " pod="openstack/keystone-db-sync-8ntbw"
Mar 18 18:18:32.011947 master-0 kubenswrapper[30278]: I0318 18:18:32.011893 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6clmb\" (UniqueName: \"kubernetes.io/projected/72dc0432-f429-4dbf-b1ce-d421425d6ca3-kube-api-access-6clmb\") pod \"72dc0432-f429-4dbf-b1ce-d421425d6ca3\" (UID: \"72dc0432-f429-4dbf-b1ce-d421425d6ca3\") "
Mar 18 18:18:32.012128 master-0 kubenswrapper[30278]: I0318 18:18:32.012101 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72dc0432-f429-4dbf-b1ce-d421425d6ca3-operator-scripts\") pod \"72dc0432-f429-4dbf-b1ce-d421425d6ca3\" (UID: \"72dc0432-f429-4dbf-b1ce-d421425d6ca3\") "
Mar 18 18:18:32.013165 master-0 kubenswrapper[30278]: I0318 18:18:32.013103 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e74e301d-4637-4d16-a125-a44a5470a4ac-config-data\") pod \"keystone-db-sync-8ntbw\" (UID: \"e74e301d-4637-4d16-a125-a44a5470a4ac\") " pod="openstack/keystone-db-sync-8ntbw"
Mar 18 18:18:32.013668 master-0 kubenswrapper[30278]: I0318 18:18:32.013165 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vjlb\" (UniqueName: \"kubernetes.io/projected/594ed543-14e4-4a71-8eb9-3482fa67fc1d-kube-api-access-9vjlb\") pod \"neutron-db-create-rgrfw\" (UID: \"594ed543-14e4-4a71-8eb9-3482fa67fc1d\") " pod="openstack/neutron-db-create-rgrfw"
Mar 18 18:18:32.013668 master-0 kubenswrapper[30278]: I0318 18:18:32.013190 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/677619bc-d70e-475e-a844-b177d2cadbd9-operator-scripts\") pod \"cinder-db-create-kl89c\" (UID: \"677619bc-d70e-475e-a844-b177d2cadbd9\") " pod="openstack/cinder-db-create-kl89c"
Mar 18 18:18:32.013668 master-0 kubenswrapper[30278]: I0318 18:18:32.013216 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90da1e72-16d6-4b7c-9ea2-75800f09f684-operator-scripts\") pod \"neutron-984d-account-create-update-tqdfv\" (UID: \"90da1e72-16d6-4b7c-9ea2-75800f09f684\") " pod="openstack/neutron-984d-account-create-update-tqdfv"
Mar 18 18:18:32.013668 master-0 kubenswrapper[30278]: I0318 18:18:32.013252 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5chp5\" (UniqueName: \"kubernetes.io/projected/677619bc-d70e-475e-a844-b177d2cadbd9-kube-api-access-5chp5\") pod \"cinder-db-create-kl89c\" (UID: \"677619bc-d70e-475e-a844-b177d2cadbd9\") " pod="openstack/cinder-db-create-kl89c"
Mar 18 18:18:32.013668 master-0 kubenswrapper[30278]: I0318 18:18:32.013340 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5562q\" (UniqueName: \"kubernetes.io/projected/5edc1dc4-2f2a-4eff-bc50-10382bc71d27-kube-api-access-5562q\") pod \"cinder-1f97-account-create-update-bc5tw\" (UID: \"5edc1dc4-2f2a-4eff-bc50-10382bc71d27\") " pod="openstack/cinder-1f97-account-create-update-bc5tw"
Mar 18 18:18:32.013668 master-0 kubenswrapper[30278]: I0318 18:18:32.013395 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk8lq\" (UniqueName: \"kubernetes.io/projected/e74e301d-4637-4d16-a125-a44a5470a4ac-kube-api-access-pk8lq\") pod \"keystone-db-sync-8ntbw\" (UID: \"e74e301d-4637-4d16-a125-a44a5470a4ac\") " pod="openstack/keystone-db-sync-8ntbw"
Mar 18 18:18:32.014767 master-0 kubenswrapper[30278]: I0318 18:18:32.014738 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/677619bc-d70e-475e-a844-b177d2cadbd9-operator-scripts\") pod \"cinder-db-create-kl89c\" (UID: \"677619bc-d70e-475e-a844-b177d2cadbd9\") " pod="openstack/cinder-db-create-kl89c"
Mar 18 18:18:32.027425 master-0 kubenswrapper[30278]: I0318 18:18:32.018971 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72dc0432-f429-4dbf-b1ce-d421425d6ca3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72dc0432-f429-4dbf-b1ce-d421425d6ca3" (UID: "72dc0432-f429-4dbf-b1ce-d421425d6ca3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:18:32.030460 master-0 kubenswrapper[30278]: I0318 18:18:32.027646 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dskzk\" (UniqueName: \"kubernetes.io/projected/90da1e72-16d6-4b7c-9ea2-75800f09f684-kube-api-access-dskzk\") pod \"neutron-984d-account-create-update-tqdfv\" (UID: \"90da1e72-16d6-4b7c-9ea2-75800f09f684\") " pod="openstack/neutron-984d-account-create-update-tqdfv"
Mar 18 18:18:32.030460 master-0 kubenswrapper[30278]: I0318 18:18:32.027820 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edc1dc4-2f2a-4eff-bc50-10382bc71d27-operator-scripts\") pod \"cinder-1f97-account-create-update-bc5tw\" (UID: \"5edc1dc4-2f2a-4eff-bc50-10382bc71d27\") " pod="openstack/cinder-1f97-account-create-update-bc5tw"
Mar 18 18:18:32.030460 master-0 kubenswrapper[30278]: I0318 18:18:32.027848 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e74e301d-4637-4d16-a125-a44a5470a4ac-combined-ca-bundle\") pod \"keystone-db-sync-8ntbw\" (UID: \"e74e301d-4637-4d16-a125-a44a5470a4ac\") " pod="openstack/keystone-db-sync-8ntbw"
Mar 18 18:18:32.030460 master-0 kubenswrapper[30278]: I0318 18:18:32.027960 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/594ed543-14e4-4a71-8eb9-3482fa67fc1d-operator-scripts\") pod \"neutron-db-create-rgrfw\" (UID: \"594ed543-14e4-4a71-8eb9-3482fa67fc1d\") " pod="openstack/neutron-db-create-rgrfw"
Mar 18 18:18:32.030460 master-0 kubenswrapper[30278]: I0318 18:18:32.028475 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72dc0432-f429-4dbf-b1ce-d421425d6ca3-operator-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 18:18:32.030460 master-0 kubenswrapper[30278]: I0318 18:18:32.029434 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/594ed543-14e4-4a71-8eb9-3482fa67fc1d-operator-scripts\") pod \"neutron-db-create-rgrfw\" (UID: \"594ed543-14e4-4a71-8eb9-3482fa67fc1d\") " pod="openstack/neutron-db-create-rgrfw"
Mar 18 18:18:32.030460 master-0 kubenswrapper[30278]: I0318 18:18:32.030065 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edc1dc4-2f2a-4eff-bc50-10382bc71d27-operator-scripts\") pod \"cinder-1f97-account-create-update-bc5tw\" (UID: \"5edc1dc4-2f2a-4eff-bc50-10382bc71d27\") " pod="openstack/cinder-1f97-account-create-update-bc5tw"
Mar 18 18:18:32.032265 master-0 kubenswrapper[30278]: I0318 18:18:32.032230 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e74e301d-4637-4d16-a125-a44a5470a4ac-config-data\") pod \"keystone-db-sync-8ntbw\" (UID: \"e74e301d-4637-4d16-a125-a44a5470a4ac\") " pod="openstack/keystone-db-sync-8ntbw"
Mar 18 18:18:32.032354 master-0 kubenswrapper[30278]: I0318 18:18:32.032263 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5562q\" (UniqueName: \"kubernetes.io/projected/5edc1dc4-2f2a-4eff-bc50-10382bc71d27-kube-api-access-5562q\") pod \"cinder-1f97-account-create-update-bc5tw\" (UID: \"5edc1dc4-2f2a-4eff-bc50-10382bc71d27\") " pod="openstack/cinder-1f97-account-create-update-bc5tw"
Mar 18 18:18:32.035202 master-0 kubenswrapper[30278]: I0318 18:18:32.035148 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72dc0432-f429-4dbf-b1ce-d421425d6ca3-kube-api-access-6clmb" (OuterVolumeSpecName: "kube-api-access-6clmb") pod "72dc0432-f429-4dbf-b1ce-d421425d6ca3" (UID: "72dc0432-f429-4dbf-b1ce-d421425d6ca3"). InnerVolumeSpecName "kube-api-access-6clmb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:18:32.037682 master-0 kubenswrapper[30278]: I0318 18:18:32.037335 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8jvr2"]
Mar 18 18:18:32.037832 master-0 kubenswrapper[30278]: I0318 18:18:32.037786 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk8lq\" (UniqueName: \"kubernetes.io/projected/e74e301d-4637-4d16-a125-a44a5470a4ac-kube-api-access-pk8lq\") pod \"keystone-db-sync-8ntbw\" (UID: \"e74e301d-4637-4d16-a125-a44a5470a4ac\") " pod="openstack/keystone-db-sync-8ntbw"
Mar 18 18:18:32.038855 master-0 kubenswrapper[30278]: I0318 18:18:32.038824 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e74e301d-4637-4d16-a125-a44a5470a4ac-combined-ca-bundle\") pod \"keystone-db-sync-8ntbw\" (UID: \"e74e301d-4637-4d16-a125-a44a5470a4ac\") " pod="openstack/keystone-db-sync-8ntbw"
Mar 18 18:18:32.040426 master-0 kubenswrapper[30278]: I0318 18:18:32.040380 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5chp5\" (UniqueName: \"kubernetes.io/projected/677619bc-d70e-475e-a844-b177d2cadbd9-kube-api-access-5chp5\") pod \"cinder-db-create-kl89c\" (UID: \"677619bc-d70e-475e-a844-b177d2cadbd9\") " pod="openstack/cinder-db-create-kl89c"
Mar 18 18:18:32.041224 master-0 kubenswrapper[30278]: I0318 18:18:32.041085 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vjlb\" (UniqueName: \"kubernetes.io/projected/594ed543-14e4-4a71-8eb9-3482fa67fc1d-kube-api-access-9vjlb\") pod \"neutron-db-create-rgrfw\" (UID: \"594ed543-14e4-4a71-8eb9-3482fa67fc1d\") " pod="openstack/neutron-db-create-rgrfw"
Mar 18 18:18:32.133316 master-0 kubenswrapper[30278]: I0318 18:18:32.132847 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dskzk\" (UniqueName: \"kubernetes.io/projected/90da1e72-16d6-4b7c-9ea2-75800f09f684-kube-api-access-dskzk\") pod \"neutron-984d-account-create-update-tqdfv\" (UID: \"90da1e72-16d6-4b7c-9ea2-75800f09f684\") " pod="openstack/neutron-984d-account-create-update-tqdfv"
Mar 18 18:18:32.133766 master-0 kubenswrapper[30278]: I0318 18:18:32.133746 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90da1e72-16d6-4b7c-9ea2-75800f09f684-operator-scripts\") pod \"neutron-984d-account-create-update-tqdfv\" (UID: \"90da1e72-16d6-4b7c-9ea2-75800f09f684\") " pod="openstack/neutron-984d-account-create-update-tqdfv"
Mar 18 18:18:32.133951 master-0 kubenswrapper[30278]: I0318 18:18:32.133935 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6clmb\" (UniqueName: \"kubernetes.io/projected/72dc0432-f429-4dbf-b1ce-d421425d6ca3-kube-api-access-6clmb\") on node \"master-0\" DevicePath \"\""
Mar 18 18:18:32.134735 master-0 kubenswrapper[30278]: I0318 18:18:32.134711 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90da1e72-16d6-4b7c-9ea2-75800f09f684-operator-scripts\") pod \"neutron-984d-account-create-update-tqdfv\" (UID: \"90da1e72-16d6-4b7c-9ea2-75800f09f684\") " pod="openstack/neutron-984d-account-create-update-tqdfv"
Mar 18 18:18:32.157689 master-0 kubenswrapper[30278]: I0318 18:18:32.157631 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dskzk\" (UniqueName: \"kubernetes.io/projected/90da1e72-16d6-4b7c-9ea2-75800f09f684-kube-api-access-dskzk\") pod \"neutron-984d-account-create-update-tqdfv\" (UID: \"90da1e72-16d6-4b7c-9ea2-75800f09f684\") " pod="openstack/neutron-984d-account-create-update-tqdfv"
Mar 18 18:18:32.172566 master-0 kubenswrapper[30278]: I0318 18:18:32.172508 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Mar 18 18:18:32.261788 master-0 kubenswrapper[30278]: I0318 18:18:32.261705 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1f97-account-create-update-bc5tw"
Mar 18 18:18:32.263755 master-0 kubenswrapper[30278]: I0318 18:18:32.263693 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kl89c"
Mar 18 18:18:32.264184 master-0 kubenswrapper[30278]: I0318 18:18:32.263976 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8ntbw"
Mar 18 18:18:32.335674 master-0 kubenswrapper[30278]: I0318 18:18:32.335309 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rgrfw"
Mar 18 18:18:32.360940 master-0 kubenswrapper[30278]: I0318 18:18:32.359477 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-984d-account-create-update-tqdfv"
Mar 18 18:18:32.926718 master-0 kubenswrapper[30278]: I0318 18:18:32.925232 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"96f32e592f2d4dff5c04060849011e1ab4640e7a9a6a086fc9140c7d5e3d60d3"}
Mar 18 18:18:32.926718 master-0 kubenswrapper[30278]: I0318 18:18:32.925325 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"6325bccb093bca343c367f071d87b0950a4154d1d9c4eae31f95487431e8b318"}
Mar 18 18:18:32.936933 master-0 kubenswrapper[30278]: I0318 18:18:32.936865 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8jvr2" event={"ID":"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e","Type":"ContainerStarted","Data":"32e00ef822d650a2a4c0974197e3538996394165d1e2dd63368b3a33537147c8"}
Mar 18 18:18:32.944882 master-0 kubenswrapper[30278]: I0318 18:18:32.943930 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-1f97-account-create-update-bc5tw"]
Mar 18 18:18:32.944882 master-0 kubenswrapper[30278]: W0318 18:18:32.944545 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5edc1dc4_2f2a_4eff_bc50_10382bc71d27.slice/crio-0d045eed67666e57dd651216a24ee600a4d3137c79e1c8e67be03fa25db78eb8 WatchSource:0}: Error finding container 0d045eed67666e57dd651216a24ee600a4d3137c79e1c8e67be03fa25db78eb8: Status 404 returned error can't find the container with id 0d045eed67666e57dd651216a24ee600a4d3137c79e1c8e67be03fa25db78eb8
Mar 18 18:18:33.061344 master-0 kubenswrapper[30278]: W0318 18:18:33.061300 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod594ed543_14e4_4a71_8eb9_3482fa67fc1d.slice/crio-26e867fa1a626912cc3e845f2e8d6ab41e33a237515067098a77ccc16492b001 WatchSource:0}: Error finding container 26e867fa1a626912cc3e845f2e8d6ab41e33a237515067098a77ccc16492b001: Status 404 returned error can't find the container with id 26e867fa1a626912cc3e845f2e8d6ab41e33a237515067098a77ccc16492b001
Mar 18 18:18:33.098980 master-0 kubenswrapper[30278]: I0318 18:18:33.098934 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rgrfw"]
Mar 18 18:18:33.100781 master-0 kubenswrapper[30278]: W0318 18:18:33.100738 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode74e301d_4637_4d16_a125_a44a5470a4ac.slice/crio-688cc353e037e682babdf8b5425ffc1975a75419e7dd33e358a5dee36a64cca9 WatchSource:0}: Error finding container 688cc353e037e682babdf8b5425ffc1975a75419e7dd33e358a5dee36a64cca9: Status 404 returned error can't find the container with id 688cc353e037e682babdf8b5425ffc1975a75419e7dd33e358a5dee36a64cca9
Mar 18 18:18:33.170686 master-0 kubenswrapper[30278]: I0318 18:18:33.163234 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8ntbw"]
Mar 18 18:18:33.286556 master-0 kubenswrapper[30278]: I0318 18:18:33.285960 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kl89c"]
Mar 18 18:18:33.481718 master-0 kubenswrapper[30278]: I0318 18:18:33.480604 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-984d-account-create-update-tqdfv"]
Mar 18 18:18:33.490650 master-0 kubenswrapper[30278]: W0318 18:18:33.490531 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90da1e72_16d6_4b7c_9ea2_75800f09f684.slice/crio-8d9a51010b679499729321c78245e757515eda0fa79624f20828e7cdbdfd99ae WatchSource:0}: Error finding container 8d9a51010b679499729321c78245e757515eda0fa79624f20828e7cdbdfd99ae: Status 404 returned error can't find the container with id 8d9a51010b679499729321c78245e757515eda0fa79624f20828e7cdbdfd99ae
Mar 18 18:18:33.545319 master-0 kubenswrapper[30278]: I0318 18:18:33.545250 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xntzs-config-6x9fb"
Mar 18 18:18:33.623307 master-0 kubenswrapper[30278]: I0318 18:18:33.621798 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2vdp\" (UniqueName: \"kubernetes.io/projected/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-kube-api-access-p2vdp\") pod \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") "
Mar 18 18:18:33.623307 master-0 kubenswrapper[30278]: I0318 18:18:33.622064 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-scripts\") pod \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") "
Mar 18 18:18:33.623307 master-0 kubenswrapper[30278]: I0318 18:18:33.622156 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-run\") pod \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") "
Mar 18 18:18:33.623307 master-0 kubenswrapper[30278]: I0318 18:18:33.622179 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-run-ovn\") pod \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") "
Mar 18 18:18:33.623307 master-0 kubenswrapper[30278]: I0318 18:18:33.622216 30278 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-log-ovn\") pod \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " Mar 18 18:18:33.623307 master-0 kubenswrapper[30278]: I0318 18:18:33.622920 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "31ce30e7-4c34-428f-b81d-a6bfaeeb38cb" (UID: "31ce30e7-4c34-428f-b81d-a6bfaeeb38cb"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:33.623307 master-0 kubenswrapper[30278]: I0318 18:18:33.622401 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-additional-scripts\") pod \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\" (UID: \"31ce30e7-4c34-428f-b81d-a6bfaeeb38cb\") " Mar 18 18:18:33.627810 master-0 kubenswrapper[30278]: I0318 18:18:33.627772 30278 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-additional-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:33.627810 master-0 kubenswrapper[30278]: I0318 18:18:33.627772 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-kube-api-access-p2vdp" (OuterVolumeSpecName: "kube-api-access-p2vdp") pod "31ce30e7-4c34-428f-b81d-a6bfaeeb38cb" (UID: "31ce30e7-4c34-428f-b81d-a6bfaeeb38cb"). InnerVolumeSpecName "kube-api-access-p2vdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:33.627941 master-0 kubenswrapper[30278]: I0318 18:18:33.627877 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "31ce30e7-4c34-428f-b81d-a6bfaeeb38cb" (UID: "31ce30e7-4c34-428f-b81d-a6bfaeeb38cb"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:18:33.627941 master-0 kubenswrapper[30278]: I0318 18:18:33.627902 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-run" (OuterVolumeSpecName: "var-run") pod "31ce30e7-4c34-428f-b81d-a6bfaeeb38cb" (UID: "31ce30e7-4c34-428f-b81d-a6bfaeeb38cb"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:18:33.627941 master-0 kubenswrapper[30278]: I0318 18:18:33.627922 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "31ce30e7-4c34-428f-b81d-a6bfaeeb38cb" (UID: "31ce30e7-4c34-428f-b81d-a6bfaeeb38cb"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:18:33.629487 master-0 kubenswrapper[30278]: I0318 18:18:33.628445 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-scripts" (OuterVolumeSpecName: "scripts") pod "31ce30e7-4c34-428f-b81d-a6bfaeeb38cb" (UID: "31ce30e7-4c34-428f-b81d-a6bfaeeb38cb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:33.732045 master-0 kubenswrapper[30278]: I0318 18:18:33.732002 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:33.732177 master-0 kubenswrapper[30278]: I0318 18:18:33.732067 30278 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-run\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:33.732177 master-0 kubenswrapper[30278]: I0318 18:18:33.732082 30278 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:33.732177 master-0 kubenswrapper[30278]: I0318 18:18:33.732092 30278 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:33.732177 master-0 kubenswrapper[30278]: I0318 18:18:33.732105 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2vdp\" (UniqueName: \"kubernetes.io/projected/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb-kube-api-access-p2vdp\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:34.005726 master-0 kubenswrapper[30278]: I0318 18:18:34.005655 30278 generic.go:334] "Generic (PLEG): container finished" podID="677619bc-d70e-475e-a844-b177d2cadbd9" containerID="5ed7f5107caaebf5a85b17d251385a605e48bea3d72a52db94383dfd61577d87" exitCode=0 Mar 18 18:18:34.006524 master-0 kubenswrapper[30278]: I0318 18:18:34.005753 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kl89c" 
event={"ID":"677619bc-d70e-475e-a844-b177d2cadbd9","Type":"ContainerDied","Data":"5ed7f5107caaebf5a85b17d251385a605e48bea3d72a52db94383dfd61577d87"} Mar 18 18:18:34.006524 master-0 kubenswrapper[30278]: I0318 18:18:34.005790 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kl89c" event={"ID":"677619bc-d70e-475e-a844-b177d2cadbd9","Type":"ContainerStarted","Data":"dc53e68dc9b0b45a54c54c8951df5517bf93f59dfd22e339c52b8a1a4d4e1cdd"} Mar 18 18:18:34.009843 master-0 kubenswrapper[30278]: I0318 18:18:34.009805 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xntzs-config-6x9fb"] Mar 18 18:18:34.021632 master-0 kubenswrapper[30278]: I0318 18:18:34.021150 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-xntzs" Mar 18 18:18:34.024105 master-0 kubenswrapper[30278]: I0318 18:18:34.023984 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xntzs-config-6x9fb"] Mar 18 18:18:34.025126 master-0 kubenswrapper[30278]: I0318 18:18:34.025090 30278 generic.go:334] "Generic (PLEG): container finished" podID="594ed543-14e4-4a71-8eb9-3482fa67fc1d" containerID="852ae041e5da3444089eefd0c7d63ef201ba470835beae646d4a698c5c3b2ace" exitCode=0 Mar 18 18:18:34.026443 master-0 kubenswrapper[30278]: I0318 18:18:34.026416 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rgrfw" event={"ID":"594ed543-14e4-4a71-8eb9-3482fa67fc1d","Type":"ContainerDied","Data":"852ae041e5da3444089eefd0c7d63ef201ba470835beae646d4a698c5c3b2ace"} Mar 18 18:18:34.026580 master-0 kubenswrapper[30278]: I0318 18:18:34.026558 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rgrfw" event={"ID":"594ed543-14e4-4a71-8eb9-3482fa67fc1d","Type":"ContainerStarted","Data":"26e867fa1a626912cc3e845f2e8d6ab41e33a237515067098a77ccc16492b001"} Mar 18 18:18:34.028215 master-0 kubenswrapper[30278]: 
I0318 18:18:34.028192 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-984d-account-create-update-tqdfv" event={"ID":"90da1e72-16d6-4b7c-9ea2-75800f09f684","Type":"ContainerStarted","Data":"18acea1c051b28b3d89e15f54bbb6396115dc9ce363ff2503a188f24b9c2455f"} Mar 18 18:18:34.028339 master-0 kubenswrapper[30278]: I0318 18:18:34.028324 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-984d-account-create-update-tqdfv" event={"ID":"90da1e72-16d6-4b7c-9ea2-75800f09f684","Type":"ContainerStarted","Data":"8d9a51010b679499729321c78245e757515eda0fa79624f20828e7cdbdfd99ae"} Mar 18 18:18:34.053775 master-0 kubenswrapper[30278]: I0318 18:18:34.053606 30278 generic.go:334] "Generic (PLEG): container finished" podID="5edc1dc4-2f2a-4eff-bc50-10382bc71d27" containerID="faea6e52c7a494f1d9ce3a8c2e7a2b71a4c866d6b07e2b75994d36d756acb57f" exitCode=0 Mar 18 18:18:34.053775 master-0 kubenswrapper[30278]: I0318 18:18:34.053724 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1f97-account-create-update-bc5tw" event={"ID":"5edc1dc4-2f2a-4eff-bc50-10382bc71d27","Type":"ContainerDied","Data":"faea6e52c7a494f1d9ce3a8c2e7a2b71a4c866d6b07e2b75994d36d756acb57f"} Mar 18 18:18:34.053775 master-0 kubenswrapper[30278]: I0318 18:18:34.053761 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1f97-account-create-update-bc5tw" event={"ID":"5edc1dc4-2f2a-4eff-bc50-10382bc71d27","Type":"ContainerStarted","Data":"0d045eed67666e57dd651216a24ee600a4d3137c79e1c8e67be03fa25db78eb8"} Mar 18 18:18:34.059818 master-0 kubenswrapper[30278]: I0318 18:18:34.059746 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8ntbw" event={"ID":"e74e301d-4637-4d16-a125-a44a5470a4ac","Type":"ContainerStarted","Data":"688cc353e037e682babdf8b5425ffc1975a75419e7dd33e358a5dee36a64cca9"} Mar 18 18:18:34.067744 master-0 kubenswrapper[30278]: I0318 18:18:34.067631 30278 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-984d-account-create-update-tqdfv" podStartSLOduration=3.067608184 podStartE2EDuration="3.067608184s" podCreationTimestamp="2026-03-18 18:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:18:34.065938339 +0000 UTC m=+1083.233122934" watchObservedRunningTime="2026-03-18 18:18:34.067608184 +0000 UTC m=+1083.234792779" Mar 18 18:18:34.068254 master-0 kubenswrapper[30278]: I0318 18:18:34.068207 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44d91ac1ab2224044e9d3a303fb996e3c4b0fa19f52486afcb9f10e5787a4cf2" Mar 18 18:18:34.068359 master-0 kubenswrapper[30278]: I0318 18:18:34.068334 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xntzs-config-6x9fb" Mar 18 18:18:34.257105 master-0 kubenswrapper[30278]: I0318 18:18:34.256459 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xntzs-config-m5q6f"] Mar 18 18:18:34.258637 master-0 kubenswrapper[30278]: E0318 18:18:34.257352 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ce30e7-4c34-428f-b81d-a6bfaeeb38cb" containerName="ovn-config" Mar 18 18:18:34.258637 master-0 kubenswrapper[30278]: I0318 18:18:34.257368 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ce30e7-4c34-428f-b81d-a6bfaeeb38cb" containerName="ovn-config" Mar 18 18:18:34.258637 master-0 kubenswrapper[30278]: E0318 18:18:34.257438 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72dc0432-f429-4dbf-b1ce-d421425d6ca3" containerName="mariadb-account-create-update" Mar 18 18:18:34.258637 master-0 kubenswrapper[30278]: I0318 18:18:34.257445 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="72dc0432-f429-4dbf-b1ce-d421425d6ca3" 
containerName="mariadb-account-create-update" Mar 18 18:18:34.258637 master-0 kubenswrapper[30278]: I0318 18:18:34.257880 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="72dc0432-f429-4dbf-b1ce-d421425d6ca3" containerName="mariadb-account-create-update" Mar 18 18:18:34.258637 master-0 kubenswrapper[30278]: I0318 18:18:34.257895 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ce30e7-4c34-428f-b81d-a6bfaeeb38cb" containerName="ovn-config" Mar 18 18:18:34.259438 master-0 kubenswrapper[30278]: I0318 18:18:34.259415 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.261127 master-0 kubenswrapper[30278]: I0318 18:18:34.261034 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xntzs-config-m5q6f"] Mar 18 18:18:34.263799 master-0 kubenswrapper[30278]: I0318 18:18:34.263536 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 18 18:18:34.351539 master-0 kubenswrapper[30278]: I0318 18:18:34.351411 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-run\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.351818 master-0 kubenswrapper[30278]: I0318 18:18:34.351541 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-scripts\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.351818 master-0 kubenswrapper[30278]: I0318 18:18:34.351625 30278 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-additional-scripts\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.351818 master-0 kubenswrapper[30278]: I0318 18:18:34.351685 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-258zk\" (UniqueName: \"kubernetes.io/projected/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-kube-api-access-258zk\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.351818 master-0 kubenswrapper[30278]: I0318 18:18:34.351710 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-run-ovn\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.351818 master-0 kubenswrapper[30278]: I0318 18:18:34.351735 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-log-ovn\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.463724 master-0 kubenswrapper[30278]: I0318 18:18:34.463657 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-scripts\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " 
pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.464353 master-0 kubenswrapper[30278]: I0318 18:18:34.464331 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-scripts\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.464951 master-0 kubenswrapper[30278]: I0318 18:18:34.464910 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-additional-scripts\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.465417 master-0 kubenswrapper[30278]: I0318 18:18:34.465397 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-258zk\" (UniqueName: \"kubernetes.io/projected/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-kube-api-access-258zk\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.466474 master-0 kubenswrapper[30278]: I0318 18:18:34.466424 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-run-ovn\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.466732 master-0 kubenswrapper[30278]: I0318 18:18:34.466708 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-log-ovn\") pod 
\"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.467427 master-0 kubenswrapper[30278]: I0318 18:18:34.467119 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-additional-scripts\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.467427 master-0 kubenswrapper[30278]: I0318 18:18:34.467244 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-run-ovn\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.467632 master-0 kubenswrapper[30278]: I0318 18:18:34.467148 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-run\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.467809 master-0 kubenswrapper[30278]: I0318 18:18:34.467782 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-run\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.468051 master-0 kubenswrapper[30278]: I0318 18:18:34.467739 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-log-ovn\") 
pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.522293 master-0 kubenswrapper[30278]: I0318 18:18:34.522107 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-258zk\" (UniqueName: \"kubernetes.io/projected/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-kube-api-access-258zk\") pod \"ovn-controller-xntzs-config-m5q6f\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.689395 master-0 kubenswrapper[30278]: I0318 18:18:34.688320 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:34.900862 master-0 kubenswrapper[30278]: E0318 18:18:34.900735 30278 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:45388->192.168.32.10:36439: write tcp 192.168.32.10:45388->192.168.32.10:36439: write: broken pipe Mar 18 18:18:35.074457 master-0 kubenswrapper[30278]: I0318 18:18:35.074365 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ce30e7-4c34-428f-b81d-a6bfaeeb38cb" path="/var/lib/kubelet/pods/31ce30e7-4c34-428f-b81d-a6bfaeeb38cb/volumes" Mar 18 18:18:35.103976 master-0 kubenswrapper[30278]: I0318 18:18:35.103892 30278 generic.go:334] "Generic (PLEG): container finished" podID="90da1e72-16d6-4b7c-9ea2-75800f09f684" containerID="18acea1c051b28b3d89e15f54bbb6396115dc9ce363ff2503a188f24b9c2455f" exitCode=0 Mar 18 18:18:35.107398 master-0 kubenswrapper[30278]: I0318 18:18:35.106522 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-984d-account-create-update-tqdfv" event={"ID":"90da1e72-16d6-4b7c-9ea2-75800f09f684","Type":"ContainerDied","Data":"18acea1c051b28b3d89e15f54bbb6396115dc9ce363ff2503a188f24b9c2455f"} Mar 18 18:18:35.752290 master-0 kubenswrapper[30278]: I0318 18:18:35.749820 30278 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kl89c" Mar 18 18:18:35.827956 master-0 kubenswrapper[30278]: I0318 18:18:35.827863 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5chp5\" (UniqueName: \"kubernetes.io/projected/677619bc-d70e-475e-a844-b177d2cadbd9-kube-api-access-5chp5\") pod \"677619bc-d70e-475e-a844-b177d2cadbd9\" (UID: \"677619bc-d70e-475e-a844-b177d2cadbd9\") " Mar 18 18:18:35.828148 master-0 kubenswrapper[30278]: I0318 18:18:35.827976 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/677619bc-d70e-475e-a844-b177d2cadbd9-operator-scripts\") pod \"677619bc-d70e-475e-a844-b177d2cadbd9\" (UID: \"677619bc-d70e-475e-a844-b177d2cadbd9\") " Mar 18 18:18:35.830171 master-0 kubenswrapper[30278]: I0318 18:18:35.830084 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/677619bc-d70e-475e-a844-b177d2cadbd9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "677619bc-d70e-475e-a844-b177d2cadbd9" (UID: "677619bc-d70e-475e-a844-b177d2cadbd9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:35.830499 master-0 kubenswrapper[30278]: I0318 18:18:35.830425 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/677619bc-d70e-475e-a844-b177d2cadbd9-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:35.858679 master-0 kubenswrapper[30278]: I0318 18:18:35.858590 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/677619bc-d70e-475e-a844-b177d2cadbd9-kube-api-access-5chp5" (OuterVolumeSpecName: "kube-api-access-5chp5") pod "677619bc-d70e-475e-a844-b177d2cadbd9" (UID: "677619bc-d70e-475e-a844-b177d2cadbd9"). 
InnerVolumeSpecName "kube-api-access-5chp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:35.901302 master-0 kubenswrapper[30278]: I0318 18:18:35.898763 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rgrfw" Mar 18 18:18:35.920350 master-0 kubenswrapper[30278]: I0318 18:18:35.914684 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-1f97-account-create-update-bc5tw" Mar 18 18:18:35.942468 master-0 kubenswrapper[30278]: I0318 18:18:35.934739 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5562q\" (UniqueName: \"kubernetes.io/projected/5edc1dc4-2f2a-4eff-bc50-10382bc71d27-kube-api-access-5562q\") pod \"5edc1dc4-2f2a-4eff-bc50-10382bc71d27\" (UID: \"5edc1dc4-2f2a-4eff-bc50-10382bc71d27\") " Mar 18 18:18:35.942468 master-0 kubenswrapper[30278]: I0318 18:18:35.935096 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edc1dc4-2f2a-4eff-bc50-10382bc71d27-operator-scripts\") pod \"5edc1dc4-2f2a-4eff-bc50-10382bc71d27\" (UID: \"5edc1dc4-2f2a-4eff-bc50-10382bc71d27\") " Mar 18 18:18:35.942468 master-0 kubenswrapper[30278]: I0318 18:18:35.935151 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vjlb\" (UniqueName: \"kubernetes.io/projected/594ed543-14e4-4a71-8eb9-3482fa67fc1d-kube-api-access-9vjlb\") pod \"594ed543-14e4-4a71-8eb9-3482fa67fc1d\" (UID: \"594ed543-14e4-4a71-8eb9-3482fa67fc1d\") " Mar 18 18:18:35.942468 master-0 kubenswrapper[30278]: I0318 18:18:35.935243 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/594ed543-14e4-4a71-8eb9-3482fa67fc1d-operator-scripts\") pod \"594ed543-14e4-4a71-8eb9-3482fa67fc1d\" (UID: 
\"594ed543-14e4-4a71-8eb9-3482fa67fc1d\") " Mar 18 18:18:35.942468 master-0 kubenswrapper[30278]: I0318 18:18:35.936306 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5chp5\" (UniqueName: \"kubernetes.io/projected/677619bc-d70e-475e-a844-b177d2cadbd9-kube-api-access-5chp5\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:35.942468 master-0 kubenswrapper[30278]: I0318 18:18:35.937016 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5edc1dc4-2f2a-4eff-bc50-10382bc71d27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5edc1dc4-2f2a-4eff-bc50-10382bc71d27" (UID: "5edc1dc4-2f2a-4eff-bc50-10382bc71d27"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:35.942468 master-0 kubenswrapper[30278]: I0318 18:18:35.937131 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/594ed543-14e4-4a71-8eb9-3482fa67fc1d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "594ed543-14e4-4a71-8eb9-3482fa67fc1d" (UID: "594ed543-14e4-4a71-8eb9-3482fa67fc1d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:35.944789 master-0 kubenswrapper[30278]: I0318 18:18:35.944738 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5edc1dc4-2f2a-4eff-bc50-10382bc71d27-kube-api-access-5562q" (OuterVolumeSpecName: "kube-api-access-5562q") pod "5edc1dc4-2f2a-4eff-bc50-10382bc71d27" (UID: "5edc1dc4-2f2a-4eff-bc50-10382bc71d27"). InnerVolumeSpecName "kube-api-access-5562q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:35.953436 master-0 kubenswrapper[30278]: I0318 18:18:35.949437 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/594ed543-14e4-4a71-8eb9-3482fa67fc1d-kube-api-access-9vjlb" (OuterVolumeSpecName: "kube-api-access-9vjlb") pod "594ed543-14e4-4a71-8eb9-3482fa67fc1d" (UID: "594ed543-14e4-4a71-8eb9-3482fa67fc1d"). InnerVolumeSpecName "kube-api-access-9vjlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:36.037545 master-0 kubenswrapper[30278]: I0318 18:18:36.037484 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5562q\" (UniqueName: \"kubernetes.io/projected/5edc1dc4-2f2a-4eff-bc50-10382bc71d27-kube-api-access-5562q\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:36.037545 master-0 kubenswrapper[30278]: I0318 18:18:36.037530 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5edc1dc4-2f2a-4eff-bc50-10382bc71d27-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:36.037545 master-0 kubenswrapper[30278]: I0318 18:18:36.037541 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vjlb\" (UniqueName: \"kubernetes.io/projected/594ed543-14e4-4a71-8eb9-3482fa67fc1d-kube-api-access-9vjlb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:36.037545 master-0 kubenswrapper[30278]: I0318 18:18:36.037552 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/594ed543-14e4-4a71-8eb9-3482fa67fc1d-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:36.117313 master-0 kubenswrapper[30278]: I0318 18:18:36.114531 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xntzs-config-m5q6f"] Mar 18 18:18:36.155754 master-0 kubenswrapper[30278]: I0318 18:18:36.155704 30278 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rgrfw" event={"ID":"594ed543-14e4-4a71-8eb9-3482fa67fc1d","Type":"ContainerDied","Data":"26e867fa1a626912cc3e845f2e8d6ab41e33a237515067098a77ccc16492b001"} Mar 18 18:18:36.155922 master-0 kubenswrapper[30278]: I0318 18:18:36.155824 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rgrfw" Mar 18 18:18:36.156014 master-0 kubenswrapper[30278]: I0318 18:18:36.155994 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26e867fa1a626912cc3e845f2e8d6ab41e33a237515067098a77ccc16492b001" Mar 18 18:18:36.164685 master-0 kubenswrapper[30278]: I0318 18:18:36.162039 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kl89c" event={"ID":"677619bc-d70e-475e-a844-b177d2cadbd9","Type":"ContainerDied","Data":"dc53e68dc9b0b45a54c54c8951df5517bf93f59dfd22e339c52b8a1a4d4e1cdd"} Mar 18 18:18:36.164685 master-0 kubenswrapper[30278]: I0318 18:18:36.162177 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc53e68dc9b0b45a54c54c8951df5517bf93f59dfd22e339c52b8a1a4d4e1cdd" Mar 18 18:18:36.164685 master-0 kubenswrapper[30278]: I0318 18:18:36.162438 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kl89c" Mar 18 18:18:36.179231 master-0 kubenswrapper[30278]: I0318 18:18:36.179142 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-1f97-account-create-update-bc5tw" event={"ID":"5edc1dc4-2f2a-4eff-bc50-10382bc71d27","Type":"ContainerDied","Data":"0d045eed67666e57dd651216a24ee600a4d3137c79e1c8e67be03fa25db78eb8"} Mar 18 18:18:36.179231 master-0 kubenswrapper[30278]: I0318 18:18:36.179191 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-1f97-account-create-update-bc5tw" Mar 18 18:18:36.179600 master-0 kubenswrapper[30278]: I0318 18:18:36.179216 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d045eed67666e57dd651216a24ee600a4d3137c79e1c8e67be03fa25db78eb8" Mar 18 18:18:36.193782 master-0 kubenswrapper[30278]: I0318 18:18:36.193728 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"cde74017299391d770fcd41dacb8e7f2bd5c70d42d1a7bde38611020813e6ef6"} Mar 18 18:18:36.193936 master-0 kubenswrapper[30278]: I0318 18:18:36.193919 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"14c3b3d7efd697d7fc369dc374f13cbdd026c65e918ff842a5df91de0f0f6d12"} Mar 18 18:18:36.709187 master-0 kubenswrapper[30278]: I0318 18:18:36.709130 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-984d-account-create-update-tqdfv" Mar 18 18:18:36.874795 master-0 kubenswrapper[30278]: I0318 18:18:36.874086 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dskzk\" (UniqueName: \"kubernetes.io/projected/90da1e72-16d6-4b7c-9ea2-75800f09f684-kube-api-access-dskzk\") pod \"90da1e72-16d6-4b7c-9ea2-75800f09f684\" (UID: \"90da1e72-16d6-4b7c-9ea2-75800f09f684\") " Mar 18 18:18:36.874795 master-0 kubenswrapper[30278]: I0318 18:18:36.874462 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90da1e72-16d6-4b7c-9ea2-75800f09f684-operator-scripts\") pod \"90da1e72-16d6-4b7c-9ea2-75800f09f684\" (UID: \"90da1e72-16d6-4b7c-9ea2-75800f09f684\") " Mar 18 18:18:36.875534 master-0 kubenswrapper[30278]: I0318 18:18:36.875478 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90da1e72-16d6-4b7c-9ea2-75800f09f684-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "90da1e72-16d6-4b7c-9ea2-75800f09f684" (UID: "90da1e72-16d6-4b7c-9ea2-75800f09f684"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:36.882886 master-0 kubenswrapper[30278]: I0318 18:18:36.876240 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/90da1e72-16d6-4b7c-9ea2-75800f09f684-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:36.882886 master-0 kubenswrapper[30278]: I0318 18:18:36.882845 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90da1e72-16d6-4b7c-9ea2-75800f09f684-kube-api-access-dskzk" (OuterVolumeSpecName: "kube-api-access-dskzk") pod "90da1e72-16d6-4b7c-9ea2-75800f09f684" (UID: "90da1e72-16d6-4b7c-9ea2-75800f09f684"). 
InnerVolumeSpecName "kube-api-access-dskzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:36.979301 master-0 kubenswrapper[30278]: I0318 18:18:36.979008 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dskzk\" (UniqueName: \"kubernetes.io/projected/90da1e72-16d6-4b7c-9ea2-75800f09f684-kube-api-access-dskzk\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:37.216370 master-0 kubenswrapper[30278]: I0318 18:18:37.216312 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xntzs-config-m5q6f" event={"ID":"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8","Type":"ContainerStarted","Data":"844666b31f555d1f241e2bf72292cd9ee160903f58170efe6e318dda13b0da28"} Mar 18 18:18:37.228966 master-0 kubenswrapper[30278]: I0318 18:18:37.216391 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xntzs-config-m5q6f" event={"ID":"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8","Type":"ContainerStarted","Data":"f339a3d06370b7c74671a9549735876d88418f89e3079acd86686b9bfd822572"} Mar 18 18:18:37.241937 master-0 kubenswrapper[30278]: I0318 18:18:37.241199 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xntzs-config-m5q6f" podStartSLOduration=3.241181284 podStartE2EDuration="3.241181284s" podCreationTimestamp="2026-03-18 18:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:18:37.237926646 +0000 UTC m=+1086.405111241" watchObservedRunningTime="2026-03-18 18:18:37.241181284 +0000 UTC m=+1086.408365879" Mar 18 18:18:37.245911 master-0 kubenswrapper[30278]: I0318 18:18:37.244960 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-984d-account-create-update-tqdfv" event={"ID":"90da1e72-16d6-4b7c-9ea2-75800f09f684","Type":"ContainerDied","Data":"8d9a51010b679499729321c78245e757515eda0fa79624f20828e7cdbdfd99ae"} 
Mar 18 18:18:37.245911 master-0 kubenswrapper[30278]: I0318 18:18:37.245029 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d9a51010b679499729321c78245e757515eda0fa79624f20828e7cdbdfd99ae" Mar 18 18:18:37.245911 master-0 kubenswrapper[30278]: I0318 18:18:37.245119 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-984d-account-create-update-tqdfv" Mar 18 18:18:37.270781 master-0 kubenswrapper[30278]: I0318 18:18:37.270717 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"46234f253cb00d226008d1d1360d611222c760b31d631f9f4993d402bbc6adfd"} Mar 18 18:18:37.270781 master-0 kubenswrapper[30278]: I0318 18:18:37.270776 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"e05db0931f394d8f5c4e848d68c873e636b69fe0a7b63b0a6da28068807f48a9"} Mar 18 18:18:37.270781 master-0 kubenswrapper[30278]: I0318 18:18:37.270789 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"466e0d3c8d976d2dc2e1645a35efed3a71059d10901dfbebc754fccdfd5c6f30"} Mar 18 18:18:38.294852 master-0 kubenswrapper[30278]: I0318 18:18:38.294784 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"d0bae0ba71f7513cc6a3dcf780753430694cecd680a35bd4fb47b014562b4e79"} Mar 18 18:18:40.337388 master-0 kubenswrapper[30278]: I0318 18:18:40.337315 30278 generic.go:334] "Generic (PLEG): container finished" podID="c521ef3e-16b2-4bc3-a67b-3c86ff255bd8" containerID="844666b31f555d1f241e2bf72292cd9ee160903f58170efe6e318dda13b0da28" exitCode=0 Mar 18 
18:18:40.337889 master-0 kubenswrapper[30278]: I0318 18:18:40.337438 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xntzs-config-m5q6f" event={"ID":"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8","Type":"ContainerDied","Data":"844666b31f555d1f241e2bf72292cd9ee160903f58170efe6e318dda13b0da28"} Mar 18 18:18:41.385513 master-0 kubenswrapper[30278]: I0318 18:18:41.384299 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8ntbw" event={"ID":"e74e301d-4637-4d16-a125-a44a5470a4ac","Type":"ContainerStarted","Data":"3a280414da8a04a2cd1ebde3265fc9500fb64e24de155f65f37b5659c0868c66"} Mar 18 18:18:41.396929 master-0 kubenswrapper[30278]: I0318 18:18:41.395531 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff27830b-378b-4338-ac41-041a9d78ed62","Type":"ContainerStarted","Data":"bfe510bda8d136b924aa09c31e3e88ad31e3600531971ada3b1f47fc4c89cf8c"} Mar 18 18:18:41.416863 master-0 kubenswrapper[30278]: I0318 18:18:41.416726 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8ntbw" podStartSLOduration=3.358236412 podStartE2EDuration="10.416687759s" podCreationTimestamp="2026-03-18 18:18:31 +0000 UTC" firstStartedPulling="2026-03-18 18:18:33.158182709 +0000 UTC m=+1082.325367304" lastFinishedPulling="2026-03-18 18:18:40.216634046 +0000 UTC m=+1089.383818651" observedRunningTime="2026-03-18 18:18:41.408810867 +0000 UTC m=+1090.575995462" watchObservedRunningTime="2026-03-18 18:18:41.416687759 +0000 UTC m=+1090.583872354" Mar 18 18:18:41.479817 master-0 kubenswrapper[30278]: I0318 18:18:41.479659 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.972893575 podStartE2EDuration="50.479579223s" podCreationTimestamp="2026-03-18 18:17:51 +0000 UTC" firstStartedPulling="2026-03-18 18:18:25.929092427 +0000 UTC m=+1075.096277022" 
lastFinishedPulling="2026-03-18 18:18:35.435778075 +0000 UTC m=+1084.602962670" observedRunningTime="2026-03-18 18:18:41.45496823 +0000 UTC m=+1090.622152845" watchObservedRunningTime="2026-03-18 18:18:41.479579223 +0000 UTC m=+1090.646763818" Mar 18 18:18:41.913705 master-0 kubenswrapper[30278]: I0318 18:18:41.913646 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7595586f5-65zhn"] Mar 18 18:18:41.914320 master-0 kubenswrapper[30278]: E0318 18:18:41.914301 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="677619bc-d70e-475e-a844-b177d2cadbd9" containerName="mariadb-database-create" Mar 18 18:18:41.914379 master-0 kubenswrapper[30278]: I0318 18:18:41.914322 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="677619bc-d70e-475e-a844-b177d2cadbd9" containerName="mariadb-database-create" Mar 18 18:18:41.914379 master-0 kubenswrapper[30278]: E0318 18:18:41.914349 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90da1e72-16d6-4b7c-9ea2-75800f09f684" containerName="mariadb-account-create-update" Mar 18 18:18:41.914379 master-0 kubenswrapper[30278]: I0318 18:18:41.914355 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="90da1e72-16d6-4b7c-9ea2-75800f09f684" containerName="mariadb-account-create-update" Mar 18 18:18:41.914476 master-0 kubenswrapper[30278]: E0318 18:18:41.914380 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5edc1dc4-2f2a-4eff-bc50-10382bc71d27" containerName="mariadb-account-create-update" Mar 18 18:18:41.914476 master-0 kubenswrapper[30278]: I0318 18:18:41.914386 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5edc1dc4-2f2a-4eff-bc50-10382bc71d27" containerName="mariadb-account-create-update" Mar 18 18:18:41.914476 master-0 kubenswrapper[30278]: E0318 18:18:41.914411 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="594ed543-14e4-4a71-8eb9-3482fa67fc1d" containerName="mariadb-database-create" Mar 18 
18:18:41.914476 master-0 kubenswrapper[30278]: I0318 18:18:41.914418 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="594ed543-14e4-4a71-8eb9-3482fa67fc1d" containerName="mariadb-database-create" Mar 18 18:18:41.915016 master-0 kubenswrapper[30278]: I0318 18:18:41.914993 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="677619bc-d70e-475e-a844-b177d2cadbd9" containerName="mariadb-database-create" Mar 18 18:18:41.915711 master-0 kubenswrapper[30278]: I0318 18:18:41.915014 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="594ed543-14e4-4a71-8eb9-3482fa67fc1d" containerName="mariadb-database-create" Mar 18 18:18:41.915711 master-0 kubenswrapper[30278]: I0318 18:18:41.915045 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="5edc1dc4-2f2a-4eff-bc50-10382bc71d27" containerName="mariadb-account-create-update" Mar 18 18:18:41.915711 master-0 kubenswrapper[30278]: I0318 18:18:41.915077 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="90da1e72-16d6-4b7c-9ea2-75800f09f684" containerName="mariadb-account-create-update" Mar 18 18:18:41.920755 master-0 kubenswrapper[30278]: I0318 18:18:41.920715 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:41.926647 master-0 kubenswrapper[30278]: I0318 18:18:41.926597 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 18 18:18:41.964973 master-0 kubenswrapper[30278]: I0318 18:18:41.960766 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7595586f5-65zhn"] Mar 18 18:18:42.001794 master-0 kubenswrapper[30278]: I0318 18:18:42.001703 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-dns-swift-storage-0\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.001794 master-0 kubenswrapper[30278]: I0318 18:18:42.001791 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx2rj\" (UniqueName: \"kubernetes.io/projected/8c241dad-460b-41da-b26d-a8d64e7d803a-kube-api-access-sx2rj\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.002073 master-0 kubenswrapper[30278]: I0318 18:18:42.002028 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-ovsdbserver-sb\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.002116 master-0 kubenswrapper[30278]: I0318 18:18:42.002092 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-dns-svc\") pod 
\"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.002264 master-0 kubenswrapper[30278]: I0318 18:18:42.002235 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-ovsdbserver-nb\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.002474 master-0 kubenswrapper[30278]: I0318 18:18:42.002411 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-config\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.105014 master-0 kubenswrapper[30278]: I0318 18:18:42.104956 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-dns-swift-storage-0\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.105323 master-0 kubenswrapper[30278]: I0318 18:18:42.105308 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx2rj\" (UniqueName: \"kubernetes.io/projected/8c241dad-460b-41da-b26d-a8d64e7d803a-kube-api-access-sx2rj\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.105441 master-0 kubenswrapper[30278]: I0318 18:18:42.105425 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-ovsdbserver-sb\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.105616 master-0 kubenswrapper[30278]: I0318 18:18:42.105579 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-dns-svc\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.105877 master-0 kubenswrapper[30278]: I0318 18:18:42.105845 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-ovsdbserver-nb\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.106000 master-0 kubenswrapper[30278]: I0318 18:18:42.105983 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-config\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.107727 master-0 kubenswrapper[30278]: I0318 18:18:42.107707 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-ovsdbserver-sb\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.109101 master-0 kubenswrapper[30278]: I0318 18:18:42.108484 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-config\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.111182 master-0 kubenswrapper[30278]: I0318 18:18:42.109837 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-dns-swift-storage-0\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.111182 master-0 kubenswrapper[30278]: I0318 18:18:42.109769 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-dns-svc\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.114845 master-0 kubenswrapper[30278]: I0318 18:18:42.114525 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-ovsdbserver-nb\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.136380 master-0 kubenswrapper[30278]: I0318 18:18:42.132026 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx2rj\" (UniqueName: \"kubernetes.io/projected/8c241dad-460b-41da-b26d-a8d64e7d803a-kube-api-access-sx2rj\") pod \"dnsmasq-dns-7595586f5-65zhn\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:42.280831 master-0 kubenswrapper[30278]: I0318 18:18:42.280659 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:49.182895 master-0 kubenswrapper[30278]: I0318 18:18:49.181082 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:49.325650 master-0 kubenswrapper[30278]: I0318 18:18:49.325576 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-258zk\" (UniqueName: \"kubernetes.io/projected/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-kube-api-access-258zk\") pod \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " Mar 18 18:18:49.326210 master-0 kubenswrapper[30278]: I0318 18:18:49.325724 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-additional-scripts\") pod \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " Mar 18 18:18:49.326210 master-0 kubenswrapper[30278]: I0318 18:18:49.325848 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-run\") pod \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " Mar 18 18:18:49.326210 master-0 kubenswrapper[30278]: I0318 18:18:49.325936 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-run-ovn\") pod \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " Mar 18 18:18:49.326210 master-0 kubenswrapper[30278]: I0318 18:18:49.325997 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-log-ovn\") pod \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " Mar 18 18:18:49.326210 master-0 kubenswrapper[30278]: I0318 18:18:49.326025 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-scripts\") pod \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\" (UID: \"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8\") " Mar 18 18:18:49.326473 master-0 kubenswrapper[30278]: I0318 18:18:49.326318 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c521ef3e-16b2-4bc3-a67b-3c86ff255bd8" (UID: "c521ef3e-16b2-4bc3-a67b-3c86ff255bd8"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:18:49.326473 master-0 kubenswrapper[30278]: I0318 18:18:49.326417 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-run" (OuterVolumeSpecName: "var-run") pod "c521ef3e-16b2-4bc3-a67b-3c86ff255bd8" (UID: "c521ef3e-16b2-4bc3-a67b-3c86ff255bd8"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:18:49.326568 master-0 kubenswrapper[30278]: I0318 18:18:49.326482 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c521ef3e-16b2-4bc3-a67b-3c86ff255bd8" (UID: "c521ef3e-16b2-4bc3-a67b-3c86ff255bd8"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:18:49.326611 master-0 kubenswrapper[30278]: I0318 18:18:49.326536 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c521ef3e-16b2-4bc3-a67b-3c86ff255bd8" (UID: "c521ef3e-16b2-4bc3-a67b-3c86ff255bd8"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:49.326913 master-0 kubenswrapper[30278]: I0318 18:18:49.326879 30278 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:49.326913 master-0 kubenswrapper[30278]: I0318 18:18:49.326906 30278 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:49.327015 master-0 kubenswrapper[30278]: I0318 18:18:49.326919 30278 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-additional-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:49.327015 master-0 kubenswrapper[30278]: I0318 18:18:49.326933 30278 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-var-run\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:49.327878 master-0 kubenswrapper[30278]: I0318 18:18:49.327809 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-scripts" (OuterVolumeSpecName: "scripts") pod "c521ef3e-16b2-4bc3-a67b-3c86ff255bd8" (UID: 
"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:49.330754 master-0 kubenswrapper[30278]: I0318 18:18:49.330669 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-kube-api-access-258zk" (OuterVolumeSpecName: "kube-api-access-258zk") pod "c521ef3e-16b2-4bc3-a67b-3c86ff255bd8" (UID: "c521ef3e-16b2-4bc3-a67b-3c86ff255bd8"). InnerVolumeSpecName "kube-api-access-258zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:49.429592 master-0 kubenswrapper[30278]: I0318 18:18:49.429472 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:49.429592 master-0 kubenswrapper[30278]: I0318 18:18:49.429541 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-258zk\" (UniqueName: \"kubernetes.io/projected/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8-kube-api-access-258zk\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:49.527401 master-0 kubenswrapper[30278]: I0318 18:18:49.527342 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xntzs-config-m5q6f" event={"ID":"c521ef3e-16b2-4bc3-a67b-3c86ff255bd8","Type":"ContainerDied","Data":"f339a3d06370b7c74671a9549735876d88418f89e3079acd86686b9bfd822572"} Mar 18 18:18:49.527401 master-0 kubenswrapper[30278]: I0318 18:18:49.527406 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f339a3d06370b7c74671a9549735876d88418f89e3079acd86686b9bfd822572" Mar 18 18:18:49.527656 master-0 kubenswrapper[30278]: I0318 18:18:49.527494 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xntzs-config-m5q6f" Mar 18 18:18:49.536628 master-0 kubenswrapper[30278]: I0318 18:18:49.536555 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7595586f5-65zhn"] Mar 18 18:18:49.538963 master-0 kubenswrapper[30278]: W0318 18:18:49.538903 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c241dad_460b_41da_b26d_a8d64e7d803a.slice/crio-7912e63b5f8ce09441a491c262fac82f800a5ef9349b7b830b8ee01f2b5e3d7e WatchSource:0}: Error finding container 7912e63b5f8ce09441a491c262fac82f800a5ef9349b7b830b8ee01f2b5e3d7e: Status 404 returned error can't find the container with id 7912e63b5f8ce09441a491c262fac82f800a5ef9349b7b830b8ee01f2b5e3d7e Mar 18 18:18:49.539049 master-0 kubenswrapper[30278]: I0318 18:18:49.538980 30278 generic.go:334] "Generic (PLEG): container finished" podID="e74e301d-4637-4d16-a125-a44a5470a4ac" containerID="3a280414da8a04a2cd1ebde3265fc9500fb64e24de155f65f37b5659c0868c66" exitCode=0 Mar 18 18:18:49.539049 master-0 kubenswrapper[30278]: I0318 18:18:49.539024 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8ntbw" event={"ID":"e74e301d-4637-4d16-a125-a44a5470a4ac","Type":"ContainerDied","Data":"3a280414da8a04a2cd1ebde3265fc9500fb64e24de155f65f37b5659c0868c66"} Mar 18 18:18:50.300729 master-0 kubenswrapper[30278]: I0318 18:18:50.300552 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xntzs-config-m5q6f"] Mar 18 18:18:50.312804 master-0 kubenswrapper[30278]: I0318 18:18:50.312726 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xntzs-config-m5q6f"] Mar 18 18:18:50.555385 master-0 kubenswrapper[30278]: I0318 18:18:50.555317 30278 generic.go:334] "Generic (PLEG): container finished" podID="8c241dad-460b-41da-b26d-a8d64e7d803a" 
containerID="ea8dae639c1bad33556579588b993245591e6cf6ca1c7e8f3e9c3ff65dc087e2" exitCode=0 Mar 18 18:18:50.555385 master-0 kubenswrapper[30278]: I0318 18:18:50.555372 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7595586f5-65zhn" event={"ID":"8c241dad-460b-41da-b26d-a8d64e7d803a","Type":"ContainerDied","Data":"ea8dae639c1bad33556579588b993245591e6cf6ca1c7e8f3e9c3ff65dc087e2"} Mar 18 18:18:50.555702 master-0 kubenswrapper[30278]: I0318 18:18:50.555419 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7595586f5-65zhn" event={"ID":"8c241dad-460b-41da-b26d-a8d64e7d803a","Type":"ContainerStarted","Data":"7912e63b5f8ce09441a491c262fac82f800a5ef9349b7b830b8ee01f2b5e3d7e"} Mar 18 18:18:50.557646 master-0 kubenswrapper[30278]: I0318 18:18:50.557530 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8jvr2" event={"ID":"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e","Type":"ContainerStarted","Data":"f2afc3a340d8bd8f0a25947752ace23263bb74350aa0b395b28ec18f336be7ca"} Mar 18 18:18:50.667395 master-0 kubenswrapper[30278]: I0318 18:18:50.667203 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-8jvr2" podStartSLOduration=3.640319404 podStartE2EDuration="20.667173587s" podCreationTimestamp="2026-03-18 18:18:30 +0000 UTC" firstStartedPulling="2026-03-18 18:18:32.010729713 +0000 UTC m=+1081.177914308" lastFinishedPulling="2026-03-18 18:18:49.037583896 +0000 UTC m=+1098.204768491" observedRunningTime="2026-03-18 18:18:50.654824375 +0000 UTC m=+1099.822008980" watchObservedRunningTime="2026-03-18 18:18:50.667173587 +0000 UTC m=+1099.834358192" Mar 18 18:18:51.079001 master-0 kubenswrapper[30278]: I0318 18:18:51.078624 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c521ef3e-16b2-4bc3-a67b-3c86ff255bd8" path="/var/lib/kubelet/pods/c521ef3e-16b2-4bc3-a67b-3c86ff255bd8/volumes" Mar 18 18:18:51.104929 master-0 
kubenswrapper[30278]: I0318 18:18:51.104869 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8ntbw" Mar 18 18:18:51.183415 master-0 kubenswrapper[30278]: I0318 18:18:51.179909 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e74e301d-4637-4d16-a125-a44a5470a4ac-combined-ca-bundle\") pod \"e74e301d-4637-4d16-a125-a44a5470a4ac\" (UID: \"e74e301d-4637-4d16-a125-a44a5470a4ac\") " Mar 18 18:18:51.183415 master-0 kubenswrapper[30278]: I0318 18:18:51.180096 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk8lq\" (UniqueName: \"kubernetes.io/projected/e74e301d-4637-4d16-a125-a44a5470a4ac-kube-api-access-pk8lq\") pod \"e74e301d-4637-4d16-a125-a44a5470a4ac\" (UID: \"e74e301d-4637-4d16-a125-a44a5470a4ac\") " Mar 18 18:18:51.183415 master-0 kubenswrapper[30278]: I0318 18:18:51.180247 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e74e301d-4637-4d16-a125-a44a5470a4ac-config-data\") pod \"e74e301d-4637-4d16-a125-a44a5470a4ac\" (UID: \"e74e301d-4637-4d16-a125-a44a5470a4ac\") " Mar 18 18:18:51.185014 master-0 kubenswrapper[30278]: I0318 18:18:51.184935 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e74e301d-4637-4d16-a125-a44a5470a4ac-kube-api-access-pk8lq" (OuterVolumeSpecName: "kube-api-access-pk8lq") pod "e74e301d-4637-4d16-a125-a44a5470a4ac" (UID: "e74e301d-4637-4d16-a125-a44a5470a4ac"). InnerVolumeSpecName "kube-api-access-pk8lq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:51.204612 master-0 kubenswrapper[30278]: I0318 18:18:51.204524 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e74e301d-4637-4d16-a125-a44a5470a4ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e74e301d-4637-4d16-a125-a44a5470a4ac" (UID: "e74e301d-4637-4d16-a125-a44a5470a4ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:18:51.246360 master-0 kubenswrapper[30278]: I0318 18:18:51.246166 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e74e301d-4637-4d16-a125-a44a5470a4ac-config-data" (OuterVolumeSpecName: "config-data") pod "e74e301d-4637-4d16-a125-a44a5470a4ac" (UID: "e74e301d-4637-4d16-a125-a44a5470a4ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:18:51.287882 master-0 kubenswrapper[30278]: I0318 18:18:51.284710 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e74e301d-4637-4d16-a125-a44a5470a4ac-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:51.287882 master-0 kubenswrapper[30278]: I0318 18:18:51.284793 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk8lq\" (UniqueName: \"kubernetes.io/projected/e74e301d-4637-4d16-a125-a44a5470a4ac-kube-api-access-pk8lq\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:51.287882 master-0 kubenswrapper[30278]: I0318 18:18:51.284822 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e74e301d-4637-4d16-a125-a44a5470a4ac-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:51.576024 master-0 kubenswrapper[30278]: I0318 18:18:51.575950 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-sync-8ntbw" event={"ID":"e74e301d-4637-4d16-a125-a44a5470a4ac","Type":"ContainerDied","Data":"688cc353e037e682babdf8b5425ffc1975a75419e7dd33e358a5dee36a64cca9"} Mar 18 18:18:51.576024 master-0 kubenswrapper[30278]: I0318 18:18:51.575990 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8ntbw" Mar 18 18:18:51.576024 master-0 kubenswrapper[30278]: I0318 18:18:51.576010 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="688cc353e037e682babdf8b5425ffc1975a75419e7dd33e358a5dee36a64cca9" Mar 18 18:18:51.579896 master-0 kubenswrapper[30278]: I0318 18:18:51.579856 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7595586f5-65zhn" event={"ID":"8c241dad-460b-41da-b26d-a8d64e7d803a","Type":"ContainerStarted","Data":"7a81a883114e1a12177ca8d6f7382b82c0367b325e8605a697b9c287f186227d"} Mar 18 18:18:51.580580 master-0 kubenswrapper[30278]: I0318 18:18:51.580020 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:51.624103 master-0 kubenswrapper[30278]: I0318 18:18:51.624005 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7595586f5-65zhn" podStartSLOduration=10.623974789 podStartE2EDuration="10.623974789s" podCreationTimestamp="2026-03-18 18:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:18:51.604669139 +0000 UTC m=+1100.771853734" watchObservedRunningTime="2026-03-18 18:18:51.623974789 +0000 UTC m=+1100.791159384" Mar 18 18:18:53.041255 master-0 kubenswrapper[30278]: I0318 18:18:53.041140 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kwm5v"] Mar 18 18:18:53.042383 master-0 kubenswrapper[30278]: E0318 18:18:53.041879 30278 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e74e301d-4637-4d16-a125-a44a5470a4ac" containerName="keystone-db-sync" Mar 18 18:18:53.042383 master-0 kubenswrapper[30278]: I0318 18:18:53.041906 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="e74e301d-4637-4d16-a125-a44a5470a4ac" containerName="keystone-db-sync" Mar 18 18:18:53.042383 master-0 kubenswrapper[30278]: E0318 18:18:53.041932 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c521ef3e-16b2-4bc3-a67b-3c86ff255bd8" containerName="ovn-config" Mar 18 18:18:53.042383 master-0 kubenswrapper[30278]: I0318 18:18:53.041941 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="c521ef3e-16b2-4bc3-a67b-3c86ff255bd8" containerName="ovn-config" Mar 18 18:18:53.042383 master-0 kubenswrapper[30278]: I0318 18:18:53.042259 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="c521ef3e-16b2-4bc3-a67b-3c86ff255bd8" containerName="ovn-config" Mar 18 18:18:53.042383 master-0 kubenswrapper[30278]: I0318 18:18:53.042379 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="e74e301d-4637-4d16-a125-a44a5470a4ac" containerName="keystone-db-sync" Mar 18 18:18:53.043450 master-0 kubenswrapper[30278]: I0318 18:18:53.043374 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.051440 master-0 kubenswrapper[30278]: I0318 18:18:53.047823 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 18 18:18:53.051440 master-0 kubenswrapper[30278]: I0318 18:18:53.048233 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 18 18:18:53.051440 master-0 kubenswrapper[30278]: I0318 18:18:53.048548 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 18 18:18:53.051440 master-0 kubenswrapper[30278]: I0318 18:18:53.048778 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 18 18:18:53.131193 master-0 kubenswrapper[30278]: I0318 18:18:53.131114 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-fernet-keys\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.132481 master-0 kubenswrapper[30278]: I0318 18:18:53.132434 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-combined-ca-bundle\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.132581 master-0 kubenswrapper[30278]: I0318 18:18:53.132498 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-config-data\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 
18:18:53.132581 master-0 kubenswrapper[30278]: I0318 18:18:53.132537 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-scripts\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.132689 master-0 kubenswrapper[30278]: I0318 18:18:53.132581 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-credential-keys\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.132795 master-0 kubenswrapper[30278]: I0318 18:18:53.132709 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-956g2\" (UniqueName: \"kubernetes.io/projected/5fa13ce1-ac91-4c75-8846-7679dfbd543b-kube-api-access-956g2\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.207330 master-0 kubenswrapper[30278]: I0318 18:18:53.206121 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kwm5v"] Mar 18 18:18:53.256309 master-0 kubenswrapper[30278]: I0318 18:18:53.255736 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-combined-ca-bundle\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.256309 master-0 kubenswrapper[30278]: I0318 18:18:53.255849 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-config-data\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.256309 master-0 kubenswrapper[30278]: I0318 18:18:53.255878 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-scripts\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.256309 master-0 kubenswrapper[30278]: I0318 18:18:53.255921 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-credential-keys\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.256309 master-0 kubenswrapper[30278]: I0318 18:18:53.255949 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-956g2\" (UniqueName: \"kubernetes.io/projected/5fa13ce1-ac91-4c75-8846-7679dfbd543b-kube-api-access-956g2\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.256309 master-0 kubenswrapper[30278]: I0318 18:18:53.256022 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-fernet-keys\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.290638 master-0 kubenswrapper[30278]: I0318 18:18:53.286701 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7595586f5-65zhn"] Mar 18 18:18:53.290638 master-0 kubenswrapper[30278]: 
I0318 18:18:53.288015 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-combined-ca-bundle\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.295784 master-0 kubenswrapper[30278]: I0318 18:18:53.294977 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-credential-keys\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.298734 master-0 kubenswrapper[30278]: I0318 18:18:53.298671 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-scripts\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.303315 master-0 kubenswrapper[30278]: I0318 18:18:53.299581 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-config-data\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.314519 master-0 kubenswrapper[30278]: I0318 18:18:53.312725 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-956g2\" (UniqueName: \"kubernetes.io/projected/5fa13ce1-ac91-4c75-8846-7679dfbd543b-kube-api-access-956g2\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.333308 master-0 kubenswrapper[30278]: I0318 18:18:53.315934 30278 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-fernet-keys\") pod \"keystone-bootstrap-kwm5v\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.370433 master-0 kubenswrapper[30278]: I0318 18:18:53.370354 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b778949-qc575"] Mar 18 18:18:53.380471 master-0 kubenswrapper[30278]: I0318 18:18:53.379324 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.386779 master-0 kubenswrapper[30278]: I0318 18:18:53.386709 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:18:53.402577 master-0 kubenswrapper[30278]: I0318 18:18:53.400230 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b778949-qc575"] Mar 18 18:18:53.469333 master-0 kubenswrapper[30278]: I0318 18:18:53.466927 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-dns-swift-storage-0\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.469333 master-0 kubenswrapper[30278]: I0318 18:18:53.467011 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-dns-svc\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.469333 master-0 kubenswrapper[30278]: I0318 18:18:53.467056 30278 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h2jt\" (UniqueName: \"kubernetes.io/projected/6289380a-9a02-490f-9a25-aaa36affc839-kube-api-access-2h2jt\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.469333 master-0 kubenswrapper[30278]: I0318 18:18:53.467125 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-ovsdbserver-sb\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.469333 master-0 kubenswrapper[30278]: I0318 18:18:53.467146 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-ovsdbserver-nb\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.469333 master-0 kubenswrapper[30278]: I0318 18:18:53.467194 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-config\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.572729 master-0 kubenswrapper[30278]: I0318 18:18:53.572662 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h2jt\" (UniqueName: \"kubernetes.io/projected/6289380a-9a02-490f-9a25-aaa36affc839-kube-api-access-2h2jt\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 
18 18:18:53.573006 master-0 kubenswrapper[30278]: I0318 18:18:53.572780 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-ovsdbserver-sb\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.573006 master-0 kubenswrapper[30278]: I0318 18:18:53.572820 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-ovsdbserver-nb\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.573006 master-0 kubenswrapper[30278]: I0318 18:18:53.572857 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-config\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.573006 master-0 kubenswrapper[30278]: I0318 18:18:53.572950 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-dns-swift-storage-0\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.573006 master-0 kubenswrapper[30278]: I0318 18:18:53.572987 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-dns-svc\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 
18:18:53.574462 master-0 kubenswrapper[30278]: I0318 18:18:53.574422 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-ovsdbserver-nb\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.574975 master-0 kubenswrapper[30278]: I0318 18:18:53.574949 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-config\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.575552 master-0 kubenswrapper[30278]: I0318 18:18:53.575521 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-dns-swift-storage-0\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.576108 master-0 kubenswrapper[30278]: I0318 18:18:53.576079 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-dns-svc\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.576229 master-0 kubenswrapper[30278]: I0318 18:18:53.576206 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-ovsdbserver-sb\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.585471 master-0 
kubenswrapper[30278]: I0318 18:18:53.584987 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-vdk4s"] Mar 18 18:18:53.589336 master-0 kubenswrapper[30278]: I0318 18:18:53.587663 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-vdk4s" Mar 18 18:18:53.623465 master-0 kubenswrapper[30278]: I0318 18:18:53.623401 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7595586f5-65zhn" podUID="8c241dad-460b-41da-b26d-a8d64e7d803a" containerName="dnsmasq-dns" containerID="cri-o://7a81a883114e1a12177ca8d6f7382b82c0367b325e8605a697b9c287f186227d" gracePeriod=10 Mar 18 18:18:53.625104 master-0 kubenswrapper[30278]: I0318 18:18:53.625081 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-7kvlq"] Mar 18 18:18:53.635118 master-0 kubenswrapper[30278]: I0318 18:18:53.631435 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h2jt\" (UniqueName: \"kubernetes.io/projected/6289380a-9a02-490f-9a25-aaa36affc839-kube-api-access-2h2jt\") pod \"dnsmasq-dns-578b778949-qc575\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.641115 master-0 kubenswrapper[30278]: I0318 18:18:53.641052 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-7kvlq" Mar 18 18:18:53.673321 master-0 kubenswrapper[30278]: I0318 18:18:53.669189 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b9df6-db-sync-dxpjk"] Mar 18 18:18:53.673321 master-0 kubenswrapper[30278]: I0318 18:18:53.671549 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:53.684252 master-0 kubenswrapper[30278]: I0318 18:18:53.675492 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93a52bc7-f284-44c3-afd7-738547756dd4-operator-scripts\") pod \"ironic-db-create-vdk4s\" (UID: \"93a52bc7-f284-44c3-afd7-738547756dd4\") " pod="openstack/ironic-db-create-vdk4s" Mar 18 18:18:53.692317 master-0 kubenswrapper[30278]: I0318 18:18:53.680291 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 18 18:18:53.692317 master-0 kubenswrapper[30278]: I0318 18:18:53.680454 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 18 18:18:53.692317 master-0 kubenswrapper[30278]: I0318 18:18:53.684939 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b9df6-config-data" Mar 18 18:18:53.692317 master-0 kubenswrapper[30278]: I0318 18:18:53.685859 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b9df6-scripts" Mar 18 18:18:53.692844 master-0 kubenswrapper[30278]: I0318 18:18:53.684966 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6s9p\" (UniqueName: \"kubernetes.io/projected/93a52bc7-f284-44c3-afd7-738547756dd4-kube-api-access-g6s9p\") pod \"ironic-db-create-vdk4s\" (UID: \"93a52bc7-f284-44c3-afd7-738547756dd4\") " pod="openstack/ironic-db-create-vdk4s" Mar 18 18:18:53.734474 master-0 kubenswrapper[30278]: I0318 18:18:53.734432 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:53.798330 master-0 kubenswrapper[30278]: I0318 18:18:53.798078 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5b88faf-e795-428e-8c3b-5a81d27c4a63-config\") pod \"neutron-db-sync-7kvlq\" (UID: \"c5b88faf-e795-428e-8c3b-5a81d27c4a63\") " pod="openstack/neutron-db-sync-7kvlq" Mar 18 18:18:53.798330 master-0 kubenswrapper[30278]: I0318 18:18:53.798224 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8gqc\" (UniqueName: \"kubernetes.io/projected/c5b88faf-e795-428e-8c3b-5a81d27c4a63-kube-api-access-d8gqc\") pod \"neutron-db-sync-7kvlq\" (UID: \"c5b88faf-e795-428e-8c3b-5a81d27c4a63\") " pod="openstack/neutron-db-sync-7kvlq" Mar 18 18:18:53.798621 master-0 kubenswrapper[30278]: I0318 18:18:53.798376 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-scripts\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:53.798621 master-0 kubenswrapper[30278]: I0318 18:18:53.798513 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-combined-ca-bundle\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:53.798621 master-0 kubenswrapper[30278]: I0318 18:18:53.798542 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5b88faf-e795-428e-8c3b-5a81d27c4a63-combined-ca-bundle\") 
pod \"neutron-db-sync-7kvlq\" (UID: \"c5b88faf-e795-428e-8c3b-5a81d27c4a63\") " pod="openstack/neutron-db-sync-7kvlq" Mar 18 18:18:53.798764 master-0 kubenswrapper[30278]: I0318 18:18:53.798655 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-db-sync-config-data\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:53.798764 master-0 kubenswrapper[30278]: I0318 18:18:53.798753 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93a52bc7-f284-44c3-afd7-738547756dd4-operator-scripts\") pod \"ironic-db-create-vdk4s\" (UID: \"93a52bc7-f284-44c3-afd7-738547756dd4\") " pod="openstack/ironic-db-create-vdk4s" Mar 18 18:18:53.798847 master-0 kubenswrapper[30278]: I0318 18:18:53.798816 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-config-data\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:53.805316 master-0 kubenswrapper[30278]: I0318 18:18:53.798939 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xmsb\" (UniqueName: \"kubernetes.io/projected/47f543cd-d5bf-4421-aae3-516afd48c609-kube-api-access-9xmsb\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:53.805316 master-0 kubenswrapper[30278]: I0318 18:18:53.798981 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/47f543cd-d5bf-4421-aae3-516afd48c609-etc-machine-id\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:53.805316 master-0 kubenswrapper[30278]: I0318 18:18:53.799083 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6s9p\" (UniqueName: \"kubernetes.io/projected/93a52bc7-f284-44c3-afd7-738547756dd4-kube-api-access-g6s9p\") pod \"ironic-db-create-vdk4s\" (UID: \"93a52bc7-f284-44c3-afd7-738547756dd4\") " pod="openstack/ironic-db-create-vdk4s" Mar 18 18:18:53.819300 master-0 kubenswrapper[30278]: I0318 18:18:53.814376 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-7kvlq"] Mar 18 18:18:53.823397 master-0 kubenswrapper[30278]: I0318 18:18:53.821124 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93a52bc7-f284-44c3-afd7-738547756dd4-operator-scripts\") pod \"ironic-db-create-vdk4s\" (UID: \"93a52bc7-f284-44c3-afd7-738547756dd4\") " pod="openstack/ironic-db-create-vdk4s" Mar 18 18:18:53.839326 master-0 kubenswrapper[30278]: I0318 18:18:53.835870 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6s9p\" (UniqueName: \"kubernetes.io/projected/93a52bc7-f284-44c3-afd7-738547756dd4-kube-api-access-g6s9p\") pod \"ironic-db-create-vdk4s\" (UID: \"93a52bc7-f284-44c3-afd7-738547756dd4\") " pod="openstack/ironic-db-create-vdk4s" Mar 18 18:18:53.918314 master-0 kubenswrapper[30278]: I0318 18:18:53.913033 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xmsb\" (UniqueName: \"kubernetes.io/projected/47f543cd-d5bf-4421-aae3-516afd48c609-kube-api-access-9xmsb\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 
18:18:53.918314 master-0 kubenswrapper[30278]: I0318 18:18:53.913114 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47f543cd-d5bf-4421-aae3-516afd48c609-etc-machine-id\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:53.918314 master-0 kubenswrapper[30278]: I0318 18:18:53.913200 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5b88faf-e795-428e-8c3b-5a81d27c4a63-config\") pod \"neutron-db-sync-7kvlq\" (UID: \"c5b88faf-e795-428e-8c3b-5a81d27c4a63\") " pod="openstack/neutron-db-sync-7kvlq" Mar 18 18:18:53.918314 master-0 kubenswrapper[30278]: I0318 18:18:53.913230 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8gqc\" (UniqueName: \"kubernetes.io/projected/c5b88faf-e795-428e-8c3b-5a81d27c4a63-kube-api-access-d8gqc\") pod \"neutron-db-sync-7kvlq\" (UID: \"c5b88faf-e795-428e-8c3b-5a81d27c4a63\") " pod="openstack/neutron-db-sync-7kvlq" Mar 18 18:18:53.918314 master-0 kubenswrapper[30278]: I0318 18:18:53.913280 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-scripts\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:53.918314 master-0 kubenswrapper[30278]: I0318 18:18:53.913325 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5b88faf-e795-428e-8c3b-5a81d27c4a63-combined-ca-bundle\") pod \"neutron-db-sync-7kvlq\" (UID: \"c5b88faf-e795-428e-8c3b-5a81d27c4a63\") " pod="openstack/neutron-db-sync-7kvlq" Mar 18 18:18:53.918314 master-0 kubenswrapper[30278]: 
I0318 18:18:53.913342 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-combined-ca-bundle\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:53.918314 master-0 kubenswrapper[30278]: I0318 18:18:53.913377 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-db-sync-config-data\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:53.918314 master-0 kubenswrapper[30278]: I0318 18:18:53.913415 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-config-data\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:53.923424 master-0 kubenswrapper[30278]: I0318 18:18:53.920763 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47f543cd-d5bf-4421-aae3-516afd48c609-etc-machine-id\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:53.940213 master-0 kubenswrapper[30278]: I0318 18:18:53.940145 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-vdk4s"] Mar 18 18:18:53.970318 master-0 kubenswrapper[30278]: I0318 18:18:53.970103 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5b88faf-e795-428e-8c3b-5a81d27c4a63-combined-ca-bundle\") pod 
\"neutron-db-sync-7kvlq\" (UID: \"c5b88faf-e795-428e-8c3b-5a81d27c4a63\") " pod="openstack/neutron-db-sync-7kvlq" Mar 18 18:18:54.032037 master-0 kubenswrapper[30278]: I0318 18:18:53.996210 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-config-data\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:54.032037 master-0 kubenswrapper[30278]: I0318 18:18:54.030989 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-db-sync-dxpjk"] Mar 18 18:18:54.056600 master-0 kubenswrapper[30278]: I0318 18:18:54.056124 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5b88faf-e795-428e-8c3b-5a81d27c4a63-config\") pod \"neutron-db-sync-7kvlq\" (UID: \"c5b88faf-e795-428e-8c3b-5a81d27c4a63\") " pod="openstack/neutron-db-sync-7kvlq" Mar 18 18:18:54.058701 master-0 kubenswrapper[30278]: I0318 18:18:54.058661 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-scripts\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:54.074391 master-0 kubenswrapper[30278]: I0318 18:18:54.073963 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-db-sync-config-data\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:54.083117 master-0 kubenswrapper[30278]: I0318 18:18:54.080890 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-vdk4s" Mar 18 18:18:54.115198 master-0 kubenswrapper[30278]: I0318 18:18:54.114096 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-combined-ca-bundle\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:54.115198 master-0 kubenswrapper[30278]: I0318 18:18:54.114097 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8gqc\" (UniqueName: \"kubernetes.io/projected/c5b88faf-e795-428e-8c3b-5a81d27c4a63-kube-api-access-d8gqc\") pod \"neutron-db-sync-7kvlq\" (UID: \"c5b88faf-e795-428e-8c3b-5a81d27c4a63\") " pod="openstack/neutron-db-sync-7kvlq" Mar 18 18:18:54.115198 master-0 kubenswrapper[30278]: I0318 18:18:54.114307 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xmsb\" (UniqueName: \"kubernetes.io/projected/47f543cd-d5bf-4421-aae3-516afd48c609-kube-api-access-9xmsb\") pod \"cinder-b9df6-db-sync-dxpjk\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") " pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:54.219811 master-0 kubenswrapper[30278]: I0318 18:18:54.211899 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-f681-account-create-update-qx2xl"] Mar 18 18:18:54.219811 master-0 kubenswrapper[30278]: I0318 18:18:54.214148 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-f681-account-create-update-qx2xl" Mar 18 18:18:54.304407 master-0 kubenswrapper[30278]: I0318 18:18:54.283286 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Mar 18 18:18:54.309262 master-0 kubenswrapper[30278]: I0318 18:18:54.308648 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-7kvlq" Mar 18 18:18:54.348878 master-0 kubenswrapper[30278]: I0318 18:18:54.345197 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-f681-account-create-update-qx2xl"] Mar 18 18:18:54.401252 master-0 kubenswrapper[30278]: I0318 18:18:54.401191 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-rngq2"] Mar 18 18:18:54.406418 master-0 kubenswrapper[30278]: I0318 18:18:54.405934 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.412697 master-0 kubenswrapper[30278]: I0318 18:18:54.410834 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 18 18:18:54.412697 master-0 kubenswrapper[30278]: I0318 18:18:54.411324 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 18 18:18:54.500121 master-0 kubenswrapper[30278]: I0318 18:18:54.493566 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441-operator-scripts\") pod \"ironic-f681-account-create-update-qx2xl\" (UID: \"8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441\") " pod="openstack/ironic-f681-account-create-update-qx2xl" Mar 18 18:18:54.500121 master-0 kubenswrapper[30278]: I0318 18:18:54.493634 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdjfm\" (UniqueName: \"kubernetes.io/projected/8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441-kube-api-access-xdjfm\") pod \"ironic-f681-account-create-update-qx2xl\" (UID: \"8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441\") " pod="openstack/ironic-f681-account-create-update-qx2xl" Mar 18 18:18:54.500121 master-0 kubenswrapper[30278]: I0318 18:18:54.494000 30278 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-b9df6-db-sync-dxpjk" Mar 18 18:18:54.503782 master-0 kubenswrapper[30278]: I0318 18:18:54.503170 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rngq2"] Mar 18 18:18:54.521208 master-0 kubenswrapper[30278]: I0318 18:18:54.521111 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b778949-qc575"] Mar 18 18:18:54.536548 master-0 kubenswrapper[30278]: I0318 18:18:54.536486 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c74f744c5-h9zsh"] Mar 18 18:18:54.550550 master-0 kubenswrapper[30278]: I0318 18:18:54.549522 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.568386 master-0 kubenswrapper[30278]: I0318 18:18:54.567355 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c74f744c5-h9zsh"] Mar 18 18:18:54.614872 master-0 kubenswrapper[30278]: I0318 18:18:54.614799 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t692d\" (UniqueName: \"kubernetes.io/projected/fdcd674f-1047-437f-90ed-187b8b5eb882-kube-api-access-t692d\") pod \"placement-db-sync-rngq2\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.615170 master-0 kubenswrapper[30278]: I0318 18:18:54.614951 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-config-data\") pod \"placement-db-sync-rngq2\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.615510 master-0 kubenswrapper[30278]: I0318 18:18:54.615473 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-scripts\") pod \"placement-db-sync-rngq2\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.615674 master-0 kubenswrapper[30278]: I0318 18:18:54.615625 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdcd674f-1047-437f-90ed-187b8b5eb882-logs\") pod \"placement-db-sync-rngq2\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.616084 master-0 kubenswrapper[30278]: I0318 18:18:54.616044 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-combined-ca-bundle\") pod \"placement-db-sync-rngq2\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.616190 master-0 kubenswrapper[30278]: I0318 18:18:54.616156 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441-operator-scripts\") pod \"ironic-f681-account-create-update-qx2xl\" (UID: \"8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441\") " pod="openstack/ironic-f681-account-create-update-qx2xl" Mar 18 18:18:54.616637 master-0 kubenswrapper[30278]: I0318 18:18:54.616599 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdjfm\" (UniqueName: \"kubernetes.io/projected/8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441-kube-api-access-xdjfm\") pod \"ironic-f681-account-create-update-qx2xl\" (UID: \"8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441\") " pod="openstack/ironic-f681-account-create-update-qx2xl" Mar 18 18:18:54.619988 master-0 kubenswrapper[30278]: I0318 18:18:54.619798 30278 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441-operator-scripts\") pod \"ironic-f681-account-create-update-qx2xl\" (UID: \"8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441\") " pod="openstack/ironic-f681-account-create-update-qx2xl" Mar 18 18:18:54.648946 master-0 kubenswrapper[30278]: I0318 18:18:54.648887 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdjfm\" (UniqueName: \"kubernetes.io/projected/8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441-kube-api-access-xdjfm\") pod \"ironic-f681-account-create-update-qx2xl\" (UID: \"8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441\") " pod="openstack/ironic-f681-account-create-update-qx2xl" Mar 18 18:18:54.674652 master-0 kubenswrapper[30278]: I0318 18:18:54.674560 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kwm5v"] Mar 18 18:18:54.677633 master-0 kubenswrapper[30278]: I0318 18:18:54.677562 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kwm5v" event={"ID":"5fa13ce1-ac91-4c75-8846-7679dfbd543b","Type":"ContainerStarted","Data":"0ede28e869f430b7e2de730d7632fc2ec6beadcfdaf27b6db08fa8244177a137"} Mar 18 18:18:54.683830 master-0 kubenswrapper[30278]: I0318 18:18:54.683761 30278 generic.go:334] "Generic (PLEG): container finished" podID="8c241dad-460b-41da-b26d-a8d64e7d803a" containerID="7a81a883114e1a12177ca8d6f7382b82c0367b325e8605a697b9c287f186227d" exitCode=0 Mar 18 18:18:54.683830 master-0 kubenswrapper[30278]: I0318 18:18:54.683826 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7595586f5-65zhn" event={"ID":"8c241dad-460b-41da-b26d-a8d64e7d803a","Type":"ContainerDied","Data":"7a81a883114e1a12177ca8d6f7382b82c0367b325e8605a697b9c287f186227d"} Mar 18 18:18:54.720482 master-0 kubenswrapper[30278]: I0318 18:18:54.718423 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-config-data\") pod \"placement-db-sync-rngq2\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.720482 master-0 kubenswrapper[30278]: I0318 18:18:54.718545 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-ovsdbserver-sb\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.720482 master-0 kubenswrapper[30278]: I0318 18:18:54.718575 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-config\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.720482 master-0 kubenswrapper[30278]: I0318 18:18:54.718631 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-scripts\") pod \"placement-db-sync-rngq2\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.720482 master-0 kubenswrapper[30278]: I0318 18:18:54.718656 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq7tr\" (UniqueName: \"kubernetes.io/projected/febd1792-9c89-4923-b8b8-0e41a1be1f1c-kube-api-access-xq7tr\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.720482 master-0 kubenswrapper[30278]: I0318 18:18:54.718698 30278 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdcd674f-1047-437f-90ed-187b8b5eb882-logs\") pod \"placement-db-sync-rngq2\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.720482 master-0 kubenswrapper[30278]: I0318 18:18:54.718738 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-combined-ca-bundle\") pod \"placement-db-sync-rngq2\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.720482 master-0 kubenswrapper[30278]: I0318 18:18:54.718774 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-ovsdbserver-nb\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.720482 master-0 kubenswrapper[30278]: I0318 18:18:54.718802 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-dns-swift-storage-0\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.720482 master-0 kubenswrapper[30278]: I0318 18:18:54.718859 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t692d\" (UniqueName: \"kubernetes.io/projected/fdcd674f-1047-437f-90ed-187b8b5eb882-kube-api-access-t692d\") pod \"placement-db-sync-rngq2\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.720482 master-0 kubenswrapper[30278]: I0318 18:18:54.718881 30278 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-dns-svc\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.720482 master-0 kubenswrapper[30278]: I0318 18:18:54.720130 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdcd674f-1047-437f-90ed-187b8b5eb882-logs\") pod \"placement-db-sync-rngq2\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.731366 master-0 kubenswrapper[30278]: I0318 18:18:54.728238 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-scripts\") pod \"placement-db-sync-rngq2\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.731366 master-0 kubenswrapper[30278]: I0318 18:18:54.730599 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-combined-ca-bundle\") pod \"placement-db-sync-rngq2\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.731366 master-0 kubenswrapper[30278]: I0318 18:18:54.731012 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-config-data\") pod \"placement-db-sync-rngq2\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.752474 master-0 kubenswrapper[30278]: I0318 18:18:54.751256 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t692d\" (UniqueName: 
\"kubernetes.io/projected/fdcd674f-1047-437f-90ed-187b8b5eb882-kube-api-access-t692d\") pod \"placement-db-sync-rngq2\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:54.821763 master-0 kubenswrapper[30278]: I0318 18:18:54.821155 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-ovsdbserver-sb\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.821763 master-0 kubenswrapper[30278]: I0318 18:18:54.821245 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-config\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.821763 master-0 kubenswrapper[30278]: I0318 18:18:54.821315 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq7tr\" (UniqueName: \"kubernetes.io/projected/febd1792-9c89-4923-b8b8-0e41a1be1f1c-kube-api-access-xq7tr\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.821763 master-0 kubenswrapper[30278]: I0318 18:18:54.821414 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-ovsdbserver-nb\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.821763 master-0 kubenswrapper[30278]: I0318 18:18:54.821444 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-dns-swift-storage-0\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.821763 master-0 kubenswrapper[30278]: I0318 18:18:54.821496 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-dns-svc\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.824627 master-0 kubenswrapper[30278]: I0318 18:18:54.824411 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-dns-svc\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.827003 master-0 kubenswrapper[30278]: I0318 18:18:54.826637 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-ovsdbserver-sb\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.830306 master-0 kubenswrapper[30278]: I0318 18:18:54.827249 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-config\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.836980 master-0 kubenswrapper[30278]: I0318 18:18:54.836915 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-dns-swift-storage-0\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.841653 master-0 kubenswrapper[30278]: I0318 18:18:54.840421 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-ovsdbserver-nb\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.864162 master-0 kubenswrapper[30278]: I0318 18:18:54.862390 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq7tr\" (UniqueName: \"kubernetes.io/projected/febd1792-9c89-4923-b8b8-0e41a1be1f1c-kube-api-access-xq7tr\") pod \"dnsmasq-dns-c74f744c5-h9zsh\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:54.934521 master-0 kubenswrapper[30278]: I0318 18:18:54.934431 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-f681-account-create-update-qx2xl" Mar 18 18:18:54.991298 master-0 kubenswrapper[30278]: I0318 18:18:54.988511 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rngq2" Mar 18 18:18:55.005692 master-0 kubenswrapper[30278]: I0318 18:18:55.003424 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b778949-qc575"] Mar 18 18:18:55.028981 master-0 kubenswrapper[30278]: I0318 18:18:55.026841 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:55.248716 master-0 kubenswrapper[30278]: I0318 18:18:55.247191 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:55.328900 master-0 kubenswrapper[30278]: I0318 18:18:55.316553 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-7kvlq"] Mar 18 18:18:55.403573 master-0 kubenswrapper[30278]: I0318 18:18:55.374721 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-dns-svc\") pod \"8c241dad-460b-41da-b26d-a8d64e7d803a\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " Mar 18 18:18:55.403573 master-0 kubenswrapper[30278]: I0318 18:18:55.374858 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-ovsdbserver-nb\") pod \"8c241dad-460b-41da-b26d-a8d64e7d803a\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " Mar 18 18:18:55.403573 master-0 kubenswrapper[30278]: I0318 18:18:55.374986 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-config\") pod \"8c241dad-460b-41da-b26d-a8d64e7d803a\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " Mar 18 18:18:55.403573 master-0 kubenswrapper[30278]: I0318 18:18:55.375083 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx2rj\" (UniqueName: \"kubernetes.io/projected/8c241dad-460b-41da-b26d-a8d64e7d803a-kube-api-access-sx2rj\") pod \"8c241dad-460b-41da-b26d-a8d64e7d803a\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " Mar 18 18:18:55.403573 master-0 kubenswrapper[30278]: I0318 18:18:55.375117 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-dns-swift-storage-0\") pod 
\"8c241dad-460b-41da-b26d-a8d64e7d803a\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " Mar 18 18:18:55.403573 master-0 kubenswrapper[30278]: I0318 18:18:55.375212 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-ovsdbserver-sb\") pod \"8c241dad-460b-41da-b26d-a8d64e7d803a\" (UID: \"8c241dad-460b-41da-b26d-a8d64e7d803a\") " Mar 18 18:18:55.403573 master-0 kubenswrapper[30278]: I0318 18:18:55.394736 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c241dad-460b-41da-b26d-a8d64e7d803a-kube-api-access-sx2rj" (OuterVolumeSpecName: "kube-api-access-sx2rj") pod "8c241dad-460b-41da-b26d-a8d64e7d803a" (UID: "8c241dad-460b-41da-b26d-a8d64e7d803a"). InnerVolumeSpecName "kube-api-access-sx2rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:55.421312 master-0 kubenswrapper[30278]: I0318 18:18:55.420704 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx2rj\" (UniqueName: \"kubernetes.io/projected/8c241dad-460b-41da-b26d-a8d64e7d803a-kube-api-access-sx2rj\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:55.503059 master-0 kubenswrapper[30278]: I0318 18:18:55.492704 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c241dad-460b-41da-b26d-a8d64e7d803a" (UID: "8c241dad-460b-41da-b26d-a8d64e7d803a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:55.518312 master-0 kubenswrapper[30278]: I0318 18:18:55.512154 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c241dad-460b-41da-b26d-a8d64e7d803a" (UID: "8c241dad-460b-41da-b26d-a8d64e7d803a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:55.527534 master-0 kubenswrapper[30278]: I0318 18:18:55.527436 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-vdk4s"] Mar 18 18:18:55.549097 master-0 kubenswrapper[30278]: I0318 18:18:55.529551 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:55.549097 master-0 kubenswrapper[30278]: I0318 18:18:55.529577 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:55.557493 master-0 kubenswrapper[30278]: W0318 18:18:55.556244 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93a52bc7_f284_44c3_afd7_738547756dd4.slice/crio-87d90a4ac97602357b0a73c8562603eaea2d79261d9f701e1710e75018952914 WatchSource:0}: Error finding container 87d90a4ac97602357b0a73c8562603eaea2d79261d9f701e1710e75018952914: Status 404 returned error can't find the container with id 87d90a4ac97602357b0a73c8562603eaea2d79261d9f701e1710e75018952914 Mar 18 18:18:55.613736 master-0 kubenswrapper[30278]: I0318 18:18:55.613659 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-config" (OuterVolumeSpecName: "config") pod "8c241dad-460b-41da-b26d-a8d64e7d803a" (UID: "8c241dad-460b-41da-b26d-a8d64e7d803a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:55.623012 master-0 kubenswrapper[30278]: I0318 18:18:55.622908 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8c241dad-460b-41da-b26d-a8d64e7d803a" (UID: "8c241dad-460b-41da-b26d-a8d64e7d803a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:55.639039 master-0 kubenswrapper[30278]: I0318 18:18:55.638944 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:55.639039 master-0 kubenswrapper[30278]: I0318 18:18:55.639009 30278 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:55.639521 master-0 kubenswrapper[30278]: I0318 18:18:55.639157 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c241dad-460b-41da-b26d-a8d64e7d803a" (UID: "8c241dad-460b-41da-b26d-a8d64e7d803a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:55.702927 master-0 kubenswrapper[30278]: I0318 18:18:55.702837 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-db-sync-dxpjk"] Mar 18 18:18:55.730982 master-0 kubenswrapper[30278]: I0318 18:18:55.730679 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kwm5v" event={"ID":"5fa13ce1-ac91-4c75-8846-7679dfbd543b","Type":"ContainerStarted","Data":"19ba6ac1a7adc3781b9fc8ccb4cd5a1cf73198f2789227d7e11a56f78c34e3de"} Mar 18 18:18:55.744007 master-0 kubenswrapper[30278]: I0318 18:18:55.743029 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-vdk4s" event={"ID":"93a52bc7-f284-44c3-afd7-738547756dd4","Type":"ContainerStarted","Data":"87d90a4ac97602357b0a73c8562603eaea2d79261d9f701e1710e75018952914"} Mar 18 18:18:55.744220 master-0 kubenswrapper[30278]: I0318 18:18:55.744135 30278 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c241dad-460b-41da-b26d-a8d64e7d803a-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:55.753650 master-0 kubenswrapper[30278]: I0318 18:18:55.752792 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b778949-qc575" event={"ID":"6289380a-9a02-490f-9a25-aaa36affc839","Type":"ContainerStarted","Data":"60472d7dbc135079009d7f483f31cdb8db54d243aad5fbb56eb8e488d175f19b"} Mar 18 18:18:55.757972 master-0 kubenswrapper[30278]: I0318 18:18:55.757892 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kwm5v" podStartSLOduration=3.757867104 podStartE2EDuration="3.757867104s" podCreationTimestamp="2026-03-18 18:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:18:55.756657321 +0000 UTC m=+1104.923841916" 
watchObservedRunningTime="2026-03-18 18:18:55.757867104 +0000 UTC m=+1104.925051699" Mar 18 18:18:55.782828 master-0 kubenswrapper[30278]: I0318 18:18:55.777073 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7595586f5-65zhn" event={"ID":"8c241dad-460b-41da-b26d-a8d64e7d803a","Type":"ContainerDied","Data":"7912e63b5f8ce09441a491c262fac82f800a5ef9349b7b830b8ee01f2b5e3d7e"} Mar 18 18:18:55.782828 master-0 kubenswrapper[30278]: I0318 18:18:55.777138 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7595586f5-65zhn" Mar 18 18:18:55.782828 master-0 kubenswrapper[30278]: I0318 18:18:55.777172 30278 scope.go:117] "RemoveContainer" containerID="7a81a883114e1a12177ca8d6f7382b82c0367b325e8605a697b9c287f186227d" Mar 18 18:18:55.818423 master-0 kubenswrapper[30278]: I0318 18:18:55.817984 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7kvlq" event={"ID":"c5b88faf-e795-428e-8c3b-5a81d27c4a63","Type":"ContainerStarted","Data":"17e0c1cebd9bfb0a798eefa9aff161a9086970ef11dc3ee4c84558885f39d039"} Mar 18 18:18:55.889606 master-0 kubenswrapper[30278]: I0318 18:18:55.889545 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7595586f5-65zhn"] Mar 18 18:18:55.913011 master-0 kubenswrapper[30278]: I0318 18:18:55.910099 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7595586f5-65zhn"] Mar 18 18:18:55.933882 master-0 kubenswrapper[30278]: I0318 18:18:55.928638 30278 scope.go:117] "RemoveContainer" containerID="ea8dae639c1bad33556579588b993245591e6cf6ca1c7e8f3e9c3ff65dc087e2" Mar 18 18:18:55.960604 master-0 kubenswrapper[30278]: I0318 18:18:55.960525 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-f681-account-create-update-qx2xl"] Mar 18 18:18:55.995140 master-0 kubenswrapper[30278]: W0318 18:18:55.995099 30278 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fa4d0fa_8b6c_4d8c_acf3_3e438a0c9441.slice/crio-b2a5942cd182a92b9b1c59c0e6b4a6716ad8c68c1ed8d89d9e5afa39e9a9e04f WatchSource:0}: Error finding container b2a5942cd182a92b9b1c59c0e6b4a6716ad8c68c1ed8d89d9e5afa39e9a9e04f: Status 404 returned error can't find the container with id b2a5942cd182a92b9b1c59c0e6b4a6716ad8c68c1ed8d89d9e5afa39e9a9e04f Mar 18 18:18:56.158311 master-0 kubenswrapper[30278]: I0318 18:18:56.157426 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c74f744c5-h9zsh"] Mar 18 18:18:56.251302 master-0 kubenswrapper[30278]: I0318 18:18:56.249605 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-rngq2"] Mar 18 18:18:56.871566 master-0 kubenswrapper[30278]: I0318 18:18:56.870426 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rngq2" event={"ID":"fdcd674f-1047-437f-90ed-187b8b5eb882","Type":"ContainerStarted","Data":"f1e6065d349f72e99a49e0b23713aafb6837627c5e9bfc88e7313bbe167f6c83"} Mar 18 18:18:56.882043 master-0 kubenswrapper[30278]: I0318 18:18:56.873959 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-db-sync-dxpjk" event={"ID":"47f543cd-d5bf-4421-aae3-516afd48c609","Type":"ContainerStarted","Data":"7fc557b94f8a0c72e26d7c9c3686f23d710bfdf15e08e903058c45ee06f352c4"} Mar 18 18:18:56.882043 master-0 kubenswrapper[30278]: I0318 18:18:56.876853 30278 generic.go:334] "Generic (PLEG): container finished" podID="6289380a-9a02-490f-9a25-aaa36affc839" containerID="9f789f8419e1cff2a987b5a54398c190159ff3339baae9d90bad6d9264845204" exitCode=0 Mar 18 18:18:56.882043 master-0 kubenswrapper[30278]: I0318 18:18:56.877248 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b778949-qc575" 
event={"ID":"6289380a-9a02-490f-9a25-aaa36affc839","Type":"ContainerDied","Data":"9f789f8419e1cff2a987b5a54398c190159ff3339baae9d90bad6d9264845204"} Mar 18 18:18:56.893695 master-0 kubenswrapper[30278]: I0318 18:18:56.893600 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7kvlq" event={"ID":"c5b88faf-e795-428e-8c3b-5a81d27c4a63","Type":"ContainerStarted","Data":"5aef6bdfc2372b6574b3548d6a02f06098f5653e91ae94679695f8dc98e67a7e"} Mar 18 18:18:56.904383 master-0 kubenswrapper[30278]: I0318 18:18:56.896782 30278 generic.go:334] "Generic (PLEG): container finished" podID="febd1792-9c89-4923-b8b8-0e41a1be1f1c" containerID="0d23360114c86fd68c8778bc6bac919bc1305b85cd38e0a0c0aff1c2f4857c9a" exitCode=0 Mar 18 18:18:56.904383 master-0 kubenswrapper[30278]: I0318 18:18:56.896871 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" event={"ID":"febd1792-9c89-4923-b8b8-0e41a1be1f1c","Type":"ContainerDied","Data":"0d23360114c86fd68c8778bc6bac919bc1305b85cd38e0a0c0aff1c2f4857c9a"} Mar 18 18:18:56.904383 master-0 kubenswrapper[30278]: I0318 18:18:56.896909 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" event={"ID":"febd1792-9c89-4923-b8b8-0e41a1be1f1c","Type":"ContainerStarted","Data":"1386f4b98a1e85219deaa09dca69421577ce22b6568570d2a4dde2e682c4f364"} Mar 18 18:18:56.907478 master-0 kubenswrapper[30278]: I0318 18:18:56.907214 30278 generic.go:334] "Generic (PLEG): container finished" podID="93a52bc7-f284-44c3-afd7-738547756dd4" containerID="356cccd88ce9157d5feba3e9ae53765bd5905d3c544e7fa6f3aa03588e8b9aa9" exitCode=0 Mar 18 18:18:56.907478 master-0 kubenswrapper[30278]: I0318 18:18:56.907319 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-vdk4s" event={"ID":"93a52bc7-f284-44c3-afd7-738547756dd4","Type":"ContainerDied","Data":"356cccd88ce9157d5feba3e9ae53765bd5905d3c544e7fa6f3aa03588e8b9aa9"} Mar 18 
18:18:56.916895 master-0 kubenswrapper[30278]: I0318 18:18:56.914586 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f681-account-create-update-qx2xl" event={"ID":"8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441","Type":"ContainerStarted","Data":"bfb4ef6a2c143dd9fbc927e45866450608c373f1228cc58bbe20f709ef808629"} Mar 18 18:18:56.916895 master-0 kubenswrapper[30278]: I0318 18:18:56.914664 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f681-account-create-update-qx2xl" event={"ID":"8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441","Type":"ContainerStarted","Data":"b2a5942cd182a92b9b1c59c0e6b4a6716ad8c68c1ed8d89d9e5afa39e9a9e04f"} Mar 18 18:18:57.015334 master-0 kubenswrapper[30278]: I0318 18:18:57.009043 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-7kvlq" podStartSLOduration=4.009007473 podStartE2EDuration="4.009007473s" podCreationTimestamp="2026-03-18 18:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:18:56.95657953 +0000 UTC m=+1106.123764125" watchObservedRunningTime="2026-03-18 18:18:57.009007473 +0000 UTC m=+1106.176192068" Mar 18 18:18:57.096690 master-0 kubenswrapper[30278]: I0318 18:18:57.095234 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c241dad-460b-41da-b26d-a8d64e7d803a" path="/var/lib/kubelet/pods/8c241dad-460b-41da-b26d-a8d64e7d803a/volumes" Mar 18 18:18:57.603520 master-0 kubenswrapper[30278]: I0318 18:18:57.603444 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:57.765019 master-0 kubenswrapper[30278]: I0318 18:18:57.764841 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-dns-swift-storage-0\") pod \"6289380a-9a02-490f-9a25-aaa36affc839\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " Mar 18 18:18:57.765019 master-0 kubenswrapper[30278]: I0318 18:18:57.764970 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-ovsdbserver-sb\") pod \"6289380a-9a02-490f-9a25-aaa36affc839\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " Mar 18 18:18:57.765342 master-0 kubenswrapper[30278]: I0318 18:18:57.765073 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-ovsdbserver-nb\") pod \"6289380a-9a02-490f-9a25-aaa36affc839\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " Mar 18 18:18:57.765395 master-0 kubenswrapper[30278]: I0318 18:18:57.765364 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-dns-svc\") pod \"6289380a-9a02-490f-9a25-aaa36affc839\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " Mar 18 18:18:57.765444 master-0 kubenswrapper[30278]: I0318 18:18:57.765424 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-config\") pod \"6289380a-9a02-490f-9a25-aaa36affc839\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " Mar 18 18:18:57.765480 master-0 kubenswrapper[30278]: I0318 18:18:57.765460 30278 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2h2jt\" (UniqueName: \"kubernetes.io/projected/6289380a-9a02-490f-9a25-aaa36affc839-kube-api-access-2h2jt\") pod \"6289380a-9a02-490f-9a25-aaa36affc839\" (UID: \"6289380a-9a02-490f-9a25-aaa36affc839\") " Mar 18 18:18:57.772412 master-0 kubenswrapper[30278]: I0318 18:18:57.772339 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6289380a-9a02-490f-9a25-aaa36affc839-kube-api-access-2h2jt" (OuterVolumeSpecName: "kube-api-access-2h2jt") pod "6289380a-9a02-490f-9a25-aaa36affc839" (UID: "6289380a-9a02-490f-9a25-aaa36affc839"). InnerVolumeSpecName "kube-api-access-2h2jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:57.790845 master-0 kubenswrapper[30278]: I0318 18:18:57.790738 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6289380a-9a02-490f-9a25-aaa36affc839" (UID: "6289380a-9a02-490f-9a25-aaa36affc839"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:57.798030 master-0 kubenswrapper[30278]: I0318 18:18:57.797952 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6289380a-9a02-490f-9a25-aaa36affc839" (UID: "6289380a-9a02-490f-9a25-aaa36affc839"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:57.799680 master-0 kubenswrapper[30278]: I0318 18:18:57.799595 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6289380a-9a02-490f-9a25-aaa36affc839" (UID: "6289380a-9a02-490f-9a25-aaa36affc839"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:57.806621 master-0 kubenswrapper[30278]: I0318 18:18:57.806473 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-config" (OuterVolumeSpecName: "config") pod "6289380a-9a02-490f-9a25-aaa36affc839" (UID: "6289380a-9a02-490f-9a25-aaa36affc839"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:57.815025 master-0 kubenswrapper[30278]: I0318 18:18:57.814969 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6289380a-9a02-490f-9a25-aaa36affc839" (UID: "6289380a-9a02-490f-9a25-aaa36affc839"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:57.871762 master-0 kubenswrapper[30278]: I0318 18:18:57.870928 30278 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:57.871762 master-0 kubenswrapper[30278]: I0318 18:18:57.870993 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:57.871762 master-0 kubenswrapper[30278]: I0318 18:18:57.871006 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h2jt\" (UniqueName: \"kubernetes.io/projected/6289380a-9a02-490f-9a25-aaa36affc839-kube-api-access-2h2jt\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:57.871762 master-0 kubenswrapper[30278]: I0318 18:18:57.871018 30278 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:57.871762 master-0 kubenswrapper[30278]: I0318 18:18:57.871029 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:57.871762 master-0 kubenswrapper[30278]: I0318 18:18:57.871038 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6289380a-9a02-490f-9a25-aaa36affc839-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:57.933328 master-0 kubenswrapper[30278]: I0318 18:18:57.933205 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" 
event={"ID":"febd1792-9c89-4923-b8b8-0e41a1be1f1c","Type":"ContainerStarted","Data":"c27def519c22909a531884df533f9961a3e2ad9da3f520eb969db8c43fcbd006"} Mar 18 18:18:57.933662 master-0 kubenswrapper[30278]: I0318 18:18:57.933344 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:18:57.936110 master-0 kubenswrapper[30278]: I0318 18:18:57.935996 30278 generic.go:334] "Generic (PLEG): container finished" podID="8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441" containerID="bfb4ef6a2c143dd9fbc927e45866450608c373f1228cc58bbe20f709ef808629" exitCode=0 Mar 18 18:18:57.936110 master-0 kubenswrapper[30278]: I0318 18:18:57.936079 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f681-account-create-update-qx2xl" event={"ID":"8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441","Type":"ContainerDied","Data":"bfb4ef6a2c143dd9fbc927e45866450608c373f1228cc58bbe20f709ef808629"} Mar 18 18:18:57.941569 master-0 kubenswrapper[30278]: I0318 18:18:57.941454 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b778949-qc575" event={"ID":"6289380a-9a02-490f-9a25-aaa36affc839","Type":"ContainerDied","Data":"60472d7dbc135079009d7f483f31cdb8db54d243aad5fbb56eb8e488d175f19b"} Mar 18 18:18:57.941569 master-0 kubenswrapper[30278]: I0318 18:18:57.941551 30278 scope.go:117] "RemoveContainer" containerID="9f789f8419e1cff2a987b5a54398c190159ff3339baae9d90bad6d9264845204" Mar 18 18:18:57.941816 master-0 kubenswrapper[30278]: I0318 18:18:57.941555 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b778949-qc575" Mar 18 18:18:58.645300 master-0 kubenswrapper[30278]: I0318 18:18:58.643637 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" podStartSLOduration=4.64361025 podStartE2EDuration="4.64361025s" podCreationTimestamp="2026-03-18 18:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:18:58.620045965 +0000 UTC m=+1107.787230560" watchObservedRunningTime="2026-03-18 18:18:58.64361025 +0000 UTC m=+1107.810794845" Mar 18 18:18:58.706233 master-0 kubenswrapper[30278]: I0318 18:18:58.706150 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-f681-account-create-update-qx2xl" Mar 18 18:18:58.707274 master-0 kubenswrapper[30278]: I0318 18:18:58.707220 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-vdk4s" Mar 18 18:18:58.854074 master-0 kubenswrapper[30278]: I0318 18:18:58.853748 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdjfm\" (UniqueName: \"kubernetes.io/projected/8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441-kube-api-access-xdjfm\") pod \"8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441\" (UID: \"8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441\") " Mar 18 18:18:58.854074 master-0 kubenswrapper[30278]: I0318 18:18:58.853840 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93a52bc7-f284-44c3-afd7-738547756dd4-operator-scripts\") pod \"93a52bc7-f284-44c3-afd7-738547756dd4\" (UID: \"93a52bc7-f284-44c3-afd7-738547756dd4\") " Mar 18 18:18:58.854074 master-0 kubenswrapper[30278]: I0318 18:18:58.854000 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441-operator-scripts\") pod \"8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441\" (UID: \"8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441\") " Mar 18 18:18:58.854795 master-0 kubenswrapper[30278]: I0318 18:18:58.854233 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6s9p\" (UniqueName: \"kubernetes.io/projected/93a52bc7-f284-44c3-afd7-738547756dd4-kube-api-access-g6s9p\") pod \"93a52bc7-f284-44c3-afd7-738547756dd4\" (UID: \"93a52bc7-f284-44c3-afd7-738547756dd4\") " Mar 18 18:18:58.854795 master-0 kubenswrapper[30278]: I0318 18:18:58.854645 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93a52bc7-f284-44c3-afd7-738547756dd4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93a52bc7-f284-44c3-afd7-738547756dd4" (UID: "93a52bc7-f284-44c3-afd7-738547756dd4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:58.855144 master-0 kubenswrapper[30278]: I0318 18:18:58.855095 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93a52bc7-f284-44c3-afd7-738547756dd4-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:58.855338 master-0 kubenswrapper[30278]: I0318 18:18:58.855236 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441" (UID: "8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:18:58.859207 master-0 kubenswrapper[30278]: I0318 18:18:58.859105 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93a52bc7-f284-44c3-afd7-738547756dd4-kube-api-access-g6s9p" (OuterVolumeSpecName: "kube-api-access-g6s9p") pod "93a52bc7-f284-44c3-afd7-738547756dd4" (UID: "93a52bc7-f284-44c3-afd7-738547756dd4"). InnerVolumeSpecName "kube-api-access-g6s9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:58.859501 master-0 kubenswrapper[30278]: I0318 18:18:58.859439 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441-kube-api-access-xdjfm" (OuterVolumeSpecName: "kube-api-access-xdjfm") pod "8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441" (UID: "8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441"). InnerVolumeSpecName "kube-api-access-xdjfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:18:58.961627 master-0 kubenswrapper[30278]: I0318 18:18:58.959167 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdjfm\" (UniqueName: \"kubernetes.io/projected/8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441-kube-api-access-xdjfm\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:58.961627 master-0 kubenswrapper[30278]: I0318 18:18:58.959235 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:58.961627 master-0 kubenswrapper[30278]: I0318 18:18:58.959249 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6s9p\" (UniqueName: \"kubernetes.io/projected/93a52bc7-f284-44c3-afd7-738547756dd4-kube-api-access-g6s9p\") on node \"master-0\" DevicePath \"\"" Mar 18 18:18:58.997869 master-0 kubenswrapper[30278]: I0318 18:18:58.997320 30278 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-vdk4s" event={"ID":"93a52bc7-f284-44c3-afd7-738547756dd4","Type":"ContainerDied","Data":"87d90a4ac97602357b0a73c8562603eaea2d79261d9f701e1710e75018952914"} Mar 18 18:18:58.997869 master-0 kubenswrapper[30278]: I0318 18:18:58.997403 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87d90a4ac97602357b0a73c8562603eaea2d79261d9f701e1710e75018952914" Mar 18 18:18:58.997869 master-0 kubenswrapper[30278]: I0318 18:18:58.997547 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-vdk4s" Mar 18 18:18:59.043649 master-0 kubenswrapper[30278]: I0318 18:18:59.042475 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f681-account-create-update-qx2xl" event={"ID":"8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441","Type":"ContainerDied","Data":"b2a5942cd182a92b9b1c59c0e6b4a6716ad8c68c1ed8d89d9e5afa39e9a9e04f"} Mar 18 18:18:59.043649 master-0 kubenswrapper[30278]: I0318 18:18:59.042551 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2a5942cd182a92b9b1c59c0e6b4a6716ad8c68c1ed8d89d9e5afa39e9a9e04f" Mar 18 18:18:59.043649 master-0 kubenswrapper[30278]: I0318 18:18:59.042669 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-f681-account-create-update-qx2xl" Mar 18 18:18:59.174519 master-0 kubenswrapper[30278]: I0318 18:18:59.174251 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b778949-qc575"] Mar 18 18:18:59.777613 master-0 kubenswrapper[30278]: I0318 18:18:59.773467 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b778949-qc575"] Mar 18 18:19:01.076196 master-0 kubenswrapper[30278]: I0318 18:19:01.075303 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6289380a-9a02-490f-9a25-aaa36affc839" path="/var/lib/kubelet/pods/6289380a-9a02-490f-9a25-aaa36affc839/volumes" Mar 18 18:19:03.140117 master-0 kubenswrapper[30278]: I0318 18:19:03.140056 30278 generic.go:334] "Generic (PLEG): container finished" podID="5fa13ce1-ac91-4c75-8846-7679dfbd543b" containerID="19ba6ac1a7adc3781b9fc8ccb4cd5a1cf73198f2789227d7e11a56f78c34e3de" exitCode=0 Mar 18 18:19:03.141191 master-0 kubenswrapper[30278]: I0318 18:19:03.140160 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kwm5v" event={"ID":"5fa13ce1-ac91-4c75-8846-7679dfbd543b","Type":"ContainerDied","Data":"19ba6ac1a7adc3781b9fc8ccb4cd5a1cf73198f2789227d7e11a56f78c34e3de"} Mar 18 18:19:03.147939 master-0 kubenswrapper[30278]: I0318 18:19:03.147868 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rngq2" event={"ID":"fdcd674f-1047-437f-90ed-187b8b5eb882","Type":"ContainerStarted","Data":"8114536c648e8ee4c6fac290e15293c386ac61167916bdac848f7c7443b8463b"} Mar 18 18:19:03.218862 master-0 kubenswrapper[30278]: I0318 18:19:03.218771 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-rngq2" podStartSLOduration=3.142556741 podStartE2EDuration="9.218696399s" podCreationTimestamp="2026-03-18 18:18:54 +0000 UTC" firstStartedPulling="2026-03-18 18:18:56.285847795 +0000 UTC 
m=+1105.453032390" lastFinishedPulling="2026-03-18 18:19:02.361987463 +0000 UTC m=+1111.529172048" observedRunningTime="2026-03-18 18:19:03.190800217 +0000 UTC m=+1112.357984812" watchObservedRunningTime="2026-03-18 18:19:03.218696399 +0000 UTC m=+1112.385881024" Mar 18 18:19:04.752254 master-0 kubenswrapper[30278]: I0318 18:19:04.752179 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:19:04.861100 master-0 kubenswrapper[30278]: I0318 18:19:04.860192 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-combined-ca-bundle\") pod \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " Mar 18 18:19:04.861100 master-0 kubenswrapper[30278]: I0318 18:19:04.860521 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-956g2\" (UniqueName: \"kubernetes.io/projected/5fa13ce1-ac91-4c75-8846-7679dfbd543b-kube-api-access-956g2\") pod \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " Mar 18 18:19:04.861100 master-0 kubenswrapper[30278]: I0318 18:19:04.860596 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-fernet-keys\") pod \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " Mar 18 18:19:04.861100 master-0 kubenswrapper[30278]: I0318 18:19:04.860650 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-scripts\") pod \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " Mar 18 18:19:04.861100 master-0 kubenswrapper[30278]: I0318 
18:19:04.860812 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-credential-keys\") pod \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " Mar 18 18:19:04.861100 master-0 kubenswrapper[30278]: I0318 18:19:04.860844 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-config-data\") pod \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\" (UID: \"5fa13ce1-ac91-4c75-8846-7679dfbd543b\") " Mar 18 18:19:04.869048 master-0 kubenswrapper[30278]: I0318 18:19:04.868987 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5fa13ce1-ac91-4c75-8846-7679dfbd543b" (UID: "5fa13ce1-ac91-4c75-8846-7679dfbd543b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:04.871548 master-0 kubenswrapper[30278]: I0318 18:19:04.871465 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fa13ce1-ac91-4c75-8846-7679dfbd543b-kube-api-access-956g2" (OuterVolumeSpecName: "kube-api-access-956g2") pod "5fa13ce1-ac91-4c75-8846-7679dfbd543b" (UID: "5fa13ce1-ac91-4c75-8846-7679dfbd543b"). InnerVolumeSpecName "kube-api-access-956g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:19:04.879820 master-0 kubenswrapper[30278]: I0318 18:19:04.879309 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-scripts" (OuterVolumeSpecName: "scripts") pod "5fa13ce1-ac91-4c75-8846-7679dfbd543b" (UID: "5fa13ce1-ac91-4c75-8846-7679dfbd543b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:04.897029 master-0 kubenswrapper[30278]: I0318 18:19:04.895320 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "5fa13ce1-ac91-4c75-8846-7679dfbd543b" (UID: "5fa13ce1-ac91-4c75-8846-7679dfbd543b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:04.943024 master-0 kubenswrapper[30278]: I0318 18:19:04.942961 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-ggb6f"] Mar 18 18:19:04.943687 master-0 kubenswrapper[30278]: E0318 18:19:04.943650 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6289380a-9a02-490f-9a25-aaa36affc839" containerName="init" Mar 18 18:19:04.943687 master-0 kubenswrapper[30278]: I0318 18:19:04.943673 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="6289380a-9a02-490f-9a25-aaa36affc839" containerName="init" Mar 18 18:19:04.943807 master-0 kubenswrapper[30278]: E0318 18:19:04.943704 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa13ce1-ac91-4c75-8846-7679dfbd543b" containerName="keystone-bootstrap" Mar 18 18:19:04.943807 master-0 kubenswrapper[30278]: I0318 18:19:04.943712 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa13ce1-ac91-4c75-8846-7679dfbd543b" containerName="keystone-bootstrap" Mar 18 18:19:04.943807 master-0 kubenswrapper[30278]: E0318 18:19:04.943756 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441" containerName="mariadb-account-create-update" Mar 18 18:19:04.943807 master-0 kubenswrapper[30278]: I0318 18:19:04.943763 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441" containerName="mariadb-account-create-update" Mar 18 18:19:04.943807 master-0 
kubenswrapper[30278]: E0318 18:19:04.943777 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c241dad-460b-41da-b26d-a8d64e7d803a" containerName="init" Mar 18 18:19:04.943807 master-0 kubenswrapper[30278]: I0318 18:19:04.943783 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c241dad-460b-41da-b26d-a8d64e7d803a" containerName="init" Mar 18 18:19:04.943807 master-0 kubenswrapper[30278]: E0318 18:19:04.943798 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93a52bc7-f284-44c3-afd7-738547756dd4" containerName="mariadb-database-create" Mar 18 18:19:04.943807 master-0 kubenswrapper[30278]: I0318 18:19:04.943807 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="93a52bc7-f284-44c3-afd7-738547756dd4" containerName="mariadb-database-create" Mar 18 18:19:04.944098 master-0 kubenswrapper[30278]: E0318 18:19:04.943830 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c241dad-460b-41da-b26d-a8d64e7d803a" containerName="dnsmasq-dns" Mar 18 18:19:04.944098 master-0 kubenswrapper[30278]: I0318 18:19:04.943837 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c241dad-460b-41da-b26d-a8d64e7d803a" containerName="dnsmasq-dns" Mar 18 18:19:04.944098 master-0 kubenswrapper[30278]: I0318 18:19:04.944055 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa13ce1-ac91-4c75-8846-7679dfbd543b" containerName="keystone-bootstrap" Mar 18 18:19:04.944098 master-0 kubenswrapper[30278]: I0318 18:19:04.944095 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="6289380a-9a02-490f-9a25-aaa36affc839" containerName="init" Mar 18 18:19:04.944223 master-0 kubenswrapper[30278]: I0318 18:19:04.944137 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c241dad-460b-41da-b26d-a8d64e7d803a" containerName="dnsmasq-dns" Mar 18 18:19:04.944223 master-0 kubenswrapper[30278]: I0318 18:19:04.944149 30278 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441" containerName="mariadb-account-create-update" Mar 18 18:19:04.944223 master-0 kubenswrapper[30278]: I0318 18:19:04.944158 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="93a52bc7-f284-44c3-afd7-738547756dd4" containerName="mariadb-database-create" Mar 18 18:19:04.945569 master-0 kubenswrapper[30278]: I0318 18:19:04.945543 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:04.953674 master-0 kubenswrapper[30278]: I0318 18:19:04.953461 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Mar 18 18:19:04.953965 master-0 kubenswrapper[30278]: I0318 18:19:04.953720 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Mar 18 18:19:04.980962 master-0 kubenswrapper[30278]: I0318 18:19:04.980853 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-config-data" (OuterVolumeSpecName: "config-data") pod "5fa13ce1-ac91-4c75-8846-7679dfbd543b" (UID: "5fa13ce1-ac91-4c75-8846-7679dfbd543b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:04.983618 master-0 kubenswrapper[30278]: I0318 18:19:04.983541 30278 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-credential-keys\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:04.983618 master-0 kubenswrapper[30278]: I0318 18:19:04.983618 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:04.983789 master-0 kubenswrapper[30278]: I0318 18:19:04.983631 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-956g2\" (UniqueName: \"kubernetes.io/projected/5fa13ce1-ac91-4c75-8846-7679dfbd543b-kube-api-access-956g2\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:04.983789 master-0 kubenswrapper[30278]: I0318 18:19:04.983644 30278 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-fernet-keys\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:04.983789 master-0 kubenswrapper[30278]: I0318 18:19:04.983655 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:05.002449 master-0 kubenswrapper[30278]: I0318 18:19:05.002372 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-ggb6f"] Mar 18 18:19:05.002746 master-0 kubenswrapper[30278]: I0318 18:19:05.002550 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fa13ce1-ac91-4c75-8846-7679dfbd543b" (UID: 
"5fa13ce1-ac91-4c75-8846-7679dfbd543b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:05.031704 master-0 kubenswrapper[30278]: I0318 18:19:05.031618 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:19:05.087325 master-0 kubenswrapper[30278]: I0318 18:19:05.087191 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ade5c277-043b-4e56-bc7c-63961acf67c4-config-data-merged\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.089512 master-0 kubenswrapper[30278]: I0318 18:19:05.087902 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-combined-ca-bundle\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.089512 master-0 kubenswrapper[30278]: I0318 18:19:05.087991 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/ade5c277-043b-4e56-bc7c-63961acf67c4-etc-podinfo\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.089512 master-0 kubenswrapper[30278]: I0318 18:19:05.088148 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-config-data\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.089512 master-0 
kubenswrapper[30278]: I0318 18:19:05.088373 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2w7h\" (UniqueName: \"kubernetes.io/projected/ade5c277-043b-4e56-bc7c-63961acf67c4-kube-api-access-t2w7h\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.089512 master-0 kubenswrapper[30278]: I0318 18:19:05.088453 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-scripts\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.089512 master-0 kubenswrapper[30278]: I0318 18:19:05.088579 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fa13ce1-ac91-4c75-8846-7679dfbd543b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:05.208098 master-0 kubenswrapper[30278]: I0318 18:19:05.207934 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2w7h\" (UniqueName: \"kubernetes.io/projected/ade5c277-043b-4e56-bc7c-63961acf67c4-kube-api-access-t2w7h\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.208395 master-0 kubenswrapper[30278]: I0318 18:19:05.208066 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-scripts\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.208824 master-0 kubenswrapper[30278]: I0318 18:19:05.208630 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ade5c277-043b-4e56-bc7c-63961acf67c4-config-data-merged\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.208824 master-0 kubenswrapper[30278]: I0318 18:19:05.208752 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-combined-ca-bundle\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.208931 master-0 kubenswrapper[30278]: I0318 18:19:05.208862 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/ade5c277-043b-4e56-bc7c-63961acf67c4-etc-podinfo\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.209134 master-0 kubenswrapper[30278]: I0318 18:19:05.209107 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-config-data\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.209983 master-0 kubenswrapper[30278]: I0318 18:19:05.209737 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ade5c277-043b-4e56-bc7c-63961acf67c4-config-data-merged\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.210735 master-0 kubenswrapper[30278]: I0318 18:19:05.210711 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd749f44f-tjfmr"] Mar 18 18:19:05.211083 master-0 
kubenswrapper[30278]: I0318 18:19:05.211033 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kwm5v" event={"ID":"5fa13ce1-ac91-4c75-8846-7679dfbd543b","Type":"ContainerDied","Data":"0ede28e869f430b7e2de730d7632fc2ec6beadcfdaf27b6db08fa8244177a137"} Mar 18 18:19:05.211189 master-0 kubenswrapper[30278]: I0318 18:19:05.211090 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ede28e869f430b7e2de730d7632fc2ec6beadcfdaf27b6db08fa8244177a137" Mar 18 18:19:05.213185 master-0 kubenswrapper[30278]: I0318 18:19:05.213161 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-scripts\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.213306 master-0 kubenswrapper[30278]: I0318 18:19:05.213159 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr" podUID="111f82f6-d141-4c76-be8f-026f90f1858b" containerName="dnsmasq-dns" containerID="cri-o://2d4e7c538f3bf356ef1ea6888f439b1ec53892ef7b374ae1e01a22b433dc92cd" gracePeriod=10 Mar 18 18:19:05.214560 master-0 kubenswrapper[30278]: I0318 18:19:05.214491 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kwm5v" Mar 18 18:19:05.225543 master-0 kubenswrapper[30278]: I0318 18:19:05.225159 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/ade5c277-043b-4e56-bc7c-63961acf67c4-etc-podinfo\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.225543 master-0 kubenswrapper[30278]: I0318 18:19:05.225540 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-combined-ca-bundle\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.241635 master-0 kubenswrapper[30278]: I0318 18:19:05.241162 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2w7h\" (UniqueName: \"kubernetes.io/projected/ade5c277-043b-4e56-bc7c-63961acf67c4-kube-api-access-t2w7h\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.247191 master-0 kubenswrapper[30278]: I0318 18:19:05.247101 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-config-data\") pod \"ironic-db-sync-ggb6f\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.370413 master-0 kubenswrapper[30278]: I0318 18:19:05.367601 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:05.426058 master-0 kubenswrapper[30278]: I0318 18:19:05.424327 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kwm5v"] Mar 18 18:19:05.442381 master-0 kubenswrapper[30278]: I0318 18:19:05.442287 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kwm5v"] Mar 18 18:19:05.545427 master-0 kubenswrapper[30278]: I0318 18:19:05.537590 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8zspc"] Mar 18 18:19:05.545427 master-0 kubenswrapper[30278]: I0318 18:19:05.540524 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.555415 master-0 kubenswrapper[30278]: I0318 18:19:05.549570 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 18 18:19:05.555415 master-0 kubenswrapper[30278]: I0318 18:19:05.549829 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 18 18:19:05.555415 master-0 kubenswrapper[30278]: I0318 18:19:05.549968 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 18 18:19:05.564296 master-0 kubenswrapper[30278]: I0318 18:19:05.561720 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8zspc"] Mar 18 18:19:05.633326 master-0 kubenswrapper[30278]: I0318 18:19:05.633077 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-fernet-keys\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.633326 master-0 kubenswrapper[30278]: I0318 18:19:05.633215 30278 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-scripts\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.633326 master-0 kubenswrapper[30278]: I0318 18:19:05.633243 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-credential-keys\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.633326 master-0 kubenswrapper[30278]: I0318 18:19:05.633296 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-combined-ca-bundle\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.633767 master-0 kubenswrapper[30278]: I0318 18:19:05.633351 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-config-data\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.633767 master-0 kubenswrapper[30278]: I0318 18:19:05.633391 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjjkn\" (UniqueName: \"kubernetes.io/projected/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-kube-api-access-cjjkn\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.736761 master-0 
kubenswrapper[30278]: I0318 18:19:05.736657 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-scripts\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.736761 master-0 kubenswrapper[30278]: I0318 18:19:05.736752 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-credential-keys\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.737198 master-0 kubenswrapper[30278]: I0318 18:19:05.736794 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-combined-ca-bundle\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.737198 master-0 kubenswrapper[30278]: I0318 18:19:05.736835 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-config-data\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.737198 master-0 kubenswrapper[30278]: I0318 18:19:05.736868 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjjkn\" (UniqueName: \"kubernetes.io/projected/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-kube-api-access-cjjkn\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.737198 master-0 kubenswrapper[30278]: I0318 
18:19:05.737002 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-fernet-keys\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.749641 master-0 kubenswrapper[30278]: I0318 18:19:05.749490 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-credential-keys\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.751659 master-0 kubenswrapper[30278]: I0318 18:19:05.751606 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-fernet-keys\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.752487 master-0 kubenswrapper[30278]: I0318 18:19:05.752448 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-config-data\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.756297 master-0 kubenswrapper[30278]: I0318 18:19:05.753018 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-scripts\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.769260 master-0 kubenswrapper[30278]: I0318 18:19:05.766733 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-combined-ca-bundle\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.775252 master-0 kubenswrapper[30278]: I0318 18:19:05.775190 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjjkn\" (UniqueName: \"kubernetes.io/projected/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-kube-api-access-cjjkn\") pod \"keystone-bootstrap-8zspc\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.898774 master-0 kubenswrapper[30278]: I0318 18:19:05.898686 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:05.981846 master-0 kubenswrapper[30278]: I0318 18:19:05.981461 30278 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr" podUID="111f82f6-d141-4c76-be8f-026f90f1858b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.187:5353: connect: connection refused" Mar 18 18:19:07.068059 master-0 kubenswrapper[30278]: I0318 18:19:07.068000 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fa13ce1-ac91-4c75-8846-7679dfbd543b" path="/var/lib/kubelet/pods/5fa13ce1-ac91-4c75-8846-7679dfbd543b/volumes" Mar 18 18:19:10.982715 master-0 kubenswrapper[30278]: I0318 18:19:10.982534 30278 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr" podUID="111f82f6-d141-4c76-be8f-026f90f1858b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.187:5353: connect: connection refused" Mar 18 18:19:14.352930 master-0 kubenswrapper[30278]: I0318 18:19:14.352842 30278 generic.go:334] "Generic (PLEG): container finished" podID="fdcd674f-1047-437f-90ed-187b8b5eb882" 
containerID="8114536c648e8ee4c6fac290e15293c386ac61167916bdac848f7c7443b8463b" exitCode=0 Mar 18 18:19:14.353787 master-0 kubenswrapper[30278]: I0318 18:19:14.352939 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rngq2" event={"ID":"fdcd674f-1047-437f-90ed-187b8b5eb882","Type":"ContainerDied","Data":"8114536c648e8ee4c6fac290e15293c386ac61167916bdac848f7c7443b8463b"} Mar 18 18:19:14.355823 master-0 kubenswrapper[30278]: I0318 18:19:14.355764 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr" event={"ID":"111f82f6-d141-4c76-be8f-026f90f1858b","Type":"ContainerDied","Data":"2d4e7c538f3bf356ef1ea6888f439b1ec53892ef7b374ae1e01a22b433dc92cd"} Mar 18 18:19:14.357516 master-0 kubenswrapper[30278]: I0318 18:19:14.355702 30278 generic.go:334] "Generic (PLEG): container finished" podID="111f82f6-d141-4c76-be8f-026f90f1858b" containerID="2d4e7c538f3bf356ef1ea6888f439b1ec53892ef7b374ae1e01a22b433dc92cd" exitCode=0 Mar 18 18:19:15.889530 master-0 kubenswrapper[30278]: I0318 18:19:15.889462 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-rngq2" Mar 18 18:19:16.004094 master-0 kubenswrapper[30278]: I0318 18:19:16.004013 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-scripts\") pod \"fdcd674f-1047-437f-90ed-187b8b5eb882\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " Mar 18 18:19:16.007178 master-0 kubenswrapper[30278]: I0318 18:19:16.007125 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t692d\" (UniqueName: \"kubernetes.io/projected/fdcd674f-1047-437f-90ed-187b8b5eb882-kube-api-access-t692d\") pod \"fdcd674f-1047-437f-90ed-187b8b5eb882\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " Mar 18 18:19:16.007503 master-0 kubenswrapper[30278]: I0318 18:19:16.007340 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-combined-ca-bundle\") pod \"fdcd674f-1047-437f-90ed-187b8b5eb882\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " Mar 18 18:19:16.007503 master-0 kubenswrapper[30278]: I0318 18:19:16.007425 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-config-data\") pod \"fdcd674f-1047-437f-90ed-187b8b5eb882\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " Mar 18 18:19:16.007597 master-0 kubenswrapper[30278]: I0318 18:19:16.007541 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdcd674f-1047-437f-90ed-187b8b5eb882-logs\") pod \"fdcd674f-1047-437f-90ed-187b8b5eb882\" (UID: \"fdcd674f-1047-437f-90ed-187b8b5eb882\") " Mar 18 18:19:16.013309 master-0 kubenswrapper[30278]: I0318 18:19:16.012480 30278 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-scripts" (OuterVolumeSpecName: "scripts") pod "fdcd674f-1047-437f-90ed-187b8b5eb882" (UID: "fdcd674f-1047-437f-90ed-187b8b5eb882"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:16.013526 master-0 kubenswrapper[30278]: I0318 18:19:16.013318 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdcd674f-1047-437f-90ed-187b8b5eb882-logs" (OuterVolumeSpecName: "logs") pod "fdcd674f-1047-437f-90ed-187b8b5eb882" (UID: "fdcd674f-1047-437f-90ed-187b8b5eb882"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:19:16.030996 master-0 kubenswrapper[30278]: I0318 18:19:16.029118 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdcd674f-1047-437f-90ed-187b8b5eb882-kube-api-access-t692d" (OuterVolumeSpecName: "kube-api-access-t692d") pod "fdcd674f-1047-437f-90ed-187b8b5eb882" (UID: "fdcd674f-1047-437f-90ed-187b8b5eb882"). InnerVolumeSpecName "kube-api-access-t692d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:19:16.046151 master-0 kubenswrapper[30278]: I0318 18:19:16.046072 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdcd674f-1047-437f-90ed-187b8b5eb882" (UID: "fdcd674f-1047-437f-90ed-187b8b5eb882"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:16.058856 master-0 kubenswrapper[30278]: I0318 18:19:16.058039 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-config-data" (OuterVolumeSpecName: "config-data") pod "fdcd674f-1047-437f-90ed-187b8b5eb882" (UID: "fdcd674f-1047-437f-90ed-187b8b5eb882"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:16.088928 master-0 kubenswrapper[30278]: I0318 18:19:16.088865 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:19:16.112416 master-0 kubenswrapper[30278]: I0318 18:19:16.111323 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:16.112416 master-0 kubenswrapper[30278]: I0318 18:19:16.111380 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:16.112416 master-0 kubenswrapper[30278]: I0318 18:19:16.111483 30278 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdcd674f-1047-437f-90ed-187b8b5eb882-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:16.112416 master-0 kubenswrapper[30278]: I0318 18:19:16.111496 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fdcd674f-1047-437f-90ed-187b8b5eb882-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:16.129469 master-0 kubenswrapper[30278]: I0318 18:19:16.129389 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t692d\" (UniqueName: \"kubernetes.io/projected/fdcd674f-1047-437f-90ed-187b8b5eb882-kube-api-access-t692d\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:16.265409 master-0 kubenswrapper[30278]: I0318 18:19:16.233095 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-dns-svc\") pod \"111f82f6-d141-4c76-be8f-026f90f1858b\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") "
Mar 18 18:19:16.265409 master-0 kubenswrapper[30278]: I0318 18:19:16.233215 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-ovsdbserver-nb\") pod \"111f82f6-d141-4c76-be8f-026f90f1858b\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") "
Mar 18 18:19:16.265409 master-0 kubenswrapper[30278]: I0318 18:19:16.233362 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-config\") pod \"111f82f6-d141-4c76-be8f-026f90f1858b\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") "
Mar 18 18:19:16.265409 master-0 kubenswrapper[30278]: I0318 18:19:16.233646 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sckpl\" (UniqueName: \"kubernetes.io/projected/111f82f6-d141-4c76-be8f-026f90f1858b-kube-api-access-sckpl\") pod \"111f82f6-d141-4c76-be8f-026f90f1858b\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") "
Mar 18 18:19:16.265409 master-0 kubenswrapper[30278]: I0318 18:19:16.233727 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-ovsdbserver-sb\") pod \"111f82f6-d141-4c76-be8f-026f90f1858b\" (UID: \"111f82f6-d141-4c76-be8f-026f90f1858b\") "
Mar 18 18:19:16.265409 master-0 kubenswrapper[30278]: I0318 18:19:16.242850 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-ggb6f"]
Mar 18 18:19:16.265409 master-0 kubenswrapper[30278]: I0318 18:19:16.249627 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/111f82f6-d141-4c76-be8f-026f90f1858b-kube-api-access-sckpl" (OuterVolumeSpecName: "kube-api-access-sckpl") pod "111f82f6-d141-4c76-be8f-026f90f1858b" (UID: "111f82f6-d141-4c76-be8f-026f90f1858b"). InnerVolumeSpecName "kube-api-access-sckpl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:19:16.265409 master-0 kubenswrapper[30278]: W0318 18:19:16.262625 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podade5c277_043b_4e56_bc7c_63961acf67c4.slice/crio-bd4c079deb0364dce36ec4761cc19856c7970f99a9360c6a26523e3484d1691d WatchSource:0}: Error finding container bd4c079deb0364dce36ec4761cc19856c7970f99a9360c6a26523e3484d1691d: Status 404 returned error can't find the container with id bd4c079deb0364dce36ec4761cc19856c7970f99a9360c6a26523e3484d1691d
Mar 18 18:19:16.274809 master-0 kubenswrapper[30278]: I0318 18:19:16.274035 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8zspc"]
Mar 18 18:19:16.307728 master-0 kubenswrapper[30278]: I0318 18:19:16.307651 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "111f82f6-d141-4c76-be8f-026f90f1858b" (UID: "111f82f6-d141-4c76-be8f-026f90f1858b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:19:16.337638 master-0 kubenswrapper[30278]: I0318 18:19:16.337529 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "111f82f6-d141-4c76-be8f-026f90f1858b" (UID: "111f82f6-d141-4c76-be8f-026f90f1858b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:19:16.337877 master-0 kubenswrapper[30278]: I0318 18:19:16.337735 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sckpl\" (UniqueName: \"kubernetes.io/projected/111f82f6-d141-4c76-be8f-026f90f1858b-kube-api-access-sckpl\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:16.337877 master-0 kubenswrapper[30278]: I0318 18:19:16.337808 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:16.338042 master-0 kubenswrapper[30278]: I0318 18:19:16.338003 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-config" (OuterVolumeSpecName: "config") pod "111f82f6-d141-4c76-be8f-026f90f1858b" (UID: "111f82f6-d141-4c76-be8f-026f90f1858b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:19:16.338521 master-0 kubenswrapper[30278]: I0318 18:19:16.338491 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "111f82f6-d141-4c76-be8f-026f90f1858b" (UID: "111f82f6-d141-4c76-be8f-026f90f1858b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:19:16.389994 master-0 kubenswrapper[30278]: I0318 18:19:16.389925 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr" event={"ID":"111f82f6-d141-4c76-be8f-026f90f1858b","Type":"ContainerDied","Data":"c9c58b4b00ede99b52c5e0e37a2bb083521996bdb6e7dab4349c5e7fa69eab94"}
Mar 18 18:19:16.390121 master-0 kubenswrapper[30278]: I0318 18:19:16.390020 30278 scope.go:117] "RemoveContainer" containerID="2d4e7c538f3bf356ef1ea6888f439b1ec53892ef7b374ae1e01a22b433dc92cd"
Mar 18 18:19:16.393976 master-0 kubenswrapper[30278]: I0318 18:19:16.393919 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr"
Mar 18 18:19:16.398835 master-0 kubenswrapper[30278]: I0318 18:19:16.398779 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-rngq2" event={"ID":"fdcd674f-1047-437f-90ed-187b8b5eb882","Type":"ContainerDied","Data":"f1e6065d349f72e99a49e0b23713aafb6837627c5e9bfc88e7313bbe167f6c83"}
Mar 18 18:19:16.398835 master-0 kubenswrapper[30278]: I0318 18:19:16.398816 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-rngq2"
Mar 18 18:19:16.398972 master-0 kubenswrapper[30278]: I0318 18:19:16.398835 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1e6065d349f72e99a49e0b23713aafb6837627c5e9bfc88e7313bbe167f6c83"
Mar 18 18:19:16.401052 master-0 kubenswrapper[30278]: I0318 18:19:16.400990 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8zspc" event={"ID":"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e","Type":"ContainerStarted","Data":"8bc5c163fbfa773ce3e94894eef8017e99f2508b8d87bfc8f22f7875d13b09a7"}
Mar 18 18:19:16.404099 master-0 kubenswrapper[30278]: I0318 18:19:16.403933 30278 generic.go:334] "Generic (PLEG): container finished" podID="d2ad6a1d-4b4e-49d6-b2f1-65906269f79e" containerID="f2afc3a340d8bd8f0a25947752ace23263bb74350aa0b395b28ec18f336be7ca" exitCode=0
Mar 18 18:19:16.404099 master-0 kubenswrapper[30278]: I0318 18:19:16.404018 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8jvr2" event={"ID":"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e","Type":"ContainerDied","Data":"f2afc3a340d8bd8f0a25947752ace23263bb74350aa0b395b28ec18f336be7ca"}
Mar 18 18:19:16.410552 master-0 kubenswrapper[30278]: I0318 18:19:16.410498 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-ggb6f" event={"ID":"ade5c277-043b-4e56-bc7c-63961acf67c4","Type":"ContainerStarted","Data":"bd4c079deb0364dce36ec4761cc19856c7970f99a9360c6a26523e3484d1691d"}
Mar 18 18:19:16.437398 master-0 kubenswrapper[30278]: I0318 18:19:16.435606 30278 scope.go:117] "RemoveContainer" containerID="e11284d89c22726b6ec9f610e4d34419a7e1f0c009fbdf41beafd946f45236cd"
Mar 18 18:19:16.445433 master-0 kubenswrapper[30278]: I0318 18:19:16.442132 30278 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:16.445433 master-0 kubenswrapper[30278]: I0318 18:19:16.442182 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:16.445433 master-0 kubenswrapper[30278]: I0318 18:19:16.442194 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/111f82f6-d141-4c76-be8f-026f90f1858b-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:16.480645 master-0 kubenswrapper[30278]: I0318 18:19:16.480553 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd749f44f-tjfmr"]
Mar 18 18:19:16.498677 master-0 kubenswrapper[30278]: I0318 18:19:16.498611 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cd749f44f-tjfmr"]
Mar 18 18:19:16.662417 master-0 kubenswrapper[30278]: I0318 18:19:16.662332 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7db756448-vwstn"]
Mar 18 18:19:16.663168 master-0 kubenswrapper[30278]: E0318 18:19:16.663127 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111f82f6-d141-4c76-be8f-026f90f1858b" containerName="init"
Mar 18 18:19:16.663229 master-0 kubenswrapper[30278]: I0318 18:19:16.663169 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="111f82f6-d141-4c76-be8f-026f90f1858b" containerName="init"
Mar 18 18:19:16.663229 master-0 kubenswrapper[30278]: E0318 18:19:16.663215 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdcd674f-1047-437f-90ed-187b8b5eb882" containerName="placement-db-sync"
Mar 18 18:19:16.663229 master-0 kubenswrapper[30278]: I0318 18:19:16.663225 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdcd674f-1047-437f-90ed-187b8b5eb882" containerName="placement-db-sync"
Mar 18 18:19:16.663337 master-0 kubenswrapper[30278]: E0318 18:19:16.663247 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111f82f6-d141-4c76-be8f-026f90f1858b" containerName="dnsmasq-dns"
Mar 18 18:19:16.663337 master-0 kubenswrapper[30278]: I0318 18:19:16.663258 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="111f82f6-d141-4c76-be8f-026f90f1858b" containerName="dnsmasq-dns"
Mar 18 18:19:16.663663 master-0 kubenswrapper[30278]: I0318 18:19:16.663627 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdcd674f-1047-437f-90ed-187b8b5eb882" containerName="placement-db-sync"
Mar 18 18:19:16.663717 master-0 kubenswrapper[30278]: I0318 18:19:16.663674 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="111f82f6-d141-4c76-be8f-026f90f1858b" containerName="dnsmasq-dns"
Mar 18 18:19:16.665271 master-0 kubenswrapper[30278]: I0318 18:19:16.665229 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.668553 master-0 kubenswrapper[30278]: I0318 18:19:16.668498 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Mar 18 18:19:16.668769 master-0 kubenswrapper[30278]: I0318 18:19:16.668740 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Mar 18 18:19:16.668942 master-0 kubenswrapper[30278]: I0318 18:19:16.668915 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Mar 18 18:19:16.669166 master-0 kubenswrapper[30278]: I0318 18:19:16.669127 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Mar 18 18:19:16.717620 master-0 kubenswrapper[30278]: I0318 18:19:16.717552 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7db756448-vwstn"]
Mar 18 18:19:16.851311 master-0 kubenswrapper[30278]: I0318 18:19:16.851045 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-combined-ca-bundle\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.851646 master-0 kubenswrapper[30278]: I0318 18:19:16.851621 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpv4n\" (UniqueName: \"kubernetes.io/projected/ca02800f-5799-45c1-8737-409cb6665117-kube-api-access-rpv4n\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.851957 master-0 kubenswrapper[30278]: I0318 18:19:16.851933 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-config-data\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.852304 master-0 kubenswrapper[30278]: I0318 18:19:16.852261 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-internal-tls-certs\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.852474 master-0 kubenswrapper[30278]: I0318 18:19:16.852454 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-public-tls-certs\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.852978 master-0 kubenswrapper[30278]: I0318 18:19:16.852957 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-scripts\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.853148 master-0 kubenswrapper[30278]: I0318 18:19:16.853130 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca02800f-5799-45c1-8737-409cb6665117-logs\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.958258 master-0 kubenswrapper[30278]: I0318 18:19:16.957503 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-public-tls-certs\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.958258 master-0 kubenswrapper[30278]: I0318 18:19:16.957876 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-scripts\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.958258 master-0 kubenswrapper[30278]: I0318 18:19:16.957956 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca02800f-5799-45c1-8737-409cb6665117-logs\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.958258 master-0 kubenswrapper[30278]: I0318 18:19:16.958044 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-combined-ca-bundle\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.958258 master-0 kubenswrapper[30278]: I0318 18:19:16.958080 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpv4n\" (UniqueName: \"kubernetes.io/projected/ca02800f-5799-45c1-8737-409cb6665117-kube-api-access-rpv4n\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.959173 master-0 kubenswrapper[30278]: I0318 18:19:16.958703 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca02800f-5799-45c1-8737-409cb6665117-logs\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.959173 master-0 kubenswrapper[30278]: I0318 18:19:16.958763 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-config-data\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.959173 master-0 kubenswrapper[30278]: I0318 18:19:16.958957 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-internal-tls-certs\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.964265 master-0 kubenswrapper[30278]: I0318 18:19:16.961822 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-public-tls-certs\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.972406 master-0 kubenswrapper[30278]: I0318 18:19:16.971350 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-combined-ca-bundle\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.972406 master-0 kubenswrapper[30278]: I0318 18:19:16.972163 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-internal-tls-certs\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.973229 master-0 kubenswrapper[30278]: I0318 18:19:16.973101 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-scripts\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.979595 master-0 kubenswrapper[30278]: I0318 18:19:16.978903 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-config-data\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:16.997028 master-0 kubenswrapper[30278]: I0318 18:19:16.996965 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpv4n\" (UniqueName: \"kubernetes.io/projected/ca02800f-5799-45c1-8737-409cb6665117-kube-api-access-rpv4n\") pod \"placement-7db756448-vwstn\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:17.032307 master-0 kubenswrapper[30278]: I0318 18:19:17.032228 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:17.091794 master-0 kubenswrapper[30278]: I0318 18:19:17.091619 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="111f82f6-d141-4c76-be8f-026f90f1858b" path="/var/lib/kubelet/pods/111f82f6-d141-4c76-be8f-026f90f1858b/volumes"
Mar 18 18:19:17.438415 master-0 kubenswrapper[30278]: I0318 18:19:17.438188 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-db-sync-dxpjk" event={"ID":"47f543cd-d5bf-4421-aae3-516afd48c609","Type":"ContainerStarted","Data":"535479142a27fb06a482b3a4e51258b7ab945ee4e49c3aec0da0d12548de907d"}
Mar 18 18:19:17.449868 master-0 kubenswrapper[30278]: I0318 18:19:17.449604 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8zspc" event={"ID":"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e","Type":"ContainerStarted","Data":"d86e4877f095c4082693c13c77e19aa35642939d5f75f0c4eb69c076b6cc76dd"}
Mar 18 18:19:17.473037 master-0 kubenswrapper[30278]: I0318 18:19:17.472947 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b9df6-db-sync-dxpjk" podStartSLOduration=4.535495084 podStartE2EDuration="24.472923841s" podCreationTimestamp="2026-03-18 18:18:53 +0000 UTC" firstStartedPulling="2026-03-18 18:18:55.740916778 +0000 UTC m=+1104.908101373" lastFinishedPulling="2026-03-18 18:19:15.678345545 +0000 UTC m=+1124.845530130" observedRunningTime="2026-03-18 18:19:17.462710815 +0000 UTC m=+1126.629895460" watchObservedRunningTime="2026-03-18 18:19:17.472923841 +0000 UTC m=+1126.640108426"
Mar 18 18:19:17.570736 master-0 kubenswrapper[30278]: I0318 18:19:17.570533 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8zspc" podStartSLOduration=12.570470428 podStartE2EDuration="12.570470428s" podCreationTimestamp="2026-03-18 18:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:17.498057098 +0000 UTC m=+1126.665241683" watchObservedRunningTime="2026-03-18 18:19:17.570470428 +0000 UTC m=+1126.737655023"
Mar 18 18:19:17.571880 master-0 kubenswrapper[30278]: I0318 18:19:17.571831 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7db756448-vwstn"]
Mar 18 18:19:17.580869 master-0 kubenswrapper[30278]: W0318 18:19:17.580793 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca02800f_5799_45c1_8737_409cb6665117.slice/crio-b9a9c189983cd3d176a2296250543355871dbcb18b0c063a465eb65dd7550341 WatchSource:0}: Error finding container b9a9c189983cd3d176a2296250543355871dbcb18b0c063a465eb65dd7550341: Status 404 returned error can't find the container with id b9a9c189983cd3d176a2296250543355871dbcb18b0c063a465eb65dd7550341
Mar 18 18:19:18.015378 master-0 kubenswrapper[30278]: I0318 18:19:18.015337 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:19:18.109437 master-0 kubenswrapper[30278]: I0318 18:19:18.108825 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shxgg\" (UniqueName: \"kubernetes.io/projected/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-kube-api-access-shxgg\") pod \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") "
Mar 18 18:19:18.109437 master-0 kubenswrapper[30278]: I0318 18:19:18.108937 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-config-data\") pod \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") "
Mar 18 18:19:18.109437 master-0 kubenswrapper[30278]: I0318 18:19:18.109071 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-combined-ca-bundle\") pod \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") "
Mar 18 18:19:18.109437 master-0 kubenswrapper[30278]: I0318 18:19:18.109151 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-db-sync-config-data\") pod \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\" (UID: \"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e\") "
Mar 18 18:19:18.114565 master-0 kubenswrapper[30278]: I0318 18:19:18.114532 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d2ad6a1d-4b4e-49d6-b2f1-65906269f79e" (UID: "d2ad6a1d-4b4e-49d6-b2f1-65906269f79e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:18.115439 master-0 kubenswrapper[30278]: I0318 18:19:18.115385 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-kube-api-access-shxgg" (OuterVolumeSpecName: "kube-api-access-shxgg") pod "d2ad6a1d-4b4e-49d6-b2f1-65906269f79e" (UID: "d2ad6a1d-4b4e-49d6-b2f1-65906269f79e"). InnerVolumeSpecName "kube-api-access-shxgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:19:18.163221 master-0 kubenswrapper[30278]: I0318 18:19:18.163117 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2ad6a1d-4b4e-49d6-b2f1-65906269f79e" (UID: "d2ad6a1d-4b4e-49d6-b2f1-65906269f79e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:18.207733 master-0 kubenswrapper[30278]: I0318 18:19:18.207601 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-config-data" (OuterVolumeSpecName: "config-data") pod "d2ad6a1d-4b4e-49d6-b2f1-65906269f79e" (UID: "d2ad6a1d-4b4e-49d6-b2f1-65906269f79e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:18.213147 master-0 kubenswrapper[30278]: I0318 18:19:18.213100 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shxgg\" (UniqueName: \"kubernetes.io/projected/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-kube-api-access-shxgg\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:18.213147 master-0 kubenswrapper[30278]: I0318 18:19:18.213144 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:18.213250 master-0 kubenswrapper[30278]: I0318 18:19:18.213157 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:18.213250 master-0 kubenswrapper[30278]: I0318 18:19:18.213171 30278 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d2ad6a1d-4b4e-49d6-b2f1-65906269f79e-db-sync-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:18.487955 master-0 kubenswrapper[30278]: I0318 18:19:18.487832 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8jvr2"
Mar 18 18:19:18.488559 master-0 kubenswrapper[30278]: I0318 18:19:18.487941 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8jvr2" event={"ID":"d2ad6a1d-4b4e-49d6-b2f1-65906269f79e","Type":"ContainerDied","Data":"32e00ef822d650a2a4c0974197e3538996394165d1e2dd63368b3a33537147c8"}
Mar 18 18:19:18.488624 master-0 kubenswrapper[30278]: I0318 18:19:18.488579 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32e00ef822d650a2a4c0974197e3538996394165d1e2dd63368b3a33537147c8"
Mar 18 18:19:18.493350 master-0 kubenswrapper[30278]: I0318 18:19:18.493299 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7db756448-vwstn" event={"ID":"ca02800f-5799-45c1-8737-409cb6665117","Type":"ContainerStarted","Data":"07c0151d0e77c6b415e88f17fe047729fe52781df6ec02f05b17131801556584"}
Mar 18 18:19:18.493465 master-0 kubenswrapper[30278]: I0318 18:19:18.493359 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7db756448-vwstn" event={"ID":"ca02800f-5799-45c1-8737-409cb6665117","Type":"ContainerStarted","Data":"bb4c4cb453389606886622e8b73636f3049a1f4c97339b0c1df7e6a0aa350f3a"}
Mar 18 18:19:18.493465 master-0 kubenswrapper[30278]: I0318 18:19:18.493373 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7db756448-vwstn" event={"ID":"ca02800f-5799-45c1-8737-409cb6665117","Type":"ContainerStarted","Data":"b9a9c189983cd3d176a2296250543355871dbcb18b0c063a465eb65dd7550341"}
Mar 18 18:19:19.484405 master-0 kubenswrapper[30278]: I0318 18:19:19.480044 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7db756448-vwstn" podStartSLOduration=3.480025022 podStartE2EDuration="3.480025022s" podCreationTimestamp="2026-03-18 18:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:18.526996721 +0000 UTC m=+1127.694181316" watchObservedRunningTime="2026-03-18 18:19:19.480025022 +0000 UTC m=+1128.647209617"
Mar 18 18:19:19.529340 master-0 kubenswrapper[30278]: I0318 18:19:19.525946 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-97cb45bf9-q6h4g"]
Mar 18 18:19:19.529340 master-0 kubenswrapper[30278]: E0318 18:19:19.526555 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ad6a1d-4b4e-49d6-b2f1-65906269f79e" containerName="glance-db-sync"
Mar 18 18:19:19.529340 master-0 kubenswrapper[30278]: I0318 18:19:19.526576 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ad6a1d-4b4e-49d6-b2f1-65906269f79e" containerName="glance-db-sync"
Mar 18 18:19:19.529340 master-0 kubenswrapper[30278]: I0318 18:19:19.526877 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ad6a1d-4b4e-49d6-b2f1-65906269f79e" containerName="glance-db-sync"
Mar 18 18:19:19.529340 master-0 kubenswrapper[30278]: I0318 18:19:19.528106 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g"
Mar 18 18:19:19.537200 master-0 kubenswrapper[30278]: I0318 18:19:19.534463 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:19.537200 master-0 kubenswrapper[30278]: I0318 18:19:19.534512 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:19.578852 master-0 kubenswrapper[30278]: I0318 18:19:19.555483 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-97cb45bf9-q6h4g"]
Mar 18 18:19:19.664879 master-0 kubenswrapper[30278]: I0318 18:19:19.664804 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-dns-swift-storage-0\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g"
Mar 18 18:19:19.664879 master-0 kubenswrapper[30278]: I0318 18:19:19.664886 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-ovsdbserver-nb\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g"
Mar 18 18:19:19.665155 master-0 kubenswrapper[30278]: I0318 18:19:19.665019 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-ovsdbserver-sb\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g"
Mar 18 18:19:19.665155 master-0 kubenswrapper[30278]: I0318 18:19:19.665051 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-dns-svc\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g"
Mar 18 18:19:19.665155 master-0 kubenswrapper[30278]: I0318 18:19:19.665080 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-config\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g"
Mar 18 18:19:19.665155 master-0 kubenswrapper[30278]: I0318 18:19:19.665109 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6vz9\" (UniqueName: \"kubernetes.io/projected/d4a913f3-9113-409f-bddd-65390f556fd2-kube-api-access-z6vz9\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g"
Mar 18 18:19:19.767568 master-0 kubenswrapper[30278]: I0318 18:19:19.767427 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-ovsdbserver-sb\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g"
Mar 18 18:19:19.767568 master-0 kubenswrapper[30278]: I0318 18:19:19.767518 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-dns-svc\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g"
Mar 18 18:19:19.767842 master-0 kubenswrapper[30278]: I0318 18:19:19.767568
30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-config\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" Mar 18 18:19:19.767842 master-0 kubenswrapper[30278]: I0318 18:19:19.767724 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6vz9\" (UniqueName: \"kubernetes.io/projected/d4a913f3-9113-409f-bddd-65390f556fd2-kube-api-access-z6vz9\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" Mar 18 18:19:19.769649 master-0 kubenswrapper[30278]: I0318 18:19:19.768149 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-dns-swift-storage-0\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" Mar 18 18:19:19.769649 master-0 kubenswrapper[30278]: I0318 18:19:19.768299 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-ovsdbserver-nb\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" Mar 18 18:19:19.769649 master-0 kubenswrapper[30278]: I0318 18:19:19.769402 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-dns-svc\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" Mar 18 18:19:19.769891 master-0 kubenswrapper[30278]: I0318 18:19:19.769745 30278 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-ovsdbserver-nb\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" Mar 18 18:19:19.772110 master-0 kubenswrapper[30278]: I0318 18:19:19.772058 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-config\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" Mar 18 18:19:19.772192 master-0 kubenswrapper[30278]: I0318 18:19:19.772064 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-ovsdbserver-sb\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" Mar 18 18:19:19.775126 master-0 kubenswrapper[30278]: I0318 18:19:19.775003 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-dns-swift-storage-0\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" Mar 18 18:19:19.792080 master-0 kubenswrapper[30278]: I0318 18:19:19.792021 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6vz9\" (UniqueName: \"kubernetes.io/projected/d4a913f3-9113-409f-bddd-65390f556fd2-kube-api-access-z6vz9\") pod \"dnsmasq-dns-97cb45bf9-q6h4g\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" Mar 18 18:19:19.907094 master-0 kubenswrapper[30278]: I0318 18:19:19.907013 30278 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" Mar 18 18:19:20.900510 master-0 kubenswrapper[30278]: I0318 18:19:20.900441 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-824c8-default-external-api-0"] Mar 18 18:19:20.911972 master-0 kubenswrapper[30278]: I0318 18:19:20.911512 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:20.914199 master-0 kubenswrapper[30278]: I0318 18:19:20.914159 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 18 18:19:20.916378 master-0 kubenswrapper[30278]: I0318 18:19:20.914527 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-824c8-default-external-config-data" Mar 18 18:19:21.005202 master-0 kubenswrapper[30278]: I0318 18:19:21.004775 30278 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5cd749f44f-tjfmr" podUID="111f82f6-d141-4c76-be8f-026f90f1858b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.187:5353: i/o timeout" Mar 18 18:19:21.023471 master-0 kubenswrapper[30278]: I0318 18:19:21.023377 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-824c8-default-external-api-0"] Mar 18 18:19:21.434133 master-0 kubenswrapper[30278]: I0318 18:19:21.434002 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-config-data\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.434564 master-0 kubenswrapper[30278]: I0318 18:19:21.434154 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmpmv\" 
(UniqueName: \"kubernetes.io/projected/fc1a0fd5-e12c-4425-bf93-544d29a3d545-kube-api-access-jmpmv\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.434564 master-0 kubenswrapper[30278]: I0318 18:19:21.434238 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc1a0fd5-e12c-4425-bf93-544d29a3d545-logs\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.434786 master-0 kubenswrapper[30278]: I0318 18:19:21.434717 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-combined-ca-bundle\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.435014 master-0 kubenswrapper[30278]: I0318 18:19:21.434976 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fc1a0fd5-e12c-4425-bf93-544d29a3d545-httpd-run\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.439243 master-0 kubenswrapper[30278]: I0318 18:19:21.437271 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 
18:19:21.439243 master-0 kubenswrapper[30278]: I0318 18:19:21.437440 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-scripts\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.539938 master-0 kubenswrapper[30278]: I0318 18:19:21.539863 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc1a0fd5-e12c-4425-bf93-544d29a3d545-logs\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.540224 master-0 kubenswrapper[30278]: I0318 18:19:21.539982 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-combined-ca-bundle\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.540224 master-0 kubenswrapper[30278]: I0318 18:19:21.540041 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fc1a0fd5-e12c-4425-bf93-544d29a3d545-httpd-run\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.540224 master-0 kubenswrapper[30278]: I0318 18:19:21.540085 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") pod \"glance-824c8-default-external-api-0\" (UID: 
\"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.540224 master-0 kubenswrapper[30278]: I0318 18:19:21.540124 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-scripts\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.540224 master-0 kubenswrapper[30278]: I0318 18:19:21.540193 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-config-data\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.540466 master-0 kubenswrapper[30278]: I0318 18:19:21.540233 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmpmv\" (UniqueName: \"kubernetes.io/projected/fc1a0fd5-e12c-4425-bf93-544d29a3d545-kube-api-access-jmpmv\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.541056 master-0 kubenswrapper[30278]: I0318 18:19:21.541028 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc1a0fd5-e12c-4425-bf93-544d29a3d545-logs\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.543731 master-0 kubenswrapper[30278]: I0318 18:19:21.542050 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fc1a0fd5-e12c-4425-bf93-544d29a3d545-httpd-run\") pod 
\"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.548185 master-0 kubenswrapper[30278]: I0318 18:19:21.548130 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-scripts\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.549292 master-0 kubenswrapper[30278]: I0318 18:19:21.548515 30278 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 18:19:21.549292 master-0 kubenswrapper[30278]: I0318 18:19:21.548598 30278 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/94c3d9a5864b2a0676e8a45c98800fb7c7e5f534272efb0ca320119ec8f41cb2/globalmount\"" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.549292 master-0 kubenswrapper[30278]: I0318 18:19:21.549183 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-combined-ca-bundle\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.550294 master-0 kubenswrapper[30278]: I0318 18:19:21.550190 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-config-data\") pod 
\"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.558228 master-0 kubenswrapper[30278]: I0318 18:19:21.558138 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmpmv\" (UniqueName: \"kubernetes.io/projected/fc1a0fd5-e12c-4425-bf93-544d29a3d545-kube-api-access-jmpmv\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:21.692553 master-0 kubenswrapper[30278]: I0318 18:19:21.692358 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-824c8-default-internal-api-0"] Mar 18 18:19:21.694415 master-0 kubenswrapper[30278]: I0318 18:19:21.694374 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.702903 master-0 kubenswrapper[30278]: I0318 18:19:21.702838 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-824c8-default-internal-config-data" Mar 18 18:19:21.724476 master-0 kubenswrapper[30278]: I0318 18:19:21.719319 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-824c8-default-internal-api-0"] Mar 18 18:19:21.850610 master-0 kubenswrapper[30278]: I0318 18:19:21.850506 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.850610 master-0 kubenswrapper[30278]: I0318 18:19:21.850595 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-config-data\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.851085 master-0 kubenswrapper[30278]: I0318 18:19:21.850706 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2acc3d40-c66c-4573-be45-36889199ee65-logs\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.851085 master-0 kubenswrapper[30278]: I0318 18:19:21.850784 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcvm2\" (UniqueName: \"kubernetes.io/projected/2acc3d40-c66c-4573-be45-36889199ee65-kube-api-access-wcvm2\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.851085 master-0 kubenswrapper[30278]: I0318 18:19:21.850822 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-combined-ca-bundle\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.851085 master-0 kubenswrapper[30278]: I0318 18:19:21.850953 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2acc3d40-c66c-4573-be45-36889199ee65-httpd-run\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.851306 master-0 
kubenswrapper[30278]: I0318 18:19:21.851089 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-scripts\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.955637 master-0 kubenswrapper[30278]: I0318 18:19:21.953592 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-scripts\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.955637 master-0 kubenswrapper[30278]: I0318 18:19:21.953699 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.955637 master-0 kubenswrapper[30278]: I0318 18:19:21.953722 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-config-data\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.955637 master-0 kubenswrapper[30278]: I0318 18:19:21.953806 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2acc3d40-c66c-4573-be45-36889199ee65-logs\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " 
pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.955637 master-0 kubenswrapper[30278]: I0318 18:19:21.953868 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcvm2\" (UniqueName: \"kubernetes.io/projected/2acc3d40-c66c-4573-be45-36889199ee65-kube-api-access-wcvm2\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.955637 master-0 kubenswrapper[30278]: I0318 18:19:21.954786 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2acc3d40-c66c-4573-be45-36889199ee65-logs\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.955637 master-0 kubenswrapper[30278]: I0318 18:19:21.954830 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-combined-ca-bundle\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.955637 master-0 kubenswrapper[30278]: I0318 18:19:21.954866 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2acc3d40-c66c-4573-be45-36889199ee65-httpd-run\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.955637 master-0 kubenswrapper[30278]: I0318 18:19:21.955145 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2acc3d40-c66c-4573-be45-36889199ee65-httpd-run\") pod \"glance-824c8-default-internal-api-0\" 
(UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.958483 master-0 kubenswrapper[30278]: I0318 18:19:21.958452 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-config-data\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.959482 master-0 kubenswrapper[30278]: I0318 18:19:21.959433 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-scripts\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.969065 master-0 kubenswrapper[30278]: I0318 18:19:21.967247 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-combined-ca-bundle\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.969065 master-0 kubenswrapper[30278]: I0318 18:19:21.967907 30278 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 18 18:19:21.969065 master-0 kubenswrapper[30278]: I0318 18:19:21.967935 30278 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/c03db859bc87c72425359af32b7c24b69cb9246d9bdaabebd809ecb82cb00bf5/globalmount\"" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:21.987209 master-0 kubenswrapper[30278]: I0318 18:19:21.987155 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcvm2\" (UniqueName: \"kubernetes.io/projected/2acc3d40-c66c-4573-be45-36889199ee65-kube-api-access-wcvm2\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:23.645842 master-0 kubenswrapper[30278]: I0318 18:19:23.645575 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") pod \"glance-824c8-default-external-api-0\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:23.651915 master-0 kubenswrapper[30278]: I0318 18:19:23.651839 30278 generic.go:334] "Generic (PLEG): container finished" podID="bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e" containerID="d86e4877f095c4082693c13c77e19aa35642939d5f75f0c4eb69c076b6cc76dd" exitCode=0 Mar 18 18:19:23.652051 master-0 kubenswrapper[30278]: I0318 18:19:23.651912 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8zspc" 
event={"ID":"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e","Type":"ContainerDied","Data":"d86e4877f095c4082693c13c77e19aa35642939d5f75f0c4eb69c076b6cc76dd"} Mar 18 18:19:23.943050 master-0 kubenswrapper[30278]: I0318 18:19:23.942896 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:24.123268 master-0 kubenswrapper[30278]: I0318 18:19:24.120667 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-824c8-default-external-api-0"] Mar 18 18:19:24.399949 master-0 kubenswrapper[30278]: I0318 18:19:24.399515 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-824c8-default-internal-api-0"] Mar 18 18:19:24.401007 master-0 kubenswrapper[30278]: E0318 18:19:24.400959 30278 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-824c8-default-internal-api-0" podUID="2acc3d40-c66c-4573-be45-36889199ee65" Mar 18 18:19:24.664752 master-0 kubenswrapper[30278]: I0318 18:19:24.664577 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:24.681522 master-0 kubenswrapper[30278]: I0318 18:19:24.681453 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:24.764648 master-0 kubenswrapper[30278]: I0318 18:19:24.764538 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2acc3d40-c66c-4573-be45-36889199ee65-logs\") pod \"2acc3d40-c66c-4573-be45-36889199ee65\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " Mar 18 18:19:24.764941 master-0 kubenswrapper[30278]: I0318 18:19:24.764662 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-config-data\") pod \"2acc3d40-c66c-4573-be45-36889199ee65\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " Mar 18 18:19:24.765075 master-0 kubenswrapper[30278]: I0318 18:19:24.765020 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-combined-ca-bundle\") pod \"2acc3d40-c66c-4573-be45-36889199ee65\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " Mar 18 18:19:24.765294 master-0 kubenswrapper[30278]: I0318 18:19:24.765181 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2acc3d40-c66c-4573-be45-36889199ee65-logs" (OuterVolumeSpecName: "logs") pod "2acc3d40-c66c-4573-be45-36889199ee65" (UID: "2acc3d40-c66c-4573-be45-36889199ee65"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:19:24.765382 master-0 kubenswrapper[30278]: I0318 18:19:24.765322 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-scripts\") pod \"2acc3d40-c66c-4573-be45-36889199ee65\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " Mar 18 18:19:24.765487 master-0 kubenswrapper[30278]: I0318 18:19:24.765461 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2acc3d40-c66c-4573-be45-36889199ee65-httpd-run\") pod \"2acc3d40-c66c-4573-be45-36889199ee65\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " Mar 18 18:19:24.766164 master-0 kubenswrapper[30278]: I0318 18:19:24.765728 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcvm2\" (UniqueName: \"kubernetes.io/projected/2acc3d40-c66c-4573-be45-36889199ee65-kube-api-access-wcvm2\") pod \"2acc3d40-c66c-4573-be45-36889199ee65\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " Mar 18 18:19:24.766164 master-0 kubenswrapper[30278]: I0318 18:19:24.766095 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2acc3d40-c66c-4573-be45-36889199ee65-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2acc3d40-c66c-4573-be45-36889199ee65" (UID: "2acc3d40-c66c-4573-be45-36889199ee65"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:19:24.767339 master-0 kubenswrapper[30278]: I0318 18:19:24.767305 30278 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2acc3d40-c66c-4573-be45-36889199ee65-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:24.767435 master-0 kubenswrapper[30278]: I0318 18:19:24.767330 30278 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2acc3d40-c66c-4573-be45-36889199ee65-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:24.770182 master-0 kubenswrapper[30278]: I0318 18:19:24.770079 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-scripts" (OuterVolumeSpecName: "scripts") pod "2acc3d40-c66c-4573-be45-36889199ee65" (UID: "2acc3d40-c66c-4573-be45-36889199ee65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:24.770339 master-0 kubenswrapper[30278]: I0318 18:19:24.770262 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-config-data" (OuterVolumeSpecName: "config-data") pod "2acc3d40-c66c-4573-be45-36889199ee65" (UID: "2acc3d40-c66c-4573-be45-36889199ee65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:24.770427 master-0 kubenswrapper[30278]: I0318 18:19:24.770390 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2acc3d40-c66c-4573-be45-36889199ee65" (UID: "2acc3d40-c66c-4573-be45-36889199ee65"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:24.797515 master-0 kubenswrapper[30278]: I0318 18:19:24.793604 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2acc3d40-c66c-4573-be45-36889199ee65-kube-api-access-wcvm2" (OuterVolumeSpecName: "kube-api-access-wcvm2") pod "2acc3d40-c66c-4573-be45-36889199ee65" (UID: "2acc3d40-c66c-4573-be45-36889199ee65"). InnerVolumeSpecName "kube-api-access-wcvm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:19:24.873685 master-0 kubenswrapper[30278]: I0318 18:19:24.873598 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:24.873685 master-0 kubenswrapper[30278]: I0318 18:19:24.873653 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:24.873685 master-0 kubenswrapper[30278]: I0318 18:19:24.873664 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2acc3d40-c66c-4573-be45-36889199ee65-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:24.873685 master-0 kubenswrapper[30278]: I0318 18:19:24.873675 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcvm2\" (UniqueName: \"kubernetes.io/projected/2acc3d40-c66c-4573-be45-36889199ee65-kube-api-access-wcvm2\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:25.560097 master-0 kubenswrapper[30278]: I0318 18:19:25.560031 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:25.659041 master-0 kubenswrapper[30278]: I0318 18:19:25.659000 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") pod \"glance-824c8-default-internal-api-0\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:25.696998 master-0 kubenswrapper[30278]: I0318 18:19:25.694198 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-scripts\") pod \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " Mar 18 18:19:25.696998 master-0 kubenswrapper[30278]: I0318 18:19:25.694420 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-credential-keys\") pod \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " Mar 18 18:19:25.696998 master-0 kubenswrapper[30278]: I0318 18:19:25.694503 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-config-data\") pod \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " Mar 18 18:19:25.696998 master-0 kubenswrapper[30278]: I0318 18:19:25.694850 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-combined-ca-bundle\") pod \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " Mar 18 18:19:25.696998 master-0 kubenswrapper[30278]: 
I0318 18:19:25.694991 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjjkn\" (UniqueName: \"kubernetes.io/projected/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-kube-api-access-cjjkn\") pod \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " Mar 18 18:19:25.696998 master-0 kubenswrapper[30278]: I0318 18:19:25.695026 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-fernet-keys\") pod \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\" (UID: \"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e\") " Mar 18 18:19:25.703375 master-0 kubenswrapper[30278]: I0318 18:19:25.701749 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e" (UID: "bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:25.709821 master-0 kubenswrapper[30278]: I0318 18:19:25.704520 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-scripts" (OuterVolumeSpecName: "scripts") pod "bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e" (UID: "bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:25.709821 master-0 kubenswrapper[30278]: I0318 18:19:25.704707 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-kube-api-access-cjjkn" (OuterVolumeSpecName: "kube-api-access-cjjkn") pod "bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e" (UID: "bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e"). InnerVolumeSpecName "kube-api-access-cjjkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:19:25.709821 master-0 kubenswrapper[30278]: I0318 18:19:25.706068 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:25.709821 master-0 kubenswrapper[30278]: I0318 18:19:25.706313 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8zspc" event={"ID":"bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e","Type":"ContainerDied","Data":"8bc5c163fbfa773ce3e94894eef8017e99f2508b8d87bfc8f22f7875d13b09a7"} Mar 18 18:19:25.709821 master-0 kubenswrapper[30278]: I0318 18:19:25.706370 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bc5c163fbfa773ce3e94894eef8017e99f2508b8d87bfc8f22f7875d13b09a7" Mar 18 18:19:25.709821 master-0 kubenswrapper[30278]: I0318 18:19:25.706386 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8zspc" Mar 18 18:19:25.709821 master-0 kubenswrapper[30278]: I0318 18:19:25.709722 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e" (UID: "bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:25.725346 master-0 kubenswrapper[30278]: I0318 18:19:25.724965 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e" (UID: "bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:25.729015 master-0 kubenswrapper[30278]: I0318 18:19:25.728954 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-config-data" (OuterVolumeSpecName: "config-data") pod "bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e" (UID: "bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:25.799397 master-0 kubenswrapper[30278]: I0318 18:19:25.799333 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") pod \"2acc3d40-c66c-4573-be45-36889199ee65\" (UID: \"2acc3d40-c66c-4573-be45-36889199ee65\") " Mar 18 18:19:25.800145 master-0 kubenswrapper[30278]: I0318 18:19:25.800109 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:25.800145 master-0 kubenswrapper[30278]: I0318 18:19:25.800137 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjjkn\" (UniqueName: \"kubernetes.io/projected/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-kube-api-access-cjjkn\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:25.800233 master-0 kubenswrapper[30278]: I0318 18:19:25.800151 30278 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-fernet-keys\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:25.800233 master-0 kubenswrapper[30278]: I0318 18:19:25.800162 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 
18:19:25.800233 master-0 kubenswrapper[30278]: I0318 18:19:25.800171 30278 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-credential-keys\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:25.800233 master-0 kubenswrapper[30278]: I0318 18:19:25.800181 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:25.846343 master-0 kubenswrapper[30278]: I0318 18:19:25.845849 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150" (OuterVolumeSpecName: "glance") pod "2acc3d40-c66c-4573-be45-36889199ee65" (UID: "2acc3d40-c66c-4573-be45-36889199ee65"). InnerVolumeSpecName "pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 18 18:19:25.907146 master-0 kubenswrapper[30278]: I0318 18:19:25.907074 30278 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") on node \"master-0\" " Mar 18 18:19:25.949543 master-0 kubenswrapper[30278]: I0318 18:19:25.949378 30278 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 18 18:19:25.949793 master-0 kubenswrapper[30278]: I0318 18:19:25.949620 30278 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49" (UniqueName: "kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150") on node "master-0" Mar 18 18:19:26.021329 master-0 kubenswrapper[30278]: I0318 18:19:26.016890 30278 reconciler_common.go:293] "Volume detached for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:26.140522 master-0 kubenswrapper[30278]: I0318 18:19:26.135374 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6f67d74887-q4vt6"] Mar 18 18:19:26.144424 master-0 kubenswrapper[30278]: E0318 18:19:26.144326 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e" containerName="keystone-bootstrap" Mar 18 18:19:26.144424 master-0 kubenswrapper[30278]: I0318 18:19:26.144386 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e" containerName="keystone-bootstrap" Mar 18 18:19:26.145910 master-0 kubenswrapper[30278]: I0318 18:19:26.145872 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e" containerName="keystone-bootstrap" Mar 18 18:19:26.159720 master-0 kubenswrapper[30278]: I0318 18:19:26.158429 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.164559 master-0 kubenswrapper[30278]: I0318 18:19:26.164218 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 18 18:19:26.164559 master-0 kubenswrapper[30278]: I0318 18:19:26.164491 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 18 18:19:26.164855 master-0 kubenswrapper[30278]: I0318 18:19:26.164798 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 18 18:19:26.164898 master-0 kubenswrapper[30278]: I0318 18:19:26.164880 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 18 18:19:26.164991 master-0 kubenswrapper[30278]: I0318 18:19:26.164823 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 18 18:19:26.195500 master-0 kubenswrapper[30278]: I0318 18:19:26.188029 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f67d74887-q4vt6"] Mar 18 18:19:26.283379 master-0 kubenswrapper[30278]: I0318 18:19:26.271926 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-97cb45bf9-q6h4g"] Mar 18 18:19:26.307726 master-0 kubenswrapper[30278]: I0318 18:19:26.307666 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-824c8-default-internal-api-0"] Mar 18 18:19:26.319487 master-0 kubenswrapper[30278]: I0318 18:19:26.319267 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-824c8-default-internal-api-0"] Mar 18 18:19:26.330959 master-0 kubenswrapper[30278]: I0318 18:19:26.330856 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-824c8-default-internal-api-0"] Mar 18 18:19:26.333712 master-0 kubenswrapper[30278]: I0318 18:19:26.333620 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.337371 master-0 kubenswrapper[30278]: I0318 18:19:26.337241 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-824c8-default-internal-config-data" Mar 18 18:19:26.337573 master-0 kubenswrapper[30278]: I0318 18:19:26.337531 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 18 18:19:26.337910 master-0 kubenswrapper[30278]: I0318 18:19:26.337806 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-fernet-keys\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.337979 master-0 kubenswrapper[30278]: I0318 18:19:26.337957 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5lv9\" (UniqueName: \"kubernetes.io/projected/8ed0b9d6-4657-4f09-945d-eaec083a0836-kube-api-access-t5lv9\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.338158 master-0 kubenswrapper[30278]: I0318 18:19:26.338132 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-public-tls-certs\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.338225 master-0 kubenswrapper[30278]: I0318 18:19:26.338196 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-credential-keys\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.338397 master-0 kubenswrapper[30278]: I0318 18:19:26.338373 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-internal-tls-certs\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.338483 master-0 kubenswrapper[30278]: I0318 18:19:26.338420 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-config-data\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.338773 master-0 kubenswrapper[30278]: I0318 18:19:26.338734 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-combined-ca-bundle\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.338859 master-0 kubenswrapper[30278]: I0318 18:19:26.338816 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-scripts\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.347187 master-0 kubenswrapper[30278]: I0318 18:19:26.347110 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-824c8-default-internal-api-0"] Mar 18 18:19:26.441711 master-0 kubenswrapper[30278]: I0318 18:19:26.441512 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-combined-ca-bundle\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.441711 master-0 kubenswrapper[30278]: I0318 18:19:26.441587 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-scripts\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.442286 master-0 kubenswrapper[30278]: I0318 18:19:26.441804 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-fernet-keys\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.442286 master-0 kubenswrapper[30278]: I0318 18:19:26.441982 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5lv9\" (UniqueName: \"kubernetes.io/projected/8ed0b9d6-4657-4f09-945d-eaec083a0836-kube-api-access-t5lv9\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.442286 master-0 kubenswrapper[30278]: I0318 18:19:26.442224 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-public-tls-certs\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " 
pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.442286 master-0 kubenswrapper[30278]: I0318 18:19:26.442249 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-credential-keys\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.442530 master-0 kubenswrapper[30278]: I0318 18:19:26.442486 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-internal-tls-certs\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.442608 master-0 kubenswrapper[30278]: I0318 18:19:26.442597 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-config-data\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.447183 master-0 kubenswrapper[30278]: I0318 18:19:26.446805 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-internal-tls-certs\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.448744 master-0 kubenswrapper[30278]: I0318 18:19:26.447553 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-combined-ca-bundle\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " 
pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.448744 master-0 kubenswrapper[30278]: I0318 18:19:26.447829 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-scripts\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.448913 master-0 kubenswrapper[30278]: I0318 18:19:26.448625 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-public-tls-certs\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.450239 master-0 kubenswrapper[30278]: I0318 18:19:26.449513 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-fernet-keys\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.450239 master-0 kubenswrapper[30278]: I0318 18:19:26.449988 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-credential-keys\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.450239 master-0 kubenswrapper[30278]: I0318 18:19:26.450192 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed0b9d6-4657-4f09-945d-eaec083a0836-config-data\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.545722 master-0 
kubenswrapper[30278]: I0318 18:19:26.545531 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.545722 master-0 kubenswrapper[30278]: I0318 18:19:26.545609 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-combined-ca-bundle\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.545722 master-0 kubenswrapper[30278]: I0318 18:19:26.545664 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-config-data\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.545722 master-0 kubenswrapper[30278]: I0318 18:19:26.545712 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b68cf46-84fc-418f-9c01-915501356564-httpd-run\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.546855 master-0 kubenswrapper[30278]: I0318 18:19:26.546677 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b68cf46-84fc-418f-9c01-915501356564-logs\") pod 
\"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.547244 master-0 kubenswrapper[30278]: I0318 18:19:26.547191 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-internal-tls-certs\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.547476 master-0 kubenswrapper[30278]: I0318 18:19:26.547432 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb4rf\" (UniqueName: \"kubernetes.io/projected/8b68cf46-84fc-418f-9c01-915501356564-kube-api-access-mb4rf\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.547585 master-0 kubenswrapper[30278]: I0318 18:19:26.547557 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-scripts\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.628196 master-0 kubenswrapper[30278]: I0318 18:19:26.628154 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5lv9\" (UniqueName: \"kubernetes.io/projected/8ed0b9d6-4657-4f09-945d-eaec083a0836-kube-api-access-t5lv9\") pod \"keystone-6f67d74887-q4vt6\" (UID: \"8ed0b9d6-4657-4f09-945d-eaec083a0836\") " pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.650492 master-0 kubenswrapper[30278]: I0318 18:19:26.650420 30278 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-mb4rf\" (UniqueName: \"kubernetes.io/projected/8b68cf46-84fc-418f-9c01-915501356564-kube-api-access-mb4rf\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.650835 master-0 kubenswrapper[30278]: I0318 18:19:26.650820 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-scripts\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.651014 master-0 kubenswrapper[30278]: I0318 18:19:26.650998 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.651114 master-0 kubenswrapper[30278]: I0318 18:19:26.651099 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-combined-ca-bundle\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.651229 master-0 kubenswrapper[30278]: I0318 18:19:26.651215 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-config-data\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.651578 master-0 
kubenswrapper[30278]: I0318 18:19:26.651560 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b68cf46-84fc-418f-9c01-915501356564-httpd-run\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.651746 master-0 kubenswrapper[30278]: I0318 18:19:26.651728 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b68cf46-84fc-418f-9c01-915501356564-logs\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.651885 master-0 kubenswrapper[30278]: I0318 18:19:26.651868 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-internal-tls-certs\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.652194 master-0 kubenswrapper[30278]: I0318 18:19:26.652168 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b68cf46-84fc-418f-9c01-915501356564-httpd-run\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.652358 master-0 kubenswrapper[30278]: I0318 18:19:26.652294 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b68cf46-84fc-418f-9c01-915501356564-logs\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 
18:19:26.654484 master-0 kubenswrapper[30278]: I0318 18:19:26.654454 30278 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 18:19:26.654562 master-0 kubenswrapper[30278]: I0318 18:19:26.654501 30278 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/c03db859bc87c72425359af32b7c24b69cb9246d9bdaabebd809ecb82cb00bf5/globalmount\"" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.656594 master-0 kubenswrapper[30278]: I0318 18:19:26.656533 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-config-data\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.656998 master-0 kubenswrapper[30278]: I0318 18:19:26.656958 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-combined-ca-bundle\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.657830 master-0 kubenswrapper[30278]: I0318 18:19:26.657720 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-internal-tls-certs\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " 
pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.660691 master-0 kubenswrapper[30278]: I0318 18:19:26.660652 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-scripts\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:26.719290 master-0 kubenswrapper[30278]: I0318 18:19:26.719172 30278 generic.go:334] "Generic (PLEG): container finished" podID="ade5c277-043b-4e56-bc7c-63961acf67c4" containerID="de60923da44db9a8409700af1fdd19f110177b440776a1265a771f8915eed79d" exitCode=0 Mar 18 18:19:26.720069 master-0 kubenswrapper[30278]: I0318 18:19:26.719303 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-ggb6f" event={"ID":"ade5c277-043b-4e56-bc7c-63961acf67c4","Type":"ContainerDied","Data":"de60923da44db9a8409700af1fdd19f110177b440776a1265a771f8915eed79d"} Mar 18 18:19:26.722049 master-0 kubenswrapper[30278]: I0318 18:19:26.721974 30278 generic.go:334] "Generic (PLEG): container finished" podID="d4a913f3-9113-409f-bddd-65390f556fd2" containerID="25d839eaf37c8784a412280dfb11d9917cd5ad33a8209d1f8aa0b357baaee826" exitCode=0 Mar 18 18:19:26.722049 master-0 kubenswrapper[30278]: I0318 18:19:26.722011 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" event={"ID":"d4a913f3-9113-409f-bddd-65390f556fd2","Type":"ContainerDied","Data":"25d839eaf37c8784a412280dfb11d9917cd5ad33a8209d1f8aa0b357baaee826"} Mar 18 18:19:26.722049 master-0 kubenswrapper[30278]: I0318 18:19:26.722029 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" event={"ID":"d4a913f3-9113-409f-bddd-65390f556fd2","Type":"ContainerStarted","Data":"511f668727545fed9c9fcb998bea80ea5c3529d934686802ddc390716b046dd4"} Mar 18 18:19:26.872143 master-0 
kubenswrapper[30278]: I0318 18:19:26.872054 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:26.977774 master-0 kubenswrapper[30278]: I0318 18:19:26.977677 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb4rf\" (UniqueName: \"kubernetes.io/projected/8b68cf46-84fc-418f-9c01-915501356564-kube-api-access-mb4rf\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:27.023349 master-0 kubenswrapper[30278]: I0318 18:19:27.022048 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-824c8-default-external-api-0"] Mar 18 18:19:27.099984 master-0 kubenswrapper[30278]: I0318 18:19:27.099903 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2acc3d40-c66c-4573-be45-36889199ee65" path="/var/lib/kubelet/pods/2acc3d40-c66c-4573-be45-36889199ee65/volumes" Mar 18 18:19:27.527775 master-0 kubenswrapper[30278]: I0318 18:19:27.527690 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f67d74887-q4vt6"] Mar 18 18:19:27.768591 master-0 kubenswrapper[30278]: I0318 18:19:27.768457 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-ggb6f" event={"ID":"ade5c277-043b-4e56-bc7c-63961acf67c4","Type":"ContainerStarted","Data":"58151d8c3ff62ab987e3ac88b6bec7ca0ac0420f8b3ac36b27cdb02e07049acc"} Mar 18 18:19:27.783511 master-0 kubenswrapper[30278]: I0318 18:19:27.783460 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" event={"ID":"d4a913f3-9113-409f-bddd-65390f556fd2","Type":"ContainerStarted","Data":"f68027f51845209e98c591538e0bb1d45a061a07370b8fe1f51a8ab90a7e1977"} Mar 18 18:19:27.784238 master-0 kubenswrapper[30278]: I0318 18:19:27.784195 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" Mar 18 18:19:27.785901 master-0 kubenswrapper[30278]: I0318 18:19:27.785837 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-external-api-0" event={"ID":"fc1a0fd5-e12c-4425-bf93-544d29a3d545","Type":"ContainerStarted","Data":"7d7bf932682e599c125b500291eae526bd6ff86bee24f3f35da2a3522a008102"} Mar 18 18:19:27.788372 master-0 kubenswrapper[30278]: I0318 18:19:27.788201 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f67d74887-q4vt6" event={"ID":"8ed0b9d6-4657-4f09-945d-eaec083a0836","Type":"ContainerStarted","Data":"44c38382fcc484cb0da95e6836affca9bf06cbde524348cb896be1249618ec39"} Mar 18 18:19:27.810305 master-0 kubenswrapper[30278]: I0318 18:19:27.806324 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-ggb6f" podStartSLOduration=14.520593618 podStartE2EDuration="23.806307636s" podCreationTimestamp="2026-03-18 18:19:04 +0000 UTC" firstStartedPulling="2026-03-18 18:19:16.274037839 +0000 UTC m=+1125.441222444" lastFinishedPulling="2026-03-18 18:19:25.559751867 +0000 UTC m=+1134.726936462" observedRunningTime="2026-03-18 18:19:27.801788924 +0000 UTC m=+1136.968973519" watchObservedRunningTime="2026-03-18 18:19:27.806307636 +0000 UTC m=+1136.973492231" Mar 18 18:19:27.837048 master-0 kubenswrapper[30278]: I0318 18:19:27.836942 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" podStartSLOduration=8.836913641 podStartE2EDuration="8.836913641s" podCreationTimestamp="2026-03-18 18:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:27.823329895 +0000 UTC m=+1136.990514490" watchObservedRunningTime="2026-03-18 18:19:27.836913641 +0000 UTC m=+1137.004098236" Mar 18 18:19:28.006702 master-0 kubenswrapper[30278]: I0318 
18:19:28.006652 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") pod \"glance-824c8-default-internal-api-0\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:28.153458 master-0 kubenswrapper[30278]: I0318 18:19:28.152774 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:28.824350 master-0 kubenswrapper[30278]: I0318 18:19:28.824265 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-external-api-0" event={"ID":"fc1a0fd5-e12c-4425-bf93-544d29a3d545","Type":"ContainerStarted","Data":"41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08"} Mar 18 18:19:28.824924 master-0 kubenswrapper[30278]: I0318 18:19:28.824365 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-external-api-0" event={"ID":"fc1a0fd5-e12c-4425-bf93-544d29a3d545","Type":"ContainerStarted","Data":"9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b"} Mar 18 18:19:28.824924 master-0 kubenswrapper[30278]: I0318 18:19:28.824432 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-824c8-default-external-api-0" podUID="fc1a0fd5-e12c-4425-bf93-544d29a3d545" containerName="glance-log" containerID="cri-o://9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b" gracePeriod=30 Mar 18 18:19:28.824924 master-0 kubenswrapper[30278]: I0318 18:19:28.824636 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-824c8-default-external-api-0" podUID="fc1a0fd5-e12c-4425-bf93-544d29a3d545" containerName="glance-httpd" containerID="cri-o://41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08" 
gracePeriod=30 Mar 18 18:19:28.835246 master-0 kubenswrapper[30278]: I0318 18:19:28.833762 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f67d74887-q4vt6" event={"ID":"8ed0b9d6-4657-4f09-945d-eaec083a0836","Type":"ContainerStarted","Data":"05e15fb1a43af3b7795e6693a23ed1f2ff160d7ffcc55aaef52d315789e5d4de"} Mar 18 18:19:28.835246 master-0 kubenswrapper[30278]: I0318 18:19:28.833899 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:19:28.860706 master-0 kubenswrapper[30278]: I0318 18:19:28.860373 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-824c8-default-external-api-0" podStartSLOduration=10.860351106 podStartE2EDuration="10.860351106s" podCreationTimestamp="2026-03-18 18:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:28.854351875 +0000 UTC m=+1138.021536470" watchObservedRunningTime="2026-03-18 18:19:28.860351106 +0000 UTC m=+1138.027535701" Mar 18 18:19:28.899486 master-0 kubenswrapper[30278]: I0318 18:19:28.899408 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-824c8-default-internal-api-0"] Mar 18 18:19:28.901345 master-0 kubenswrapper[30278]: I0318 18:19:28.901222 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6f67d74887-q4vt6" podStartSLOduration=3.901179157 podStartE2EDuration="3.901179157s" podCreationTimestamp="2026-03-18 18:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:28.887242781 +0000 UTC m=+1138.054427376" watchObservedRunningTime="2026-03-18 18:19:28.901179157 +0000 UTC m=+1138.068363752" Mar 18 18:19:29.756908 master-0 kubenswrapper[30278]: I0318 18:19:29.756840 30278 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:29.873947 master-0 kubenswrapper[30278]: I0318 18:19:29.873799 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-internal-api-0" event={"ID":"8b68cf46-84fc-418f-9c01-915501356564","Type":"ContainerStarted","Data":"b210bd87d2938ab2e1e8490aaafed8058301eef43cdb4f631906bab135491d8a"} Mar 18 18:19:29.878603 master-0 kubenswrapper[30278]: I0318 18:19:29.878538 30278 generic.go:334] "Generic (PLEG): container finished" podID="fc1a0fd5-e12c-4425-bf93-544d29a3d545" containerID="41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08" exitCode=143 Mar 18 18:19:29.878791 master-0 kubenswrapper[30278]: I0318 18:19:29.878627 30278 generic.go:334] "Generic (PLEG): container finished" podID="fc1a0fd5-e12c-4425-bf93-544d29a3d545" containerID="9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b" exitCode=143 Mar 18 18:19:29.878791 master-0 kubenswrapper[30278]: I0318 18:19:29.878539 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-external-api-0" event={"ID":"fc1a0fd5-e12c-4425-bf93-544d29a3d545","Type":"ContainerDied","Data":"41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08"} Mar 18 18:19:29.878791 master-0 kubenswrapper[30278]: I0318 18:19:29.878671 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-external-api-0" event={"ID":"fc1a0fd5-e12c-4425-bf93-544d29a3d545","Type":"ContainerDied","Data":"9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b"} Mar 18 18:19:29.878791 master-0 kubenswrapper[30278]: I0318 18:19:29.878695 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-external-api-0" event={"ID":"fc1a0fd5-e12c-4425-bf93-544d29a3d545","Type":"ContainerDied","Data":"7d7bf932682e599c125b500291eae526bd6ff86bee24f3f35da2a3522a008102"} Mar 18 
18:19:29.878791 master-0 kubenswrapper[30278]: I0318 18:19:29.878597 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:29.879554 master-0 kubenswrapper[30278]: I0318 18:19:29.879494 30278 scope.go:117] "RemoveContainer" containerID="41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08" Mar 18 18:19:29.904861 master-0 kubenswrapper[30278]: I0318 18:19:29.904735 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-config-data\") pod \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " Mar 18 18:19:29.905385 master-0 kubenswrapper[30278]: I0318 18:19:29.904913 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmpmv\" (UniqueName: \"kubernetes.io/projected/fc1a0fd5-e12c-4425-bf93-544d29a3d545-kube-api-access-jmpmv\") pod \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " Mar 18 18:19:29.905385 master-0 kubenswrapper[30278]: I0318 18:19:29.905047 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc1a0fd5-e12c-4425-bf93-544d29a3d545-logs\") pod \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " Mar 18 18:19:29.905385 master-0 kubenswrapper[30278]: I0318 18:19:29.905213 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") pod \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " Mar 18 18:19:29.906905 master-0 kubenswrapper[30278]: I0318 18:19:29.905266 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fc1a0fd5-e12c-4425-bf93-544d29a3d545-httpd-run\") pod \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " Mar 18 18:19:29.907014 master-0 kubenswrapper[30278]: I0318 18:19:29.906931 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-scripts\") pod \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " Mar 18 18:19:29.907228 master-0 kubenswrapper[30278]: I0318 18:19:29.907077 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-combined-ca-bundle\") pod \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\" (UID: \"fc1a0fd5-e12c-4425-bf93-544d29a3d545\") " Mar 18 18:19:29.907591 master-0 kubenswrapper[30278]: I0318 18:19:29.907553 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc1a0fd5-e12c-4425-bf93-544d29a3d545-logs" (OuterVolumeSpecName: "logs") pod "fc1a0fd5-e12c-4425-bf93-544d29a3d545" (UID: "fc1a0fd5-e12c-4425-bf93-544d29a3d545"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:19:29.908088 master-0 kubenswrapper[30278]: I0318 18:19:29.908035 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc1a0fd5-e12c-4425-bf93-544d29a3d545-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fc1a0fd5-e12c-4425-bf93-544d29a3d545" (UID: "fc1a0fd5-e12c-4425-bf93-544d29a3d545"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:19:29.909290 master-0 kubenswrapper[30278]: I0318 18:19:29.909263 30278 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc1a0fd5-e12c-4425-bf93-544d29a3d545-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:29.909382 master-0 kubenswrapper[30278]: I0318 18:19:29.909366 30278 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fc1a0fd5-e12c-4425-bf93-544d29a3d545-httpd-run\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:29.912626 master-0 kubenswrapper[30278]: I0318 18:19:29.912602 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc1a0fd5-e12c-4425-bf93-544d29a3d545-kube-api-access-jmpmv" (OuterVolumeSpecName: "kube-api-access-jmpmv") pod "fc1a0fd5-e12c-4425-bf93-544d29a3d545" (UID: "fc1a0fd5-e12c-4425-bf93-544d29a3d545"). InnerVolumeSpecName "kube-api-access-jmpmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:19:29.917559 master-0 kubenswrapper[30278]: I0318 18:19:29.917486 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-scripts" (OuterVolumeSpecName: "scripts") pod "fc1a0fd5-e12c-4425-bf93-544d29a3d545" (UID: "fc1a0fd5-e12c-4425-bf93-544d29a3d545"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:29.929922 master-0 kubenswrapper[30278]: I0318 18:19:29.923385 30278 scope.go:117] "RemoveContainer" containerID="9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b" Mar 18 18:19:29.951105 master-0 kubenswrapper[30278]: I0318 18:19:29.950729 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560" (OuterVolumeSpecName: "glance") pod "fc1a0fd5-e12c-4425-bf93-544d29a3d545" (UID: "fc1a0fd5-e12c-4425-bf93-544d29a3d545"). InnerVolumeSpecName "pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 18 18:19:29.974534 master-0 kubenswrapper[30278]: I0318 18:19:29.974479 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc1a0fd5-e12c-4425-bf93-544d29a3d545" (UID: "fc1a0fd5-e12c-4425-bf93-544d29a3d545"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:29.988786 master-0 kubenswrapper[30278]: I0318 18:19:29.988719 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-config-data" (OuterVolumeSpecName: "config-data") pod "fc1a0fd5-e12c-4425-bf93-544d29a3d545" (UID: "fc1a0fd5-e12c-4425-bf93-544d29a3d545"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:30.012180 master-0 kubenswrapper[30278]: I0318 18:19:30.012014 30278 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") on node \"master-0\" " Mar 18 18:19:30.012180 master-0 kubenswrapper[30278]: I0318 18:19:30.012073 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:30.012180 master-0 kubenswrapper[30278]: I0318 18:19:30.012095 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:30.012180 master-0 kubenswrapper[30278]: I0318 18:19:30.012110 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc1a0fd5-e12c-4425-bf93-544d29a3d545-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:30.012180 master-0 kubenswrapper[30278]: I0318 18:19:30.012124 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmpmv\" (UniqueName: \"kubernetes.io/projected/fc1a0fd5-e12c-4425-bf93-544d29a3d545-kube-api-access-jmpmv\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:30.055263 master-0 kubenswrapper[30278]: I0318 18:19:30.055209 30278 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 18 18:19:30.055641 master-0 kubenswrapper[30278]: I0318 18:19:30.055617 30278 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123" (UniqueName: "kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560") on node "master-0" Mar 18 18:19:30.123083 master-0 kubenswrapper[30278]: I0318 18:19:30.116411 30278 reconciler_common.go:293] "Volume detached for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:30.128335 master-0 kubenswrapper[30278]: I0318 18:19:30.128232 30278 scope.go:117] "RemoveContainer" containerID="41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08" Mar 18 18:19:30.129683 master-0 kubenswrapper[30278]: E0318 18:19:30.129611 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08\": container with ID starting with 41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08 not found: ID does not exist" containerID="41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08" Mar 18 18:19:30.129765 master-0 kubenswrapper[30278]: I0318 18:19:30.129696 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08"} err="failed to get container status \"41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08\": rpc error: code = NotFound desc = could not find container \"41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08\": container with ID starting with 41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08 not found: ID does not exist" Mar 18 18:19:30.129765 master-0 kubenswrapper[30278]: I0318 18:19:30.129732 30278 scope.go:117] "RemoveContainer" 
containerID="9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b" Mar 18 18:19:30.130756 master-0 kubenswrapper[30278]: E0318 18:19:30.130721 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b\": container with ID starting with 9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b not found: ID does not exist" containerID="9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b" Mar 18 18:19:30.130820 master-0 kubenswrapper[30278]: I0318 18:19:30.130758 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b"} err="failed to get container status \"9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b\": rpc error: code = NotFound desc = could not find container \"9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b\": container with ID starting with 9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b not found: ID does not exist" Mar 18 18:19:30.130820 master-0 kubenswrapper[30278]: I0318 18:19:30.130781 30278 scope.go:117] "RemoveContainer" containerID="41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08" Mar 18 18:19:30.131862 master-0 kubenswrapper[30278]: I0318 18:19:30.131812 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08"} err="failed to get container status \"41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08\": rpc error: code = NotFound desc = could not find container \"41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08\": container with ID starting with 41ae04646b8ab48f916fb6e6df649600ff7e22d01a8647d3d99222fc0f7ebc08 not found: ID does not exist" Mar 18 18:19:30.131862 master-0 
kubenswrapper[30278]: I0318 18:19:30.131852 30278 scope.go:117] "RemoveContainer" containerID="9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b" Mar 18 18:19:30.133551 master-0 kubenswrapper[30278]: I0318 18:19:30.133507 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b"} err="failed to get container status \"9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b\": rpc error: code = NotFound desc = could not find container \"9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b\": container with ID starting with 9dfe5fab1c3bb5a2f5da3ec4e1bb3b4a60698a547406ccc1afa3a917a83ece9b not found: ID does not exist" Mar 18 18:19:30.245316 master-0 kubenswrapper[30278]: I0318 18:19:30.236917 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-824c8-default-external-api-0"] Mar 18 18:19:30.298501 master-0 kubenswrapper[30278]: I0318 18:19:30.279462 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-824c8-default-external-api-0"] Mar 18 18:19:30.338512 master-0 kubenswrapper[30278]: I0318 18:19:30.338421 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-824c8-default-external-api-0"] Mar 18 18:19:30.342574 master-0 kubenswrapper[30278]: E0318 18:19:30.342512 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1a0fd5-e12c-4425-bf93-544d29a3d545" containerName="glance-httpd" Mar 18 18:19:30.342752 master-0 kubenswrapper[30278]: I0318 18:19:30.342621 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1a0fd5-e12c-4425-bf93-544d29a3d545" containerName="glance-httpd" Mar 18 18:19:30.342752 master-0 kubenswrapper[30278]: E0318 18:19:30.342717 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1a0fd5-e12c-4425-bf93-544d29a3d545" containerName="glance-log" Mar 18 18:19:30.342752 master-0 
kubenswrapper[30278]: I0318 18:19:30.342729 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1a0fd5-e12c-4425-bf93-544d29a3d545" containerName="glance-log"
Mar 18 18:19:30.361087 master-0 kubenswrapper[30278]: I0318 18:19:30.356937 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc1a0fd5-e12c-4425-bf93-544d29a3d545" containerName="glance-httpd"
Mar 18 18:19:30.361087 master-0 kubenswrapper[30278]: I0318 18:19:30.357048 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc1a0fd5-e12c-4425-bf93-544d29a3d545" containerName="glance-log"
Mar 18 18:19:30.361087 master-0 kubenswrapper[30278]: I0318 18:19:30.358857 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.367035 master-0 kubenswrapper[30278]: I0318 18:19:30.366947 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Mar 18 18:19:30.367393 master-0 kubenswrapper[30278]: I0318 18:19:30.367252 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-824c8-default-external-config-data"
Mar 18 18:19:30.370750 master-0 kubenswrapper[30278]: I0318 18:19:30.370661 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-824c8-default-external-api-0"]
Mar 18 18:19:30.433440 master-0 kubenswrapper[30278]: I0318 18:19:30.433261 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-config-data\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.434107 master-0 kubenswrapper[30278]: I0318 18:19:30.434085 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.434325 master-0 kubenswrapper[30278]: I0318 18:19:30.434305 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977956f8-854b-4c87-9485-c67f2be25e4c-logs\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.434556 master-0 kubenswrapper[30278]: I0318 18:19:30.434540 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/977956f8-854b-4c87-9485-c67f2be25e4c-httpd-run\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.434742 master-0 kubenswrapper[30278]: I0318 18:19:30.434728 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-scripts\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.434862 master-0 kubenswrapper[30278]: I0318 18:19:30.434830 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx65f\" (UniqueName: \"kubernetes.io/projected/977956f8-854b-4c87-9485-c67f2be25e4c-kube-api-access-kx65f\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.435034 master-0 kubenswrapper[30278]: I0318 18:19:30.434998 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-combined-ca-bundle\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.435343 master-0 kubenswrapper[30278]: I0318 18:19:30.435311 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-public-tls-certs\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.538316 master-0 kubenswrapper[30278]: I0318 18:19:30.538195 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-scripts\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.538675 master-0 kubenswrapper[30278]: I0318 18:19:30.538403 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx65f\" (UniqueName: \"kubernetes.io/projected/977956f8-854b-4c87-9485-c67f2be25e4c-kube-api-access-kx65f\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.539512 master-0 kubenswrapper[30278]: I0318 18:19:30.538559 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-combined-ca-bundle\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.539641 master-0 kubenswrapper[30278]: I0318 18:19:30.539615 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-public-tls-certs\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.539771 master-0 kubenswrapper[30278]: I0318 18:19:30.539737 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-config-data\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.540307 master-0 kubenswrapper[30278]: I0318 18:19:30.540241 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.540552 master-0 kubenswrapper[30278]: I0318 18:19:30.540500 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977956f8-854b-4c87-9485-c67f2be25e4c-logs\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.540638 master-0 kubenswrapper[30278]: I0318 18:19:30.540614 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/977956f8-854b-4c87-9485-c67f2be25e4c-httpd-run\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.541168 master-0 kubenswrapper[30278]: I0318 18:19:30.541114 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/977956f8-854b-4c87-9485-c67f2be25e4c-httpd-run\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.541814 master-0 kubenswrapper[30278]: I0318 18:19:30.541712 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977956f8-854b-4c87-9485-c67f2be25e4c-logs\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.542401 master-0 kubenswrapper[30278]: I0318 18:19:30.542360 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-scripts\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.542846 master-0 kubenswrapper[30278]: I0318 18:19:30.542812 30278 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 18 18:19:30.542846 master-0 kubenswrapper[30278]: I0318 18:19:30.542849 30278 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/94c3d9a5864b2a0676e8a45c98800fb7c7e5f534272efb0ca320119ec8f41cb2/globalmount\"" pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.543869 master-0 kubenswrapper[30278]: I0318 18:19:30.543837 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-public-tls-certs\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.544442 master-0 kubenswrapper[30278]: I0318 18:19:30.544400 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-combined-ca-bundle\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.548210 master-0 kubenswrapper[30278]: I0318 18:19:30.548167 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-config-data\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.555769 master-0 kubenswrapper[30278]: I0318 18:19:30.554730 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx65f\" (UniqueName: \"kubernetes.io/projected/977956f8-854b-4c87-9485-c67f2be25e4c-kube-api-access-kx65f\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:30.895402 master-0 kubenswrapper[30278]: I0318 18:19:30.895321 30278 generic.go:334] "Generic (PLEG): container finished" podID="47f543cd-d5bf-4421-aae3-516afd48c609" containerID="535479142a27fb06a482b3a4e51258b7ab945ee4e49c3aec0da0d12548de907d" exitCode=0
Mar 18 18:19:30.896690 master-0 kubenswrapper[30278]: I0318 18:19:30.896579 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-db-sync-dxpjk" event={"ID":"47f543cd-d5bf-4421-aae3-516afd48c609","Type":"ContainerDied","Data":"535479142a27fb06a482b3a4e51258b7ab945ee4e49c3aec0da0d12548de907d"}
Mar 18 18:19:30.900584 master-0 kubenswrapper[30278]: I0318 18:19:30.900548 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-internal-api-0" event={"ID":"8b68cf46-84fc-418f-9c01-915501356564","Type":"ContainerStarted","Data":"c846084ab1d1864fded3953bbd85313f0b201b8dd632f29e8018ebc7fb0d0f4a"}
Mar 18 18:19:30.900839 master-0 kubenswrapper[30278]: I0318 18:19:30.900786 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-internal-api-0" event={"ID":"8b68cf46-84fc-418f-9c01-915501356564","Type":"ContainerStarted","Data":"12f888b489aa0e87b8b8d9e347d25c40f5ff39fc8456b52c776698003f1f51eb"}
Mar 18 18:19:30.952921 master-0 kubenswrapper[30278]: I0318 18:19:30.951141 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-824c8-default-internal-api-0" podStartSLOduration=4.95110644 podStartE2EDuration="4.95110644s" podCreationTimestamp="2026-03-18 18:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:30.944198704 +0000 UTC m=+1140.111383299" watchObservedRunningTime="2026-03-18 18:19:30.95110644 +0000 UTC m=+1140.118291045"
Mar 18 18:19:31.081424 master-0 kubenswrapper[30278]: I0318 18:19:31.075775 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc1a0fd5-e12c-4425-bf93-544d29a3d545" path="/var/lib/kubelet/pods/fc1a0fd5-e12c-4425-bf93-544d29a3d545/volumes"
Mar 18 18:19:31.929879 master-0 kubenswrapper[30278]: I0318 18:19:31.929786 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") pod \"glance-824c8-default-external-api-0\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:32.212257 master-0 kubenswrapper[30278]: I0318 18:19:32.212185 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:32.467405 master-0 kubenswrapper[30278]: I0318 18:19:32.467312 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-db-sync-dxpjk"
Mar 18 18:19:32.511213 master-0 kubenswrapper[30278]: I0318 18:19:32.511003 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47f543cd-d5bf-4421-aae3-516afd48c609-etc-machine-id\") pod \"47f543cd-d5bf-4421-aae3-516afd48c609\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") "
Mar 18 18:19:32.511607 master-0 kubenswrapper[30278]: I0318 18:19:32.511342 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-db-sync-config-data\") pod \"47f543cd-d5bf-4421-aae3-516afd48c609\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") "
Mar 18 18:19:32.512421 master-0 kubenswrapper[30278]: I0318 18:19:32.511770 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-combined-ca-bundle\") pod \"47f543cd-d5bf-4421-aae3-516afd48c609\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") "
Mar 18 18:19:32.512421 master-0 kubenswrapper[30278]: I0318 18:19:32.511868 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xmsb\" (UniqueName: \"kubernetes.io/projected/47f543cd-d5bf-4421-aae3-516afd48c609-kube-api-access-9xmsb\") pod \"47f543cd-d5bf-4421-aae3-516afd48c609\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") "
Mar 18 18:19:32.512421 master-0 kubenswrapper[30278]: I0318 18:19:32.511939 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-config-data\") pod \"47f543cd-d5bf-4421-aae3-516afd48c609\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") "
Mar 18 18:19:32.512421 master-0 kubenswrapper[30278]: I0318 18:19:32.512081 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-scripts\") pod \"47f543cd-d5bf-4421-aae3-516afd48c609\" (UID: \"47f543cd-d5bf-4421-aae3-516afd48c609\") "
Mar 18 18:19:32.519606 master-0 kubenswrapper[30278]: I0318 18:19:32.519205 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-scripts" (OuterVolumeSpecName: "scripts") pod "47f543cd-d5bf-4421-aae3-516afd48c609" (UID: "47f543cd-d5bf-4421-aae3-516afd48c609"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:32.519855 master-0 kubenswrapper[30278]: I0318 18:19:32.519762 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47f543cd-d5bf-4421-aae3-516afd48c609-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "47f543cd-d5bf-4421-aae3-516afd48c609" (UID: "47f543cd-d5bf-4421-aae3-516afd48c609"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:19:32.538660 master-0 kubenswrapper[30278]: I0318 18:19:32.538583 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "47f543cd-d5bf-4421-aae3-516afd48c609" (UID: "47f543cd-d5bf-4421-aae3-516afd48c609"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:32.538897 master-0 kubenswrapper[30278]: I0318 18:19:32.538700 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47f543cd-d5bf-4421-aae3-516afd48c609-kube-api-access-9xmsb" (OuterVolumeSpecName: "kube-api-access-9xmsb") pod "47f543cd-d5bf-4421-aae3-516afd48c609" (UID: "47f543cd-d5bf-4421-aae3-516afd48c609"). InnerVolumeSpecName "kube-api-access-9xmsb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:19:32.557379 master-0 kubenswrapper[30278]: I0318 18:19:32.557220 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47f543cd-d5bf-4421-aae3-516afd48c609" (UID: "47f543cd-d5bf-4421-aae3-516afd48c609"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:32.584438 master-0 kubenswrapper[30278]: I0318 18:19:32.584264 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-config-data" (OuterVolumeSpecName: "config-data") pod "47f543cd-d5bf-4421-aae3-516afd48c609" (UID: "47f543cd-d5bf-4421-aae3-516afd48c609"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:32.616527 master-0 kubenswrapper[30278]: I0318 18:19:32.616419 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:32.616527 master-0 kubenswrapper[30278]: I0318 18:19:32.616503 30278 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/47f543cd-d5bf-4421-aae3-516afd48c609-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:32.616527 master-0 kubenswrapper[30278]: I0318 18:19:32.616522 30278 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-db-sync-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:32.616527 master-0 kubenswrapper[30278]: I0318 18:19:32.616536 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:32.616527 master-0 kubenswrapper[30278]: I0318 18:19:32.616550 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xmsb\" (UniqueName: \"kubernetes.io/projected/47f543cd-d5bf-4421-aae3-516afd48c609-kube-api-access-9xmsb\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:32.617155 master-0 kubenswrapper[30278]: I0318 18:19:32.616564 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47f543cd-d5bf-4421-aae3-516afd48c609-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:32.834132 master-0 kubenswrapper[30278]: I0318 18:19:32.834059 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-824c8-default-external-api-0"]
Mar 18 18:19:32.937863 master-0 kubenswrapper[30278]: I0318 18:19:32.937800 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-external-api-0" event={"ID":"977956f8-854b-4c87-9485-c67f2be25e4c","Type":"ContainerStarted","Data":"a4239717f1db3dd24ab3ccb320d6862448099f1946549c7c4f7e654578269862"}
Mar 18 18:19:32.942074 master-0 kubenswrapper[30278]: I0318 18:19:32.942018 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-db-sync-dxpjk" event={"ID":"47f543cd-d5bf-4421-aae3-516afd48c609","Type":"ContainerDied","Data":"7fc557b94f8a0c72e26d7c9c3686f23d710bfdf15e08e903058c45ee06f352c4"}
Mar 18 18:19:32.942074 master-0 kubenswrapper[30278]: I0318 18:19:32.942070 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fc557b94f8a0c72e26d7c9c3686f23d710bfdf15e08e903058c45ee06f352c4"
Mar 18 18:19:32.942388 master-0 kubenswrapper[30278]: I0318 18:19:32.942160 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-db-sync-dxpjk"
Mar 18 18:19:33.395408 master-0 kubenswrapper[30278]: I0318 18:19:33.386001 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b9df6-scheduler-0"]
Mar 18 18:19:33.395408 master-0 kubenswrapper[30278]: E0318 18:19:33.388194 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47f543cd-d5bf-4421-aae3-516afd48c609" containerName="cinder-b9df6-db-sync"
Mar 18 18:19:33.395408 master-0 kubenswrapper[30278]: I0318 18:19:33.388219 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f543cd-d5bf-4421-aae3-516afd48c609" containerName="cinder-b9df6-db-sync"
Mar 18 18:19:33.395408 master-0 kubenswrapper[30278]: I0318 18:19:33.390296 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="47f543cd-d5bf-4421-aae3-516afd48c609" containerName="cinder-b9df6-db-sync"
Mar 18 18:19:33.442449 master-0 kubenswrapper[30278]: I0318 18:19:33.437990 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-scheduler-0"]
Mar 18 18:19:33.442449 master-0 kubenswrapper[30278]: I0318 18:19:33.438165 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b9df6-volume-lvm-iscsi-0"]
Mar 18 18:19:33.442449 master-0 kubenswrapper[30278]: I0318 18:19:33.439132 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-scheduler-0"
Mar 18 18:19:33.454695 master-0 kubenswrapper[30278]: I0318 18:19:33.454639 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.455454 master-0 kubenswrapper[30278]: I0318 18:19:33.455431 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b9df6-config-data"
Mar 18 18:19:33.480003 master-0 kubenswrapper[30278]: I0318 18:19:33.479949 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b9df6-scheduler-config-data"
Mar 18 18:19:33.487622 master-0 kubenswrapper[30278]: I0318 18:19:33.484548 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b9df6-scripts"
Mar 18 18:19:33.487622 master-0 kubenswrapper[30278]: I0318 18:19:33.486122 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-volume-lvm-iscsi-0"]
Mar 18 18:19:33.509615 master-0 kubenswrapper[30278]: I0318 18:19:33.509087 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b9df6-volume-lvm-iscsi-config-data"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.603580 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-lib-modules\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.603679 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-config-data\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.603705 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc87ffe2-a115-459b-a5b1-87c747b1df2a-etc-machine-id\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.603733 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-config-data\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.603760 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-locks-cinder\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.603814 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-config-data-custom\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.603859 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-dev\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.603883 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z45w4\" (UniqueName: \"kubernetes.io/projected/dc87ffe2-a115-459b-a5b1-87c747b1df2a-kube-api-access-z45w4\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.603914 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4jsb\" (UniqueName: \"kubernetes.io/projected/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-kube-api-access-d4jsb\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.603950 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-combined-ca-bundle\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.603965 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-combined-ca-bundle\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.603992 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-lib-cinder\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.604008 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-iscsi\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.604027 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-nvme\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.604063 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-machine-id\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.604096 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-config-data-custom\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.604115 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-run\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.604160 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-sys\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.604183 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-locks-brick\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.604204 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-scripts\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.608379 master-0 kubenswrapper[30278]: I0318 18:19:33.604219 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-scripts\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0"
Mar 18 18:19:33.656814 master-0 kubenswrapper[30278]: I0318 18:19:33.656737 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-97cb45bf9-q6h4g"]
Mar 18 18:19:33.657170 master-0 kubenswrapper[30278]: I0318 18:19:33.657132 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" podUID="d4a913f3-9113-409f-bddd-65390f556fd2" containerName="dnsmasq-dns" containerID="cri-o://f68027f51845209e98c591538e0bb1d45a061a07370b8fe1f51a8ab90a7e1977" gracePeriod=10
Mar 18 18:19:33.694309 master-0 kubenswrapper[30278]: I0318 18:19:33.684534 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.709143 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-locks-cinder\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.709871 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-config-data-custom\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.710003 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-dev\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.710054 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z45w4\" (UniqueName: \"kubernetes.io/projected/dc87ffe2-a115-459b-a5b1-87c747b1df2a-kube-api-access-z45w4\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.710128 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4jsb\" (UniqueName: \"kubernetes.io/projected/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-kube-api-access-d4jsb\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.710450 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-combined-ca-bundle\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.710469 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-combined-ca-bundle\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.710531 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-lib-cinder\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.710565 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-iscsi\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.710594 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-nvme\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.710654 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-machine-id\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.710736 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-config-data-custom\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.710758 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-run\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.710860 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-sys\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.710901 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-locks-brick\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.710999 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-scripts\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0"
Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.711039 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\"
(UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-scripts\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.711101 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-lib-modules\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.711132 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-config-data\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.711155 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc87ffe2-a115-459b-a5b1-87c747b1df2a-etc-machine-id\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:33.711320 master-0 kubenswrapper[30278]: I0318 18:19:33.711204 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-config-data\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.723337 master-0 kubenswrapper[30278]: I0318 18:19:33.713722 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-nvme\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.723337 master-0 kubenswrapper[30278]: I0318 18:19:33.714106 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-locks-cinder\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.723337 master-0 kubenswrapper[30278]: I0318 18:19:33.714473 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-dev\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.723337 master-0 kubenswrapper[30278]: I0318 18:19:33.714491 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-lib-modules\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.723337 master-0 kubenswrapper[30278]: I0318 18:19:33.714678 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-locks-brick\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.723337 master-0 kubenswrapper[30278]: I0318 18:19:33.715106 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-lib-cinder\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.723337 master-0 kubenswrapper[30278]: I0318 18:19:33.715161 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-iscsi\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.723337 master-0 kubenswrapper[30278]: I0318 18:19:33.715188 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-run\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.723337 master-0 kubenswrapper[30278]: I0318 18:19:33.715218 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-sys\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.723337 master-0 kubenswrapper[30278]: I0318 18:19:33.717860 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-machine-id\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.723337 master-0 kubenswrapper[30278]: I0318 18:19:33.721584 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/dc87ffe2-a115-459b-a5b1-87c747b1df2a-etc-machine-id\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:33.744438 master-0 kubenswrapper[30278]: I0318 18:19:33.734313 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-config-data\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:33.744438 master-0 kubenswrapper[30278]: I0318 18:19:33.734796 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-config-data\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.744438 master-0 kubenswrapper[30278]: I0318 18:19:33.739630 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-scripts\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.744438 master-0 kubenswrapper[30278]: I0318 18:19:33.741579 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65f9768575-656gb"] Mar 18 18:19:33.744438 master-0 kubenswrapper[30278]: I0318 18:19:33.743853 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-scripts\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:33.744438 master-0 kubenswrapper[30278]: I0318 18:19:33.744468 30278 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-combined-ca-bundle\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.811764 master-0 kubenswrapper[30278]: I0318 18:19:33.811702 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4jsb\" (UniqueName: \"kubernetes.io/projected/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-kube-api-access-d4jsb\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.813669 master-0 kubenswrapper[30278]: I0318 18:19:33.813594 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:33.820915 master-0 kubenswrapper[30278]: I0318 18:19:33.820852 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-combined-ca-bundle\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:33.821050 master-0 kubenswrapper[30278]: I0318 18:19:33.820993 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-config-data-custom\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:33.822362 master-0 kubenswrapper[30278]: I0318 18:19:33.822292 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-config-data-custom\") pod 
\"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:33.848686 master-0 kubenswrapper[30278]: I0318 18:19:33.848609 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z45w4\" (UniqueName: \"kubernetes.io/projected/dc87ffe2-a115-459b-a5b1-87c747b1df2a-kube-api-access-z45w4\") pod \"cinder-b9df6-scheduler-0\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:33.908135 master-0 kubenswrapper[30278]: I0318 18:19:33.907994 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b9df6-backup-0"] Mar 18 18:19:33.912568 master-0 kubenswrapper[30278]: I0318 18:19:33.912504 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:33.952334 master-0 kubenswrapper[30278]: I0318 18:19:33.951347 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-dns-swift-storage-0\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:33.952334 master-0 kubenswrapper[30278]: I0318 18:19:33.951459 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-ovsdbserver-nb\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:33.952334 master-0 kubenswrapper[30278]: I0318 18:19:33.951494 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4662\" (UniqueName: 
\"kubernetes.io/projected/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-kube-api-access-v4662\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:33.952334 master-0 kubenswrapper[30278]: I0318 18:19:33.951537 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-config\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:33.952334 master-0 kubenswrapper[30278]: I0318 18:19:33.951576 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-dns-svc\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:33.952334 master-0 kubenswrapper[30278]: I0318 18:19:33.951598 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-ovsdbserver-sb\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:33.961417 master-0 kubenswrapper[30278]: I0318 18:19:33.953954 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b9df6-backup-config-data" Mar 18 18:19:33.992439 master-0 kubenswrapper[30278]: I0318 18:19:33.992314 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65f9768575-656gb"] Mar 18 18:19:34.010084 master-0 kubenswrapper[30278]: I0318 18:19:34.010018 30278 generic.go:334] "Generic (PLEG): container finished" 
podID="c5b88faf-e795-428e-8c3b-5a81d27c4a63" containerID="5aef6bdfc2372b6574b3548d6a02f06098f5653e91ae94679695f8dc98e67a7e" exitCode=0 Mar 18 18:19:34.010556 master-0 kubenswrapper[30278]: I0318 18:19:34.010518 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7kvlq" event={"ID":"c5b88faf-e795-428e-8c3b-5a81d27c4a63","Type":"ContainerDied","Data":"5aef6bdfc2372b6574b3548d6a02f06098f5653e91ae94679695f8dc98e67a7e"} Mar 18 18:19:34.024099 master-0 kubenswrapper[30278]: I0318 18:19:34.021291 30278 generic.go:334] "Generic (PLEG): container finished" podID="d4a913f3-9113-409f-bddd-65390f556fd2" containerID="f68027f51845209e98c591538e0bb1d45a061a07370b8fe1f51a8ab90a7e1977" exitCode=0 Mar 18 18:19:34.024099 master-0 kubenswrapper[30278]: I0318 18:19:34.021503 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" event={"ID":"d4a913f3-9113-409f-bddd-65390f556fd2","Type":"ContainerDied","Data":"f68027f51845209e98c591538e0bb1d45a061a07370b8fe1f51a8ab90a7e1977"} Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060174 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-dev\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060259 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-config-data-custom\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060301 30278 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-sys\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060321 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh9jw\" (UniqueName: \"kubernetes.io/projected/becba0cb-b638-43c2-af99-4269efec025f-kube-api-access-qh9jw\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060388 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-dns-swift-storage-0\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060406 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-run\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060429 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-lib-modules\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060472 30278 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-config-data\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060501 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-ovsdbserver-nb\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060535 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-iscsi\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060558 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4662\" (UniqueName: \"kubernetes.io/projected/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-kube-api-access-v4662\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060580 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-combined-ca-bundle\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.065286 master-0 
kubenswrapper[30278]: I0318 18:19:34.060602 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-machine-id\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060636 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-lib-cinder\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060658 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-config\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060675 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-locks-cinder\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060708 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-locks-brick\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 
18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060731 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-dns-svc\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060756 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-ovsdbserver-sb\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060774 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-nvme\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.060804 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-scripts\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.061047 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-backup-0"] Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.064223 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-ovsdbserver-nb\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:34.065286 master-0 kubenswrapper[30278]: I0318 18:19:34.064876 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-config\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:34.066161 master-0 kubenswrapper[30278]: I0318 18:19:34.065378 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-ovsdbserver-sb\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:34.066161 master-0 kubenswrapper[30278]: I0318 18:19:34.065472 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-dns-swift-storage-0\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:34.066161 master-0 kubenswrapper[30278]: I0318 18:19:34.065675 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-dns-svc\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:34.086390 master-0 kubenswrapper[30278]: I0318 18:19:34.086333 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4662\" (UniqueName: 
\"kubernetes.io/projected/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-kube-api-access-v4662\") pod \"dnsmasq-dns-65f9768575-656gb\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:34.100520 master-0 kubenswrapper[30278]: I0318 18:19:34.100436 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:34.127328 master-0 kubenswrapper[30278]: I0318 18:19:34.121925 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:34.143882 master-0 kubenswrapper[30278]: I0318 18:19:34.143565 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b9df6-api-0"] Mar 18 18:19:34.145943 master-0 kubenswrapper[30278]: I0318 18:19:34.145902 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.149246 master-0 kubenswrapper[30278]: I0318 18:19:34.149197 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b9df6-api-config-data" Mar 18 18:19:34.164561 master-0 kubenswrapper[30278]: I0318 18:19:34.164396 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-locks-brick\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.164561 master-0 kubenswrapper[30278]: I0318 18:19:34.164503 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-nvme\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.164561 master-0 kubenswrapper[30278]: I0318 18:19:34.164552 
30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-scripts\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.164858 master-0 kubenswrapper[30278]: I0318 18:19:34.164601 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-dev\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.164858 master-0 kubenswrapper[30278]: I0318 18:19:34.164627 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-config-data-custom\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.164858 master-0 kubenswrapper[30278]: I0318 18:19:34.164634 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-locks-brick\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.164858 master-0 kubenswrapper[30278]: I0318 18:19:34.164652 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh9jw\" (UniqueName: \"kubernetes.io/projected/becba0cb-b638-43c2-af99-4269efec025f-kube-api-access-qh9jw\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.164858 master-0 kubenswrapper[30278]: I0318 18:19:34.164765 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"sys\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-sys\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.166460 master-0 kubenswrapper[30278]: I0318 18:19:34.164982 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-run\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.166460 master-0 kubenswrapper[30278]: I0318 18:19:34.165010 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-lib-modules\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.166548 master-0 kubenswrapper[30278]: I0318 18:19:34.166476 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-sys\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.168719 master-0 kubenswrapper[30278]: I0318 18:19:34.167011 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-lib-modules\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.168719 master-0 kubenswrapper[30278]: I0318 18:19:34.167065 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-run\") pod \"cinder-b9df6-backup-0\" (UID: 
\"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.168719 master-0 kubenswrapper[30278]: I0318 18:19:34.167172 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-dev\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.168719 master-0 kubenswrapper[30278]: I0318 18:19:34.167560 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-nvme\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.168719 master-0 kubenswrapper[30278]: I0318 18:19:34.167751 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-config-data\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.168719 master-0 kubenswrapper[30278]: I0318 18:19:34.167871 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-iscsi\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.168719 master-0 kubenswrapper[30278]: I0318 18:19:34.167915 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-combined-ca-bundle\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.168719 master-0 
kubenswrapper[30278]: I0318 18:19:34.167932 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-machine-id\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.168719 master-0 kubenswrapper[30278]: I0318 18:19:34.168015 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-lib-cinder\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.168719 master-0 kubenswrapper[30278]: I0318 18:19:34.168061 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-locks-cinder\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.168719 master-0 kubenswrapper[30278]: I0318 18:19:34.168237 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-locks-cinder\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.168719 master-0 kubenswrapper[30278]: I0318 18:19:34.168392 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-machine-id\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.168719 master-0 kubenswrapper[30278]: I0318 18:19:34.168437 30278 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-lib-cinder\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.171790 master-0 kubenswrapper[30278]: I0318 18:19:34.171743 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-iscsi\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.174435 master-0 kubenswrapper[30278]: I0318 18:19:34.173517 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-config-data\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.182430 master-0 kubenswrapper[30278]: I0318 18:19:34.181358 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-api-0"] Mar 18 18:19:34.194057 master-0 kubenswrapper[30278]: I0318 18:19:34.194009 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-scripts\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.208553 master-0 kubenswrapper[30278]: I0318 18:19:34.206920 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-combined-ca-bundle\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.208553 master-0 
kubenswrapper[30278]: I0318 18:19:34.207013 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-config-data-custom\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.239817 master-0 kubenswrapper[30278]: I0318 18:19:34.239666 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh9jw\" (UniqueName: \"kubernetes.io/projected/becba0cb-b638-43c2-af99-4269efec025f-kube-api-access-qh9jw\") pod \"cinder-b9df6-backup-0\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.287100 master-0 kubenswrapper[30278]: I0318 18:19:34.287042 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:34.287695 master-0 kubenswrapper[30278]: I0318 18:19:34.287630 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-combined-ca-bundle\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.287819 master-0 kubenswrapper[30278]: I0318 18:19:34.287794 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-scripts\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.288178 master-0 kubenswrapper[30278]: I0318 18:19:34.288152 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-config-data\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.288355 master-0 kubenswrapper[30278]: I0318 18:19:34.288333 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02c0bb6e-e750-41b0-8b7b-afb80c5293af-logs\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.288437 master-0 kubenswrapper[30278]: I0318 18:19:34.288416 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mptjs\" (UniqueName: \"kubernetes.io/projected/02c0bb6e-e750-41b0-8b7b-afb80c5293af-kube-api-access-mptjs\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.288526 master-0 kubenswrapper[30278]: I0318 18:19:34.288505 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/02c0bb6e-e750-41b0-8b7b-afb80c5293af-etc-machine-id\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.288575 master-0 kubenswrapper[30278]: I0318 18:19:34.288532 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-config-data-custom\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.314347 master-0 kubenswrapper[30278]: I0318 18:19:34.311633 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:34.397154 master-0 kubenswrapper[30278]: I0318 18:19:34.397083 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-combined-ca-bundle\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.397377 master-0 kubenswrapper[30278]: I0318 18:19:34.397236 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-scripts\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.397861 master-0 kubenswrapper[30278]: I0318 18:19:34.397557 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-config-data\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.399829 master-0 kubenswrapper[30278]: I0318 18:19:34.398709 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02c0bb6e-e750-41b0-8b7b-afb80c5293af-logs\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.399829 master-0 kubenswrapper[30278]: I0318 18:19:34.398804 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mptjs\" (UniqueName: \"kubernetes.io/projected/02c0bb6e-e750-41b0-8b7b-afb80c5293af-kube-api-access-mptjs\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.399829 master-0 
kubenswrapper[30278]: I0318 18:19:34.398916 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/02c0bb6e-e750-41b0-8b7b-afb80c5293af-etc-machine-id\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.399829 master-0 kubenswrapper[30278]: I0318 18:19:34.398940 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-config-data-custom\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.399829 master-0 kubenswrapper[30278]: I0318 18:19:34.399715 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02c0bb6e-e750-41b0-8b7b-afb80c5293af-logs\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.399829 master-0 kubenswrapper[30278]: I0318 18:19:34.399790 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/02c0bb6e-e750-41b0-8b7b-afb80c5293af-etc-machine-id\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.408232 master-0 kubenswrapper[30278]: I0318 18:19:34.405876 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-config-data\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.408232 master-0 kubenswrapper[30278]: I0318 18:19:34.406484 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-combined-ca-bundle\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.413317 master-0 kubenswrapper[30278]: I0318 18:19:34.411228 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-config-data-custom\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.413317 master-0 kubenswrapper[30278]: I0318 18:19:34.411384 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-scripts\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.437069 master-0 kubenswrapper[30278]: I0318 18:19:34.437002 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mptjs\" (UniqueName: \"kubernetes.io/projected/02c0bb6e-e750-41b0-8b7b-afb80c5293af-kube-api-access-mptjs\") pod \"cinder-b9df6-api-0\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.573641 master-0 kubenswrapper[30278]: I0318 18:19:34.572865 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:34.735723 master-0 kubenswrapper[30278]: I0318 18:19:34.715757 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" Mar 18 18:19:34.828958 master-0 kubenswrapper[30278]: I0318 18:19:34.828876 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6vz9\" (UniqueName: \"kubernetes.io/projected/d4a913f3-9113-409f-bddd-65390f556fd2-kube-api-access-z6vz9\") pod \"d4a913f3-9113-409f-bddd-65390f556fd2\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " Mar 18 18:19:34.836511 master-0 kubenswrapper[30278]: I0318 18:19:34.835414 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-dns-swift-storage-0\") pod \"d4a913f3-9113-409f-bddd-65390f556fd2\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " Mar 18 18:19:34.836511 master-0 kubenswrapper[30278]: I0318 18:19:34.835610 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-config\") pod \"d4a913f3-9113-409f-bddd-65390f556fd2\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " Mar 18 18:19:34.836511 master-0 kubenswrapper[30278]: I0318 18:19:34.835738 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-ovsdbserver-nb\") pod \"d4a913f3-9113-409f-bddd-65390f556fd2\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " Mar 18 18:19:34.836511 master-0 kubenswrapper[30278]: I0318 18:19:34.835802 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-dns-svc\") pod \"d4a913f3-9113-409f-bddd-65390f556fd2\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " Mar 18 18:19:34.836511 master-0 kubenswrapper[30278]: I0318 18:19:34.836021 30278 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-ovsdbserver-sb\") pod \"d4a913f3-9113-409f-bddd-65390f556fd2\" (UID: \"d4a913f3-9113-409f-bddd-65390f556fd2\") " Mar 18 18:19:34.847656 master-0 kubenswrapper[30278]: I0318 18:19:34.847474 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4a913f3-9113-409f-bddd-65390f556fd2-kube-api-access-z6vz9" (OuterVolumeSpecName: "kube-api-access-z6vz9") pod "d4a913f3-9113-409f-bddd-65390f556fd2" (UID: "d4a913f3-9113-409f-bddd-65390f556fd2"). InnerVolumeSpecName "kube-api-access-z6vz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:19:34.922136 master-0 kubenswrapper[30278]: I0318 18:19:34.918826 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-config" (OuterVolumeSpecName: "config") pod "d4a913f3-9113-409f-bddd-65390f556fd2" (UID: "d4a913f3-9113-409f-bddd-65390f556fd2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:19:34.947119 master-0 kubenswrapper[30278]: I0318 18:19:34.947014 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d4a913f3-9113-409f-bddd-65390f556fd2" (UID: "d4a913f3-9113-409f-bddd-65390f556fd2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:19:34.953423 master-0 kubenswrapper[30278]: I0318 18:19:34.953344 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d4a913f3-9113-409f-bddd-65390f556fd2" (UID: "d4a913f3-9113-409f-bddd-65390f556fd2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:19:34.956107 master-0 kubenswrapper[30278]: I0318 18:19:34.955912 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d4a913f3-9113-409f-bddd-65390f556fd2" (UID: "d4a913f3-9113-409f-bddd-65390f556fd2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:19:34.957404 master-0 kubenswrapper[30278]: I0318 18:19:34.957250 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:34.957404 master-0 kubenswrapper[30278]: I0318 18:19:34.957293 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6vz9\" (UniqueName: \"kubernetes.io/projected/d4a913f3-9113-409f-bddd-65390f556fd2-kube-api-access-z6vz9\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:34.957404 master-0 kubenswrapper[30278]: I0318 18:19:34.957314 30278 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:34.957404 master-0 kubenswrapper[30278]: I0318 18:19:34.957328 30278 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:34.957404 master-0 kubenswrapper[30278]: I0318 18:19:34.957338 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:34.988402 master-0 kubenswrapper[30278]: I0318 18:19:34.988246 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d4a913f3-9113-409f-bddd-65390f556fd2" (UID: "d4a913f3-9113-409f-bddd-65390f556fd2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:19:35.067413 master-0 kubenswrapper[30278]: I0318 18:19:35.066097 30278 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d4a913f3-9113-409f-bddd-65390f556fd2-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:35.104544 master-0 kubenswrapper[30278]: I0318 18:19:35.104464 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-external-api-0" event={"ID":"977956f8-854b-4c87-9485-c67f2be25e4c","Type":"ContainerStarted","Data":"9bde390825853fa8bc84374619f4e1229ec06f662a37380bbbe5265e63a1c43f"} Mar 18 18:19:35.105914 master-0 kubenswrapper[30278]: I0318 18:19:35.105842 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" event={"ID":"d4a913f3-9113-409f-bddd-65390f556fd2","Type":"ContainerDied","Data":"511f668727545fed9c9fcb998bea80ea5c3529d934686802ddc390716b046dd4"} Mar 18 18:19:35.105986 master-0 kubenswrapper[30278]: I0318 18:19:35.105930 30278 scope.go:117] "RemoveContainer" containerID="f68027f51845209e98c591538e0bb1d45a061a07370b8fe1f51a8ab90a7e1977" Mar 18 18:19:35.106050 
master-0 kubenswrapper[30278]: I0318 18:19:35.105961 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-97cb45bf9-q6h4g" Mar 18 18:19:35.175646 master-0 kubenswrapper[30278]: I0318 18:19:35.175573 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-volume-lvm-iscsi-0"] Mar 18 18:19:35.196748 master-0 kubenswrapper[30278]: I0318 18:19:35.183839 30278 scope.go:117] "RemoveContainer" containerID="25d839eaf37c8784a412280dfb11d9917cd5ad33a8209d1f8aa0b357baaee826" Mar 18 18:19:35.215850 master-0 kubenswrapper[30278]: I0318 18:19:35.214679 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-scheduler-0"] Mar 18 18:19:35.253922 master-0 kubenswrapper[30278]: I0318 18:19:35.253868 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-97cb45bf9-q6h4g"] Mar 18 18:19:35.267468 master-0 kubenswrapper[30278]: I0318 18:19:35.266686 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-97cb45bf9-q6h4g"] Mar 18 18:19:35.364406 master-0 kubenswrapper[30278]: I0318 18:19:35.362259 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65f9768575-656gb"] Mar 18 18:19:35.457344 master-0 kubenswrapper[30278]: W0318 18:19:35.456877 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbecba0cb_b638_43c2_af99_4269efec025f.slice/crio-5af7f35c19ede10622e868d083978bb44c7e2639e0ba7db5b6f752acfea890ed WatchSource:0}: Error finding container 5af7f35c19ede10622e868d083978bb44c7e2639e0ba7db5b6f752acfea890ed: Status 404 returned error can't find the container with id 5af7f35c19ede10622e868d083978bb44c7e2639e0ba7db5b6f752acfea890ed Mar 18 18:19:35.465264 master-0 kubenswrapper[30278]: I0318 18:19:35.464205 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-backup-0"] Mar 18 
18:19:35.685595 master-0 kubenswrapper[30278]: I0318 18:19:35.685520 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-api-0"] Mar 18 18:19:35.852727 master-0 kubenswrapper[30278]: I0318 18:19:35.852191 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b9df6-api-0"] Mar 18 18:19:36.059513 master-0 kubenswrapper[30278]: I0318 18:19:36.057199 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-7kvlq" Mar 18 18:19:36.140969 master-0 kubenswrapper[30278]: I0318 18:19:36.128920 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8gqc\" (UniqueName: \"kubernetes.io/projected/c5b88faf-e795-428e-8c3b-5a81d27c4a63-kube-api-access-d8gqc\") pod \"c5b88faf-e795-428e-8c3b-5a81d27c4a63\" (UID: \"c5b88faf-e795-428e-8c3b-5a81d27c4a63\") " Mar 18 18:19:36.140969 master-0 kubenswrapper[30278]: I0318 18:19:36.130173 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5b88faf-e795-428e-8c3b-5a81d27c4a63-config\") pod \"c5b88faf-e795-428e-8c3b-5a81d27c4a63\" (UID: \"c5b88faf-e795-428e-8c3b-5a81d27c4a63\") " Mar 18 18:19:36.140969 master-0 kubenswrapper[30278]: I0318 18:19:36.130980 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5b88faf-e795-428e-8c3b-5a81d27c4a63-combined-ca-bundle\") pod \"c5b88faf-e795-428e-8c3b-5a81d27c4a63\" (UID: \"c5b88faf-e795-428e-8c3b-5a81d27c4a63\") " Mar 18 18:19:36.140969 master-0 kubenswrapper[30278]: I0318 18:19:36.133583 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b88faf-e795-428e-8c3b-5a81d27c4a63-kube-api-access-d8gqc" (OuterVolumeSpecName: "kube-api-access-d8gqc") pod "c5b88faf-e795-428e-8c3b-5a81d27c4a63" (UID: 
"c5b88faf-e795-428e-8c3b-5a81d27c4a63"). InnerVolumeSpecName "kube-api-access-d8gqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:19:36.140969 master-0 kubenswrapper[30278]: I0318 18:19:36.138771 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-scheduler-0" event={"ID":"dc87ffe2-a115-459b-a5b1-87c747b1df2a","Type":"ContainerStarted","Data":"892bad1d70ad2453cff505cfd66860d3bd14099acd80b9afdf4006d4dd3becd9"} Mar 18 18:19:36.177800 master-0 kubenswrapper[30278]: I0318 18:19:36.177656 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7kvlq" event={"ID":"c5b88faf-e795-428e-8c3b-5a81d27c4a63","Type":"ContainerDied","Data":"17e0c1cebd9bfb0a798eefa9aff161a9086970ef11dc3ee4c84558885f39d039"} Mar 18 18:19:36.177800 master-0 kubenswrapper[30278]: I0318 18:19:36.177718 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e0c1cebd9bfb0a798eefa9aff161a9086970ef11dc3ee4c84558885f39d039" Mar 18 18:19:36.177800 master-0 kubenswrapper[30278]: I0318 18:19:36.177781 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-7kvlq" Mar 18 18:19:36.183132 master-0 kubenswrapper[30278]: I0318 18:19:36.181347 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-external-api-0" event={"ID":"977956f8-854b-4c87-9485-c67f2be25e4c","Type":"ContainerStarted","Data":"601bbe39186558f953385a8cf55b5d14e96586d54733d7ee0664550e6f44b211"} Mar 18 18:19:36.184541 master-0 kubenswrapper[30278]: I0318 18:19:36.183646 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-api-0" event={"ID":"02c0bb6e-e750-41b0-8b7b-afb80c5293af","Type":"ContainerStarted","Data":"1fa9335e400bc7897a1a472ff36621ea5eca2394fd78749b6459b55ebf5141d1"} Mar 18 18:19:36.192513 master-0 kubenswrapper[30278]: I0318 18:19:36.192446 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5b88faf-e795-428e-8c3b-5a81d27c4a63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5b88faf-e795-428e-8c3b-5a81d27c4a63" (UID: "c5b88faf-e795-428e-8c3b-5a81d27c4a63"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:36.197063 master-0 kubenswrapper[30278]: I0318 18:19:36.196993 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" event={"ID":"cbe0e9fb-60fc-4ada-ad1a-014ee622d073","Type":"ContainerStarted","Data":"dde23f94f42e6e7fb93d517cbb3066a90c45983dd99f30857d20836e6a55e1b5"} Mar 18 18:19:36.201656 master-0 kubenswrapper[30278]: I0318 18:19:36.201610 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-backup-0" event={"ID":"becba0cb-b638-43c2-af99-4269efec025f","Type":"ContainerStarted","Data":"5af7f35c19ede10622e868d083978bb44c7e2639e0ba7db5b6f752acfea890ed"} Mar 18 18:19:36.223166 master-0 kubenswrapper[30278]: I0318 18:19:36.222124 30278 generic.go:334] "Generic (PLEG): container finished" podID="eb9a9407-6790-44a8-8e7d-fa95e4e42bdc" containerID="f0e4a6ccbb9489996f6b30452c31aa977efa84ff68518bd03c506bbd5ae87a9e" exitCode=0 Mar 18 18:19:36.223166 master-0 kubenswrapper[30278]: I0318 18:19:36.222195 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f9768575-656gb" event={"ID":"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc","Type":"ContainerDied","Data":"f0e4a6ccbb9489996f6b30452c31aa977efa84ff68518bd03c506bbd5ae87a9e"} Mar 18 18:19:36.223166 master-0 kubenswrapper[30278]: I0318 18:19:36.222230 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f9768575-656gb" event={"ID":"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc","Type":"ContainerStarted","Data":"8ee04943a65861d663691398a40108a49e4d2a1771a6ca12577e769c11676f7d"} Mar 18 18:19:36.223449 master-0 kubenswrapper[30278]: I0318 18:19:36.223125 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-824c8-default-external-api-0" podStartSLOduration=6.223100579 podStartE2EDuration="6.223100579s" podCreationTimestamp="2026-03-18 18:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:36.210152071 +0000 UTC m=+1145.377336676" watchObservedRunningTime="2026-03-18 18:19:36.223100579 +0000 UTC m=+1145.390285174" Mar 18 18:19:36.235596 master-0 kubenswrapper[30278]: I0318 18:19:36.235565 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8gqc\" (UniqueName: \"kubernetes.io/projected/c5b88faf-e795-428e-8c3b-5a81d27c4a63-kube-api-access-d8gqc\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:36.235844 master-0 kubenswrapper[30278]: I0318 18:19:36.235736 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5b88faf-e795-428e-8c3b-5a81d27c4a63-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:36.264585 master-0 kubenswrapper[30278]: I0318 18:19:36.264513 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5b88faf-e795-428e-8c3b-5a81d27c4a63-config" (OuterVolumeSpecName: "config") pod "c5b88faf-e795-428e-8c3b-5a81d27c4a63" (UID: "c5b88faf-e795-428e-8c3b-5a81d27c4a63"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:36.339053 master-0 kubenswrapper[30278]: I0318 18:19:36.338998 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5b88faf-e795-428e-8c3b-5a81d27c4a63-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:37.082811 master-0 kubenswrapper[30278]: I0318 18:19:37.082727 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4a913f3-9113-409f-bddd-65390f556fd2" path="/var/lib/kubelet/pods/d4a913f3-9113-409f-bddd-65390f556fd2/volumes" Mar 18 18:19:37.286148 master-0 kubenswrapper[30278]: I0318 18:19:37.285748 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-api-0" event={"ID":"02c0bb6e-e750-41b0-8b7b-afb80c5293af","Type":"ContainerStarted","Data":"0fe42852e6c7d741b6e8a65082acb652743d86ad493d8e163044d213b996e225"} Mar 18 18:19:37.302386 master-0 kubenswrapper[30278]: I0318 18:19:37.301759 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f9768575-656gb" event={"ID":"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc","Type":"ContainerStarted","Data":"46838e177af19fce1d6f447384e5cf0b6eda185a99a202c32c5511c1d0b73095"} Mar 18 18:19:37.363810 master-0 kubenswrapper[30278]: I0318 18:19:37.363694 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65f9768575-656gb" podStartSLOduration=4.363651989 podStartE2EDuration="4.363651989s" podCreationTimestamp="2026-03-18 18:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:37.35286367 +0000 UTC m=+1146.520048265" watchObservedRunningTime="2026-03-18 18:19:37.363651989 +0000 UTC m=+1146.530836584" Mar 18 18:19:37.462300 master-0 kubenswrapper[30278]: I0318 18:19:37.462215 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f9768575-656gb"] Mar 
18 18:19:37.558243 master-0 kubenswrapper[30278]: I0318 18:19:37.558114 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c894db6df-849s7"] Mar 18 18:19:37.558908 master-0 kubenswrapper[30278]: E0318 18:19:37.558883 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a913f3-9113-409f-bddd-65390f556fd2" containerName="dnsmasq-dns" Mar 18 18:19:37.558908 master-0 kubenswrapper[30278]: I0318 18:19:37.558907 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a913f3-9113-409f-bddd-65390f556fd2" containerName="dnsmasq-dns" Mar 18 18:19:37.558995 master-0 kubenswrapper[30278]: E0318 18:19:37.558962 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4a913f3-9113-409f-bddd-65390f556fd2" containerName="init" Mar 18 18:19:37.558995 master-0 kubenswrapper[30278]: I0318 18:19:37.558971 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4a913f3-9113-409f-bddd-65390f556fd2" containerName="init" Mar 18 18:19:37.559095 master-0 kubenswrapper[30278]: E0318 18:19:37.558996 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b88faf-e795-428e-8c3b-5a81d27c4a63" containerName="neutron-db-sync" Mar 18 18:19:37.559095 master-0 kubenswrapper[30278]: I0318 18:19:37.559004 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b88faf-e795-428e-8c3b-5a81d27c4a63" containerName="neutron-db-sync" Mar 18 18:19:37.560056 master-0 kubenswrapper[30278]: I0318 18:19:37.560027 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4a913f3-9113-409f-bddd-65390f556fd2" containerName="dnsmasq-dns" Mar 18 18:19:37.560115 master-0 kubenswrapper[30278]: I0318 18:19:37.560091 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5b88faf-e795-428e-8c3b-5a81d27c4a63" containerName="neutron-db-sync" Mar 18 18:19:37.564875 master-0 kubenswrapper[30278]: I0318 18:19:37.563497 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.632587 master-0 kubenswrapper[30278]: I0318 18:19:37.632524 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-config\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.632840 master-0 kubenswrapper[30278]: I0318 18:19:37.632688 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-dns-svc\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.632840 master-0 kubenswrapper[30278]: I0318 18:19:37.632753 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-ovsdbserver-sb\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.632902 master-0 kubenswrapper[30278]: I0318 18:19:37.632840 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v65r4\" (UniqueName: \"kubernetes.io/projected/4b1a145b-099e-49a1-b32c-31ce823b9ec9-kube-api-access-v65r4\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.632933 master-0 kubenswrapper[30278]: I0318 18:19:37.632902 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-dns-swift-storage-0\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.632987 master-0 kubenswrapper[30278]: I0318 18:19:37.632932 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-ovsdbserver-nb\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.664530 master-0 kubenswrapper[30278]: I0318 18:19:37.650552 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c894db6df-849s7"] Mar 18 18:19:37.796928 master-0 kubenswrapper[30278]: I0318 18:19:37.796859 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-dns-swift-storage-0\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.796928 master-0 kubenswrapper[30278]: I0318 18:19:37.796933 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-ovsdbserver-nb\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.797214 master-0 kubenswrapper[30278]: I0318 18:19:37.797082 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-config\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " 
pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.797454 master-0 kubenswrapper[30278]: I0318 18:19:37.797427 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-dns-svc\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.800582 master-0 kubenswrapper[30278]: I0318 18:19:37.800535 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-dns-swift-storage-0\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.801715 master-0 kubenswrapper[30278]: I0318 18:19:37.801673 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-ovsdbserver-nb\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.802302 master-0 kubenswrapper[30278]: I0318 18:19:37.802233 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-ovsdbserver-sb\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.802452 master-0 kubenswrapper[30278]: I0318 18:19:37.802425 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-dns-svc\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " 
pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.806199 master-0 kubenswrapper[30278]: I0318 18:19:37.806140 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v65r4\" (UniqueName: \"kubernetes.io/projected/4b1a145b-099e-49a1-b32c-31ce823b9ec9-kube-api-access-v65r4\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.806382 master-0 kubenswrapper[30278]: I0318 18:19:37.806338 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-config\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.809076 master-0 kubenswrapper[30278]: I0318 18:19:37.809034 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-ovsdbserver-sb\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.826590 master-0 kubenswrapper[30278]: I0318 18:19:37.826524 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-594bd7cb-dvb64"] Mar 18 18:19:37.829090 master-0 kubenswrapper[30278]: I0318 18:19:37.829038 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v65r4\" (UniqueName: \"kubernetes.io/projected/4b1a145b-099e-49a1-b32c-31ce823b9ec9-kube-api-access-v65r4\") pod \"dnsmasq-dns-7c894db6df-849s7\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:37.843150 master-0 kubenswrapper[30278]: I0318 18:19:37.843085 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:37.847526 master-0 kubenswrapper[30278]: I0318 18:19:37.846845 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 18 18:19:37.847526 master-0 kubenswrapper[30278]: I0318 18:19:37.847327 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 18 18:19:37.861465 master-0 kubenswrapper[30278]: I0318 18:19:37.860088 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 18 18:19:37.918182 master-0 kubenswrapper[30278]: I0318 18:19:37.918120 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-594bd7cb-dvb64"] Mar 18 18:19:37.918591 master-0 kubenswrapper[30278]: I0318 18:19:37.918526 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:38.045324 master-0 kubenswrapper[30278]: I0318 18:19:38.032721 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp5f6\" (UniqueName: \"kubernetes.io/projected/a8d16e57-7093-4361-bdda-ecd48ea1328f-kube-api-access-xp5f6\") pod \"neutron-594bd7cb-dvb64\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.045324 master-0 kubenswrapper[30278]: I0318 18:19:38.032845 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-config\") pod \"neutron-594bd7cb-dvb64\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.045324 master-0 kubenswrapper[30278]: I0318 18:19:38.032955 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-httpd-config\") pod \"neutron-594bd7cb-dvb64\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.045324 master-0 kubenswrapper[30278]: I0318 18:19:38.032991 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-combined-ca-bundle\") pod \"neutron-594bd7cb-dvb64\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.045324 master-0 kubenswrapper[30278]: I0318 18:19:38.033151 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-ovndb-tls-certs\") pod \"neutron-594bd7cb-dvb64\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.139238 master-0 kubenswrapper[30278]: I0318 18:19:38.139153 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp5f6\" (UniqueName: \"kubernetes.io/projected/a8d16e57-7093-4361-bdda-ecd48ea1328f-kube-api-access-xp5f6\") pod \"neutron-594bd7cb-dvb64\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.139896 master-0 kubenswrapper[30278]: I0318 18:19:38.139319 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-config\") pod \"neutron-594bd7cb-dvb64\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.139896 master-0 kubenswrapper[30278]: I0318 18:19:38.139528 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-httpd-config\") pod \"neutron-594bd7cb-dvb64\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.139896 master-0 kubenswrapper[30278]: I0318 18:19:38.139580 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-combined-ca-bundle\") pod \"neutron-594bd7cb-dvb64\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.139896 master-0 kubenswrapper[30278]: I0318 18:19:38.139804 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-ovndb-tls-certs\") pod \"neutron-594bd7cb-dvb64\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.155054 master-0 kubenswrapper[30278]: I0318 18:19:38.153678 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:38.155054 master-0 kubenswrapper[30278]: I0318 18:19:38.153749 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:38.178101 master-0 kubenswrapper[30278]: I0318 18:19:38.169020 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-httpd-config\") pod \"neutron-594bd7cb-dvb64\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.178101 master-0 kubenswrapper[30278]: I0318 18:19:38.173195 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp5f6\" (UniqueName: 
\"kubernetes.io/projected/a8d16e57-7093-4361-bdda-ecd48ea1328f-kube-api-access-xp5f6\") pod \"neutron-594bd7cb-dvb64\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.178101 master-0 kubenswrapper[30278]: I0318 18:19:38.176939 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-config\") pod \"neutron-594bd7cb-dvb64\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.181339 master-0 kubenswrapper[30278]: I0318 18:19:38.180556 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-ovndb-tls-certs\") pod \"neutron-594bd7cb-dvb64\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.185411 master-0 kubenswrapper[30278]: I0318 18:19:38.185369 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-combined-ca-bundle\") pod \"neutron-594bd7cb-dvb64\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.198541 master-0 kubenswrapper[30278]: I0318 18:19:38.198469 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:19:38.309821 master-0 kubenswrapper[30278]: I0318 18:19:38.309738 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:38.360726 master-0 kubenswrapper[30278]: I0318 18:19:38.360398 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:38.446057 master-0 kubenswrapper[30278]: I0318 18:19:38.444984 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" event={"ID":"cbe0e9fb-60fc-4ada-ad1a-014ee622d073","Type":"ContainerStarted","Data":"5f908b9eac52cdcee4d6db989497be64a8362ef172c24b716bb915c3b7b3759f"} Mar 18 18:19:38.525847 master-0 kubenswrapper[30278]: I0318 18:19:38.496683 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-backup-0" event={"ID":"becba0cb-b638-43c2-af99-4269efec025f","Type":"ContainerStarted","Data":"5f752524153596ac50ab6daedf229a8bfa1068017f02d13516e4cd57d730d34a"} Mar 18 18:19:38.525847 master-0 kubenswrapper[30278]: I0318 18:19:38.525599 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-scheduler-0" event={"ID":"dc87ffe2-a115-459b-a5b1-87c747b1df2a","Type":"ContainerStarted","Data":"9a02e62a3e62d8fb9454fb31b0378519a508bf2684f327619897e3838d76217b"} Mar 18 18:19:38.545257 master-0 kubenswrapper[30278]: I0318 18:19:38.537582 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:38.545257 master-0 kubenswrapper[30278]: I0318 18:19:38.537636 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:38.545257 master-0 kubenswrapper[30278]: I0318 18:19:38.537645 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:38.741071 master-0 kubenswrapper[30278]: I0318 18:19:38.738373 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c894db6df-849s7"] Mar 18 18:19:39.196588 master-0 kubenswrapper[30278]: I0318 18:19:39.195417 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-594bd7cb-dvb64"] Mar 18 18:19:39.572322 master-0 kubenswrapper[30278]: I0318 18:19:39.572081 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-594bd7cb-dvb64" event={"ID":"a8d16e57-7093-4361-bdda-ecd48ea1328f","Type":"ContainerStarted","Data":"f438a24dfc9cf86889066bea19b111988a9729c48a502ae0a568d1c6bb1211ad"} Mar 18 18:19:39.585265 master-0 kubenswrapper[30278]: I0318 18:19:39.582427 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-api-0" event={"ID":"02c0bb6e-e750-41b0-8b7b-afb80c5293af","Type":"ContainerStarted","Data":"c877b75290c06ba958516d80c6ff9a8cccb2594be6bd80c1ef692354416b2e44"} Mar 18 18:19:39.585265 master-0 kubenswrapper[30278]: I0318 18:19:39.582646 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b9df6-api-0" podUID="02c0bb6e-e750-41b0-8b7b-afb80c5293af" containerName="cinder-b9df6-api-log" containerID="cri-o://0fe42852e6c7d741b6e8a65082acb652743d86ad493d8e163044d213b996e225" gracePeriod=30 Mar 18 18:19:39.585265 master-0 kubenswrapper[30278]: I0318 18:19:39.583004 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:39.585265 master-0 kubenswrapper[30278]: I0318 18:19:39.583491 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b9df6-api-0" podUID="02c0bb6e-e750-41b0-8b7b-afb80c5293af" containerName="cinder-api" containerID="cri-o://c877b75290c06ba958516d80c6ff9a8cccb2594be6bd80c1ef692354416b2e44" gracePeriod=30 Mar 18 18:19:39.603221 master-0 
kubenswrapper[30278]: I0318 18:19:39.603124 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" event={"ID":"cbe0e9fb-60fc-4ada-ad1a-014ee622d073","Type":"ContainerStarted","Data":"c2179651cf320be9b2cbf9de707a3d441bcdc8c7688856dfc73060240635fe0a"} Mar 18 18:19:39.614349 master-0 kubenswrapper[30278]: I0318 18:19:39.613810 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b9df6-api-0" podStartSLOduration=6.613782127 podStartE2EDuration="6.613782127s" podCreationTimestamp="2026-03-18 18:19:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:39.605429352 +0000 UTC m=+1148.772613957" watchObservedRunningTime="2026-03-18 18:19:39.613782127 +0000 UTC m=+1148.780966712" Mar 18 18:19:39.629744 master-0 kubenswrapper[30278]: I0318 18:19:39.627942 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-backup-0" event={"ID":"becba0cb-b638-43c2-af99-4269efec025f","Type":"ContainerStarted","Data":"e8b243fa279883bc52898f8f5369679ed1db473712e3d62cce59c92571df19ab"} Mar 18 18:19:39.631321 master-0 kubenswrapper[30278]: I0318 18:19:39.631036 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c894db6df-849s7" event={"ID":"4b1a145b-099e-49a1-b32c-31ce823b9ec9","Type":"ContainerStarted","Data":"bce8cd631508aa3523c8beff1c7dd1b2cc84219bc94d36f929afca72d950027c"} Mar 18 18:19:39.632020 master-0 kubenswrapper[30278]: I0318 18:19:39.631964 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65f9768575-656gb" podUID="eb9a9407-6790-44a8-8e7d-fa95e4e42bdc" containerName="dnsmasq-dns" containerID="cri-o://46838e177af19fce1d6f447384e5cf0b6eda185a99a202c32c5511c1d0b73095" gracePeriod=10 Mar 18 18:19:39.665756 master-0 kubenswrapper[30278]: I0318 18:19:39.665516 30278 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" podStartSLOduration=5.034108358 podStartE2EDuration="6.665489969s" podCreationTimestamp="2026-03-18 18:19:33 +0000 UTC" firstStartedPulling="2026-03-18 18:19:35.186440067 +0000 UTC m=+1144.353624662" lastFinishedPulling="2026-03-18 18:19:36.817821668 +0000 UTC m=+1145.985006273" observedRunningTime="2026-03-18 18:19:39.656571399 +0000 UTC m=+1148.823755994" watchObservedRunningTime="2026-03-18 18:19:39.665489969 +0000 UTC m=+1148.832674564" Mar 18 18:19:39.734310 master-0 kubenswrapper[30278]: I0318 18:19:39.732692 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b9df6-backup-0" podStartSLOduration=5.094123705 podStartE2EDuration="6.732665268s" podCreationTimestamp="2026-03-18 18:19:33 +0000 UTC" firstStartedPulling="2026-03-18 18:19:35.460633913 +0000 UTC m=+1144.627818508" lastFinishedPulling="2026-03-18 18:19:37.099175486 +0000 UTC m=+1146.266360071" observedRunningTime="2026-03-18 18:19:39.703958125 +0000 UTC m=+1148.871142720" watchObservedRunningTime="2026-03-18 18:19:39.732665268 +0000 UTC m=+1148.899849863" Mar 18 18:19:40.354367 master-0 kubenswrapper[30278]: I0318 18:19:40.353915 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65f9768575-656gb" Mar 18 18:19:40.500826 master-0 kubenswrapper[30278]: I0318 18:19:40.490033 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-ovsdbserver-sb\") pod \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " Mar 18 18:19:40.500826 master-0 kubenswrapper[30278]: I0318 18:19:40.490142 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-dns-svc\") pod \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " Mar 18 18:19:40.500826 master-0 kubenswrapper[30278]: I0318 18:19:40.490203 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4662\" (UniqueName: \"kubernetes.io/projected/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-kube-api-access-v4662\") pod \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " Mar 18 18:19:40.500826 master-0 kubenswrapper[30278]: I0318 18:19:40.490227 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-config\") pod \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " Mar 18 18:19:40.500826 master-0 kubenswrapper[30278]: I0318 18:19:40.490296 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-dns-swift-storage-0\") pod \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") " Mar 18 18:19:40.500826 master-0 kubenswrapper[30278]: I0318 18:19:40.490359 30278 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-ovsdbserver-nb\") pod \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\" (UID: \"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc\") "
Mar 18 18:19:40.535299 master-0 kubenswrapper[30278]: I0318 18:19:40.535215 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5776b66b45-w6n4j"]
Mar 18 18:19:40.536064 master-0 kubenswrapper[30278]: E0318 18:19:40.536035 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9a9407-6790-44a8-8e7d-fa95e4e42bdc" containerName="init"
Mar 18 18:19:40.536064 master-0 kubenswrapper[30278]: I0318 18:19:40.536061 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9a9407-6790-44a8-8e7d-fa95e4e42bdc" containerName="init"
Mar 18 18:19:40.536176 master-0 kubenswrapper[30278]: E0318 18:19:40.536092 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9a9407-6790-44a8-8e7d-fa95e4e42bdc" containerName="dnsmasq-dns"
Mar 18 18:19:40.536176 master-0 kubenswrapper[30278]: I0318 18:19:40.536101 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9a9407-6790-44a8-8e7d-fa95e4e42bdc" containerName="dnsmasq-dns"
Mar 18 18:19:40.536564 master-0 kubenswrapper[30278]: I0318 18:19:40.536445 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9a9407-6790-44a8-8e7d-fa95e4e42bdc" containerName="dnsmasq-dns"
Mar 18 18:19:40.538193 master-0 kubenswrapper[30278]: I0318 18:19:40.538156 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.547300 master-0 kubenswrapper[30278]: I0318 18:19:40.546721 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Mar 18 18:19:40.547300 master-0 kubenswrapper[30278]: I0318 18:19:40.547045 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Mar 18 18:19:40.566319 master-0 kubenswrapper[30278]: I0318 18:19:40.555590 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5776b66b45-w6n4j"]
Mar 18 18:19:40.602411 master-0 kubenswrapper[30278]: I0318 18:19:40.593483 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-httpd-config\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.602411 master-0 kubenswrapper[30278]: I0318 18:19:40.593585 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7j2k\" (UniqueName: \"kubernetes.io/projected/f0ecd562-b219-44d6-b27a-99af0ae48f35-kube-api-access-z7j2k\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.602411 master-0 kubenswrapper[30278]: I0318 18:19:40.593630 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-internal-tls-certs\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.602411 master-0 kubenswrapper[30278]: I0318 18:19:40.593662 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-public-tls-certs\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.602411 master-0 kubenswrapper[30278]: I0318 18:19:40.593696 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-combined-ca-bundle\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.602411 master-0 kubenswrapper[30278]: I0318 18:19:40.593709 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-ovndb-tls-certs\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.602411 master-0 kubenswrapper[30278]: I0318 18:19:40.593762 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-config\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.602411 master-0 kubenswrapper[30278]: I0318 18:19:40.599058 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-kube-api-access-v4662" (OuterVolumeSpecName: "kube-api-access-v4662") pod "eb9a9407-6790-44a8-8e7d-fa95e4e42bdc" (UID: "eb9a9407-6790-44a8-8e7d-fa95e4e42bdc"). InnerVolumeSpecName "kube-api-access-v4662". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:19:40.665426 master-0 kubenswrapper[30278]: I0318 18:19:40.659167 30278 generic.go:334] "Generic (PLEG): container finished" podID="eb9a9407-6790-44a8-8e7d-fa95e4e42bdc" containerID="46838e177af19fce1d6f447384e5cf0b6eda185a99a202c32c5511c1d0b73095" exitCode=0
Mar 18 18:19:40.665426 master-0 kubenswrapper[30278]: I0318 18:19:40.659321 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f9768575-656gb" event={"ID":"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc","Type":"ContainerDied","Data":"46838e177af19fce1d6f447384e5cf0b6eda185a99a202c32c5511c1d0b73095"}
Mar 18 18:19:40.665426 master-0 kubenswrapper[30278]: I0318 18:19:40.659358 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f9768575-656gb" event={"ID":"eb9a9407-6790-44a8-8e7d-fa95e4e42bdc","Type":"ContainerDied","Data":"8ee04943a65861d663691398a40108a49e4d2a1771a6ca12577e769c11676f7d"}
Mar 18 18:19:40.665426 master-0 kubenswrapper[30278]: I0318 18:19:40.659394 30278 scope.go:117] "RemoveContainer" containerID="46838e177af19fce1d6f447384e5cf0b6eda185a99a202c32c5511c1d0b73095"
Mar 18 18:19:40.665426 master-0 kubenswrapper[30278]: I0318 18:19:40.659610 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f9768575-656gb"
Mar 18 18:19:40.695962 master-0 kubenswrapper[30278]: I0318 18:19:40.694991 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-594bd7cb-dvb64" event={"ID":"a8d16e57-7093-4361-bdda-ecd48ea1328f","Type":"ContainerStarted","Data":"b5dcd73154a049e80ab13b2eb80bcf7481b7aceaf5de5b0d4df0bed066bb9647"}
Mar 18 18:19:40.701543 master-0 kubenswrapper[30278]: I0318 18:19:40.700854 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-httpd-config\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.701543 master-0 kubenswrapper[30278]: I0318 18:19:40.701073 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7j2k\" (UniqueName: \"kubernetes.io/projected/f0ecd562-b219-44d6-b27a-99af0ae48f35-kube-api-access-z7j2k\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.701543 master-0 kubenswrapper[30278]: I0318 18:19:40.701142 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-internal-tls-certs\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.705372 master-0 kubenswrapper[30278]: I0318 18:19:40.703005 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-public-tls-certs\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.705372 master-0 kubenswrapper[30278]: I0318 18:19:40.703075 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-combined-ca-bundle\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.705372 master-0 kubenswrapper[30278]: I0318 18:19:40.703101 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-ovndb-tls-certs\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.705372 master-0 kubenswrapper[30278]: I0318 18:19:40.703205 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-config\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.705372 master-0 kubenswrapper[30278]: I0318 18:19:40.703584 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4662\" (UniqueName: \"kubernetes.io/projected/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-kube-api-access-v4662\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:40.725468 master-0 kubenswrapper[30278]: I0318 18:19:40.723976 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-scheduler-0" event={"ID":"dc87ffe2-a115-459b-a5b1-87c747b1df2a","Type":"ContainerStarted","Data":"a1fbdaacf468e074ed53a5c6fd3ee74d06779d8590f5a96d747c9e4f020d50ce"}
Mar 18 18:19:40.735318 master-0 kubenswrapper[30278]: I0318 18:19:40.728687 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-config\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.760301 master-0 kubenswrapper[30278]: I0318 18:19:40.758248 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-internal-tls-certs\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.772891 master-0 kubenswrapper[30278]: I0318 18:19:40.768524 30278 scope.go:117] "RemoveContainer" containerID="f0e4a6ccbb9489996f6b30452c31aa977efa84ff68518bd03c506bbd5ae87a9e"
Mar 18 18:19:40.772891 master-0 kubenswrapper[30278]: I0318 18:19:40.770787 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-ovndb-tls-certs\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.778527 master-0 kubenswrapper[30278]: I0318 18:19:40.778479 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-httpd-config\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.802377 master-0 kubenswrapper[30278]: I0318 18:19:40.793043 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7j2k\" (UniqueName: \"kubernetes.io/projected/f0ecd562-b219-44d6-b27a-99af0ae48f35-kube-api-access-z7j2k\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.802826 master-0 kubenswrapper[30278]: I0318 18:19:40.802527 30278 generic.go:334] "Generic (PLEG): container finished" podID="02c0bb6e-e750-41b0-8b7b-afb80c5293af" containerID="c877b75290c06ba958516d80c6ff9a8cccb2594be6bd80c1ef692354416b2e44" exitCode=0
Mar 18 18:19:40.802826 master-0 kubenswrapper[30278]: I0318 18:19:40.802578 30278 generic.go:334] "Generic (PLEG): container finished" podID="02c0bb6e-e750-41b0-8b7b-afb80c5293af" containerID="0fe42852e6c7d741b6e8a65082acb652743d86ad493d8e163044d213b996e225" exitCode=143
Mar 18 18:19:40.802826 master-0 kubenswrapper[30278]: I0318 18:19:40.802676 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-api-0" event={"ID":"02c0bb6e-e750-41b0-8b7b-afb80c5293af","Type":"ContainerDied","Data":"c877b75290c06ba958516d80c6ff9a8cccb2594be6bd80c1ef692354416b2e44"}
Mar 18 18:19:40.802826 master-0 kubenswrapper[30278]: I0318 18:19:40.802714 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-api-0" event={"ID":"02c0bb6e-e750-41b0-8b7b-afb80c5293af","Type":"ContainerDied","Data":"0fe42852e6c7d741b6e8a65082acb652743d86ad493d8e163044d213b996e225"}
Mar 18 18:19:40.809420 master-0 kubenswrapper[30278]: I0318 18:19:40.808605 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-combined-ca-bundle\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.818719 master-0 kubenswrapper[30278]: I0318 18:19:40.818321 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecd562-b219-44d6-b27a-99af0ae48f35-public-tls-certs\") pod \"neutron-5776b66b45-w6n4j\" (UID: \"f0ecd562-b219-44d6-b27a-99af0ae48f35\") " pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:40.841103 master-0 kubenswrapper[30278]: I0318 18:19:40.841011 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b9df6-scheduler-0" podStartSLOduration=6.235378334 podStartE2EDuration="7.840978491s" podCreationTimestamp="2026-03-18 18:19:33 +0000 UTC" firstStartedPulling="2026-03-18 18:19:35.186946891 +0000 UTC m=+1144.354131486" lastFinishedPulling="2026-03-18 18:19:36.792547038 +0000 UTC m=+1145.959731643" observedRunningTime="2026-03-18 18:19:40.795117825 +0000 UTC m=+1149.962302430" watchObservedRunningTime="2026-03-18 18:19:40.840978491 +0000 UTC m=+1150.008163086"
Mar 18 18:19:40.883664 master-0 kubenswrapper[30278]: I0318 18:19:40.881946 30278 generic.go:334] "Generic (PLEG): container finished" podID="4b1a145b-099e-49a1-b32c-31ce823b9ec9" containerID="b17f23a5ee5550f2fec431706d4df8bc8ecaa39de923ea21e0a1506453a069c5" exitCode=0
Mar 18 18:19:40.886059 master-0 kubenswrapper[30278]: I0318 18:19:40.884442 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c894db6df-849s7" event={"ID":"4b1a145b-099e-49a1-b32c-31ce823b9ec9","Type":"ContainerDied","Data":"b17f23a5ee5550f2fec431706d4df8bc8ecaa39de923ea21e0a1506453a069c5"}
Mar 18 18:19:40.886059 master-0 kubenswrapper[30278]: I0318 18:19:40.884474 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 18:19:40.886059 master-0 kubenswrapper[30278]: I0318 18:19:40.884563 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 18:19:40.957328 master-0 kubenswrapper[30278]: I0318 18:19:40.949937 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb9a9407-6790-44a8-8e7d-fa95e4e42bdc" (UID: "eb9a9407-6790-44a8-8e7d-fa95e4e42bdc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:19:40.978438 master-0 kubenswrapper[30278]: I0318 18:19:40.976766 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5776b66b45-w6n4j"
Mar 18 18:19:41.020365 master-0 kubenswrapper[30278]: I0318 18:19:41.017068 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb9a9407-6790-44a8-8e7d-fa95e4e42bdc" (UID: "eb9a9407-6790-44a8-8e7d-fa95e4e42bdc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:19:41.020365 master-0 kubenswrapper[30278]: I0318 18:19:41.017110 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eb9a9407-6790-44a8-8e7d-fa95e4e42bdc" (UID: "eb9a9407-6790-44a8-8e7d-fa95e4e42bdc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:19:41.029743 master-0 kubenswrapper[30278]: I0318 18:19:41.029705 30278 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:41.029743 master-0 kubenswrapper[30278]: I0318 18:19:41.029743 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:41.029894 master-0 kubenswrapper[30278]: I0318 18:19:41.029754 30278 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:41.031435 master-0 kubenswrapper[30278]: I0318 18:19:41.031373 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb9a9407-6790-44a8-8e7d-fa95e4e42bdc" (UID: "eb9a9407-6790-44a8-8e7d-fa95e4e42bdc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:19:41.086589 master-0 kubenswrapper[30278]: I0318 18:19:41.084244 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-config" (OuterVolumeSpecName: "config") pod "eb9a9407-6790-44a8-8e7d-fa95e4e42bdc" (UID: "eb9a9407-6790-44a8-8e7d-fa95e4e42bdc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:19:41.134376 master-0 kubenswrapper[30278]: I0318 18:19:41.134219 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:41.134498 master-0 kubenswrapper[30278]: I0318 18:19:41.134467 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:41.247054 master-0 kubenswrapper[30278]: I0318 18:19:41.245794 30278 scope.go:117] "RemoveContainer" containerID="46838e177af19fce1d6f447384e5cf0b6eda185a99a202c32c5511c1d0b73095"
Mar 18 18:19:41.247789 master-0 kubenswrapper[30278]: E0318 18:19:41.247699 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46838e177af19fce1d6f447384e5cf0b6eda185a99a202c32c5511c1d0b73095\": container with ID starting with 46838e177af19fce1d6f447384e5cf0b6eda185a99a202c32c5511c1d0b73095 not found: ID does not exist" containerID="46838e177af19fce1d6f447384e5cf0b6eda185a99a202c32c5511c1d0b73095"
Mar 18 18:19:41.247789 master-0 kubenswrapper[30278]: I0318 18:19:41.247759 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46838e177af19fce1d6f447384e5cf0b6eda185a99a202c32c5511c1d0b73095"} err="failed to get container status \"46838e177af19fce1d6f447384e5cf0b6eda185a99a202c32c5511c1d0b73095\": rpc error: code = NotFound desc = could not find container \"46838e177af19fce1d6f447384e5cf0b6eda185a99a202c32c5511c1d0b73095\": container with ID starting with 46838e177af19fce1d6f447384e5cf0b6eda185a99a202c32c5511c1d0b73095 not found: ID does not exist"
Mar 18 18:19:41.247789 master-0 kubenswrapper[30278]: I0318 18:19:41.247790 30278 scope.go:117] "RemoveContainer" containerID="f0e4a6ccbb9489996f6b30452c31aa977efa84ff68518bd03c506bbd5ae87a9e"
Mar 18 18:19:41.253338 master-0 kubenswrapper[30278]: E0318 18:19:41.249857 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e4a6ccbb9489996f6b30452c31aa977efa84ff68518bd03c506bbd5ae87a9e\": container with ID starting with f0e4a6ccbb9489996f6b30452c31aa977efa84ff68518bd03c506bbd5ae87a9e not found: ID does not exist" containerID="f0e4a6ccbb9489996f6b30452c31aa977efa84ff68518bd03c506bbd5ae87a9e"
Mar 18 18:19:41.253338 master-0 kubenswrapper[30278]: I0318 18:19:41.249888 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e4a6ccbb9489996f6b30452c31aa977efa84ff68518bd03c506bbd5ae87a9e"} err="failed to get container status \"f0e4a6ccbb9489996f6b30452c31aa977efa84ff68518bd03c506bbd5ae87a9e\": rpc error: code = NotFound desc = could not find container \"f0e4a6ccbb9489996f6b30452c31aa977efa84ff68518bd03c506bbd5ae87a9e\": container with ID starting with f0e4a6ccbb9489996f6b30452c31aa977efa84ff68518bd03c506bbd5ae87a9e not found: ID does not exist"
Mar 18 18:19:41.296809 master-0 kubenswrapper[30278]: I0318 18:19:41.296641 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-api-0"
Mar 18 18:19:41.434588 master-0 kubenswrapper[30278]: I0318 18:19:41.434461 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f9768575-656gb"]
Mar 18 18:19:41.448297 master-0 kubenswrapper[30278]: I0318 18:19:41.448245 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65f9768575-656gb"]
Mar 18 18:19:41.476809 master-0 kubenswrapper[30278]: I0318 18:19:41.472075 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mptjs\" (UniqueName: \"kubernetes.io/projected/02c0bb6e-e750-41b0-8b7b-afb80c5293af-kube-api-access-mptjs\") pod \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") "
Mar 18 18:19:41.476809 master-0 kubenswrapper[30278]: I0318 18:19:41.472181 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/02c0bb6e-e750-41b0-8b7b-afb80c5293af-etc-machine-id\") pod \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") "
Mar 18 18:19:41.476809 master-0 kubenswrapper[30278]: I0318 18:19:41.472288 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02c0bb6e-e750-41b0-8b7b-afb80c5293af-logs\") pod \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") "
Mar 18 18:19:41.476809 master-0 kubenswrapper[30278]: I0318 18:19:41.472361 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-config-data-custom\") pod \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") "
Mar 18 18:19:41.476809 master-0 kubenswrapper[30278]: I0318 18:19:41.472383 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-combined-ca-bundle\") pod \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") "
Mar 18 18:19:41.476809 master-0 kubenswrapper[30278]: I0318 18:19:41.472403 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-config-data\") pod \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") "
Mar 18 18:19:41.476809 master-0 kubenswrapper[30278]: I0318 18:19:41.472429 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-scripts\") pod \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\" (UID: \"02c0bb6e-e750-41b0-8b7b-afb80c5293af\") "
Mar 18 18:19:41.494040 master-0 kubenswrapper[30278]: I0318 18:19:41.480383 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-scripts" (OuterVolumeSpecName: "scripts") pod "02c0bb6e-e750-41b0-8b7b-afb80c5293af" (UID: "02c0bb6e-e750-41b0-8b7b-afb80c5293af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:41.494040 master-0 kubenswrapper[30278]: I0318 18:19:41.484570 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c0bb6e-e750-41b0-8b7b-afb80c5293af-kube-api-access-mptjs" (OuterVolumeSpecName: "kube-api-access-mptjs") pod "02c0bb6e-e750-41b0-8b7b-afb80c5293af" (UID: "02c0bb6e-e750-41b0-8b7b-afb80c5293af"). InnerVolumeSpecName "kube-api-access-mptjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:19:41.494040 master-0 kubenswrapper[30278]: I0318 18:19:41.484608 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02c0bb6e-e750-41b0-8b7b-afb80c5293af-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "02c0bb6e-e750-41b0-8b7b-afb80c5293af" (UID: "02c0bb6e-e750-41b0-8b7b-afb80c5293af"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 18:19:41.494040 master-0 kubenswrapper[30278]: I0318 18:19:41.484760 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c0bb6e-e750-41b0-8b7b-afb80c5293af-logs" (OuterVolumeSpecName: "logs") pod "02c0bb6e-e750-41b0-8b7b-afb80c5293af" (UID: "02c0bb6e-e750-41b0-8b7b-afb80c5293af"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 18:19:41.494040 master-0 kubenswrapper[30278]: I0318 18:19:41.488335 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "02c0bb6e-e750-41b0-8b7b-afb80c5293af" (UID: "02c0bb6e-e750-41b0-8b7b-afb80c5293af"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:41.521343 master-0 kubenswrapper[30278]: I0318 18:19:41.521291 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02c0bb6e-e750-41b0-8b7b-afb80c5293af" (UID: "02c0bb6e-e750-41b0-8b7b-afb80c5293af"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:41.571066 master-0 kubenswrapper[30278]: I0318 18:19:41.569194 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-config-data" (OuterVolumeSpecName: "config-data") pod "02c0bb6e-e750-41b0-8b7b-afb80c5293af" (UID: "02c0bb6e-e750-41b0-8b7b-afb80c5293af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:41.580337 master-0 kubenswrapper[30278]: I0318 18:19:41.576844 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:41.580337 master-0 kubenswrapper[30278]: I0318 18:19:41.576883 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mptjs\" (UniqueName: \"kubernetes.io/projected/02c0bb6e-e750-41b0-8b7b-afb80c5293af-kube-api-access-mptjs\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:41.580337 master-0 kubenswrapper[30278]: I0318 18:19:41.576897 30278 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/02c0bb6e-e750-41b0-8b7b-afb80c5293af-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:41.580337 master-0 kubenswrapper[30278]: I0318 18:19:41.576909 30278 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02c0bb6e-e750-41b0-8b7b-afb80c5293af-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:41.580337 master-0 kubenswrapper[30278]: I0318 18:19:41.576919 30278 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-config-data-custom\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:41.580337 master-0 kubenswrapper[30278]: I0318 18:19:41.576929 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:41.580337 master-0 kubenswrapper[30278]: I0318 18:19:41.576939 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02c0bb6e-e750-41b0-8b7b-afb80c5293af-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:41.811531 master-0 kubenswrapper[30278]: W0318 18:19:41.810915 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0ecd562_b219_44d6_b27a_99af0ae48f35.slice/crio-94ca9a0cd936b40818d1e049c9fe40e93bbd2b132bf62ddbb06a30ba8530c52d WatchSource:0}: Error finding container 94ca9a0cd936b40818d1e049c9fe40e93bbd2b132bf62ddbb06a30ba8530c52d: Status 404 returned error can't find the container with id 94ca9a0cd936b40818d1e049c9fe40e93bbd2b132bf62ddbb06a30ba8530c52d
Mar 18 18:19:41.829334 master-0 kubenswrapper[30278]: I0318 18:19:41.820127 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5776b66b45-w6n4j"]
Mar 18 18:19:41.913952 master-0 kubenswrapper[30278]: I0318 18:19:41.913863 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5776b66b45-w6n4j" event={"ID":"f0ecd562-b219-44d6-b27a-99af0ae48f35","Type":"ContainerStarted","Data":"94ca9a0cd936b40818d1e049c9fe40e93bbd2b132bf62ddbb06a30ba8530c52d"}
Mar 18 18:19:41.926392 master-0 kubenswrapper[30278]: I0318 18:19:41.925103 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-api-0" event={"ID":"02c0bb6e-e750-41b0-8b7b-afb80c5293af","Type":"ContainerDied","Data":"1fa9335e400bc7897a1a472ff36621ea5eca2394fd78749b6459b55ebf5141d1"}
Mar 18 18:19:41.926392 master-0 kubenswrapper[30278]: I0318 18:19:41.925152 30278 scope.go:117] "RemoveContainer" containerID="c877b75290c06ba958516d80c6ff9a8cccb2594be6bd80c1ef692354416b2e44"
Mar 18 18:19:41.926392 master-0 kubenswrapper[30278]: I0318 18:19:41.925301 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-api-0"
Mar 18 18:19:41.953216 master-0 kubenswrapper[30278]: I0318 18:19:41.947885 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c894db6df-849s7" event={"ID":"4b1a145b-099e-49a1-b32c-31ce823b9ec9","Type":"ContainerStarted","Data":"6f27e8c136fb6a3e7fa13efc01810453672b6d84613cd5ea67c9ec948f266cba"}
Mar 18 18:19:41.953216 master-0 kubenswrapper[30278]: I0318 18:19:41.948182 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c894db6df-849s7"
Mar 18 18:19:41.971814 master-0 kubenswrapper[30278]: I0318 18:19:41.961414 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-594bd7cb-dvb64" event={"ID":"a8d16e57-7093-4361-bdda-ecd48ea1328f","Type":"ContainerStarted","Data":"89f9a2f243d56eb15727bacfbebb53635e792bb42d34a4b447dd4b068abbaaaf"}
Mar 18 18:19:41.971814 master-0 kubenswrapper[30278]: I0318 18:19:41.961551 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-594bd7cb-dvb64"
Mar 18 18:19:41.997390 master-0 kubenswrapper[30278]: I0318 18:19:41.994188 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c894db6df-849s7" podStartSLOduration=4.994062028 podStartE2EDuration="4.994062028s" podCreationTimestamp="2026-03-18 18:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:41.98149799 +0000 UTC m=+1151.148682595" watchObservedRunningTime="2026-03-18 18:19:41.994062028 +0000 UTC m=+1151.161246633"
Mar 18 18:19:42.056884 master-0 kubenswrapper[30278]: I0318 18:19:42.056833 30278 scope.go:117] "RemoveContainer" containerID="0fe42852e6c7d741b6e8a65082acb652743d86ad493d8e163044d213b996e225"
Mar 18 18:19:42.080121 master-0 kubenswrapper[30278]: I0318 18:19:42.079992 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-594bd7cb-dvb64" podStartSLOduration=5.079966592 podStartE2EDuration="5.079966592s" podCreationTimestamp="2026-03-18 18:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:42.006868833 +0000 UTC m=+1151.174053428" watchObservedRunningTime="2026-03-18 18:19:42.079966592 +0000 UTC m=+1151.247151187"
Mar 18 18:19:42.140186 master-0 kubenswrapper[30278]: I0318 18:19:42.140060 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b9df6-api-0"]
Mar 18 18:19:42.173910 master-0 kubenswrapper[30278]: I0318 18:19:42.173770 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b9df6-api-0"]
Mar 18 18:19:42.218190 master-0 kubenswrapper[30278]: I0318 18:19:42.218111 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:42.235124 master-0 kubenswrapper[30278]: I0318 18:19:42.227763 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b9df6-api-0"]
Mar 18 18:19:42.235124 master-0 kubenswrapper[30278]: E0318 18:19:42.228499 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c0bb6e-e750-41b0-8b7b-afb80c5293af" containerName="cinder-api"
Mar 18 18:19:42.235124 master-0 kubenswrapper[30278]: I0318 18:19:42.228516 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c0bb6e-e750-41b0-8b7b-afb80c5293af" containerName="cinder-api"
Mar 18 18:19:42.235124 master-0 kubenswrapper[30278]: E0318 18:19:42.228547 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c0bb6e-e750-41b0-8b7b-afb80c5293af" containerName="cinder-b9df6-api-log"
Mar 18 18:19:42.235124 master-0 kubenswrapper[30278]: I0318 18:19:42.228559 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c0bb6e-e750-41b0-8b7b-afb80c5293af" containerName="cinder-b9df6-api-log"
Mar 18 18:19:42.235124 master-0 kubenswrapper[30278]: I0318 18:19:42.228874 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c0bb6e-e750-41b0-8b7b-afb80c5293af" containerName="cinder-api"
Mar 18 18:19:42.235124 master-0 kubenswrapper[30278]: I0318 18:19:42.228924 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c0bb6e-e750-41b0-8b7b-afb80c5293af" containerName="cinder-b9df6-api-log"
Mar 18 18:19:42.235124 master-0 kubenswrapper[30278]: I0318 18:19:42.230251 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:42.235124 master-0 kubenswrapper[30278]: I0318 18:19:42.230372 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-api-0"
Mar 18 18:19:42.246993 master-0 kubenswrapper[30278]: I0318 18:19:42.246918 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b9df6-api-config-data"
Mar 18 18:19:42.247230 master-0 kubenswrapper[30278]: I0318 18:19:42.247153 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Mar 18 18:19:42.247406 master-0 kubenswrapper[30278]: I0318 18:19:42.247308 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Mar 18 18:19:42.301866 master-0 kubenswrapper[30278]: I0318 18:19:42.299967 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-api-0"]
Mar 18 18:19:42.338090 master-0 kubenswrapper[30278]: I0318 18:19:42.337239 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:42.343297 master-0 kubenswrapper[30278]: I0318 18:19:42.339826 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:19:42.357303 master-0 kubenswrapper[30278]: I0318 18:19:42.353427 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-public-tls-certs\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0"
Mar 18 18:19:42.357303 master-0 kubenswrapper[30278]: I0318 18:19:42.353541 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wsvd\" (UniqueName: \"kubernetes.io/projected/631bd59b-37e5-49a9-98de-41b91dd3425a-kube-api-access-4wsvd\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " 
pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.357303 master-0 kubenswrapper[30278]: I0318 18:19:42.353574 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-combined-ca-bundle\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.357303 master-0 kubenswrapper[30278]: I0318 18:19:42.353851 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-scripts\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.357303 master-0 kubenswrapper[30278]: I0318 18:19:42.353892 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/631bd59b-37e5-49a9-98de-41b91dd3425a-logs\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.357303 master-0 kubenswrapper[30278]: I0318 18:19:42.354130 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-config-data-custom\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.357303 master-0 kubenswrapper[30278]: I0318 18:19:42.354742 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/631bd59b-37e5-49a9-98de-41b91dd3425a-etc-machine-id\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " 
pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.357303 master-0 kubenswrapper[30278]: I0318 18:19:42.354782 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-internal-tls-certs\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.357303 master-0 kubenswrapper[30278]: I0318 18:19:42.354841 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-config-data\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.465297 master-0 kubenswrapper[30278]: I0318 18:19:42.460059 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/631bd59b-37e5-49a9-98de-41b91dd3425a-logs\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.465297 master-0 kubenswrapper[30278]: I0318 18:19:42.460420 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-config-data-custom\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.465297 master-0 kubenswrapper[30278]: I0318 18:19:42.460512 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/631bd59b-37e5-49a9-98de-41b91dd3425a-etc-machine-id\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.465297 
master-0 kubenswrapper[30278]: I0318 18:19:42.460530 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-internal-tls-certs\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.465297 master-0 kubenswrapper[30278]: I0318 18:19:42.460549 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-config-data\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.465297 master-0 kubenswrapper[30278]: I0318 18:19:42.460592 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-public-tls-certs\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.465297 master-0 kubenswrapper[30278]: I0318 18:19:42.460643 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wsvd\" (UniqueName: \"kubernetes.io/projected/631bd59b-37e5-49a9-98de-41b91dd3425a-kube-api-access-4wsvd\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.465297 master-0 kubenswrapper[30278]: I0318 18:19:42.460661 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-combined-ca-bundle\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.465297 master-0 kubenswrapper[30278]: I0318 18:19:42.460712 30278 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/631bd59b-37e5-49a9-98de-41b91dd3425a-logs\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.465297 master-0 kubenswrapper[30278]: I0318 18:19:42.460741 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-scripts\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.465297 master-0 kubenswrapper[30278]: I0318 18:19:42.464561 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/631bd59b-37e5-49a9-98de-41b91dd3425a-etc-machine-id\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.473413 master-0 kubenswrapper[30278]: I0318 18:19:42.470404 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-internal-tls-certs\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.473413 master-0 kubenswrapper[30278]: I0318 18:19:42.470668 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-config-data\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.473413 master-0 kubenswrapper[30278]: I0318 18:19:42.471372 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-scripts\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.480291 master-0 kubenswrapper[30278]: I0318 18:19:42.474253 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-combined-ca-bundle\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.487520 master-0 kubenswrapper[30278]: I0318 18:19:42.485806 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-public-tls-certs\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.518582 master-0 kubenswrapper[30278]: I0318 18:19:42.516161 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wsvd\" (UniqueName: \"kubernetes.io/projected/631bd59b-37e5-49a9-98de-41b91dd3425a-kube-api-access-4wsvd\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.521048 master-0 kubenswrapper[30278]: I0318 18:19:42.518686 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/631bd59b-37e5-49a9-98de-41b91dd3425a-config-data-custom\") pod \"cinder-b9df6-api-0\" (UID: \"631bd59b-37e5-49a9-98de-41b91dd3425a\") " pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.573394 master-0 kubenswrapper[30278]: I0318 18:19:42.573110 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:42.734051 master-0 kubenswrapper[30278]: I0318 18:19:42.733977 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:42.735494 master-0 kubenswrapper[30278]: I0318 18:19:42.734141 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:19:42.911785 master-0 kubenswrapper[30278]: I0318 18:19:42.911619 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:19:43.151315 master-0 kubenswrapper[30278]: I0318 18:19:43.136802 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5776b66b45-w6n4j" podStartSLOduration=3.136779417 podStartE2EDuration="3.136779417s" podCreationTimestamp="2026-03-18 18:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:43.112722979 +0000 UTC m=+1152.279907574" watchObservedRunningTime="2026-03-18 18:19:43.136779417 +0000 UTC m=+1152.303964012" Mar 18 18:19:43.151315 master-0 kubenswrapper[30278]: I0318 18:19:43.137567 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c0bb6e-e750-41b0-8b7b-afb80c5293af" path="/var/lib/kubelet/pods/02c0bb6e-e750-41b0-8b7b-afb80c5293af/volumes" Mar 18 18:19:43.237320 master-0 kubenswrapper[30278]: I0318 18:19:43.211818 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb9a9407-6790-44a8-8e7d-fa95e4e42bdc" path="/var/lib/kubelet/pods/eb9a9407-6790-44a8-8e7d-fa95e4e42bdc/volumes" Mar 18 18:19:43.267695 master-0 kubenswrapper[30278]: I0318 18:19:43.266053 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5776b66b45-w6n4j" Mar 18 18:19:43.267695 master-0 kubenswrapper[30278]: I0318 18:19:43.266104 30278 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5776b66b45-w6n4j" event={"ID":"f0ecd562-b219-44d6-b27a-99af0ae48f35","Type":"ContainerStarted","Data":"33e08c6cb82aad3da7e88aee341be6071b8bf84a9e38f3336a66c648d40e2f90"} Mar 18 18:19:43.267695 master-0 kubenswrapper[30278]: I0318 18:19:43.266124 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5776b66b45-w6n4j" event={"ID":"f0ecd562-b219-44d6-b27a-99af0ae48f35","Type":"ContainerStarted","Data":"2e72ee495eb4750f4b1be92076421aef0e051975e688ba14beec3772c64f8cde"} Mar 18 18:19:43.267695 master-0 kubenswrapper[30278]: I0318 18:19:43.266143 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:43.267695 master-0 kubenswrapper[30278]: I0318 18:19:43.266154 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:43.267695 master-0 kubenswrapper[30278]: I0318 18:19:43.266164 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-api-0"] Mar 18 18:19:44.103827 master-0 kubenswrapper[30278]: I0318 18:19:44.103762 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:44.111311 master-0 kubenswrapper[30278]: I0318 18:19:44.110067 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-api-0" event={"ID":"631bd59b-37e5-49a9-98de-41b91dd3425a","Type":"ContainerStarted","Data":"0997c2779eeb19578f2a3e85095e40ab7dbcb5786adff92b986f2ce83b018be7"} Mar 18 18:19:44.124088 master-0 kubenswrapper[30278]: I0318 18:19:44.124024 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:44.319848 master-0 kubenswrapper[30278]: I0318 18:19:44.315814 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:44.460153 master-0 kubenswrapper[30278]: I0318 18:19:44.460029 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:44.997387 master-0 kubenswrapper[30278]: I0318 18:19:44.994795 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:45.186411 master-0 kubenswrapper[30278]: I0318 18:19:45.176412 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-api-0" event={"ID":"631bd59b-37e5-49a9-98de-41b91dd3425a","Type":"ContainerStarted","Data":"f7e239bf4d29dce777ed516c2dce9fd2cab3b934f8f7dd76af32f8e28b6b32e5"} Mar 18 18:19:45.186411 master-0 kubenswrapper[30278]: I0318 18:19:45.176738 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:19:45.186411 master-0 kubenswrapper[30278]: I0318 18:19:45.176752 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:19:45.284176 master-0 kubenswrapper[30278]: I0318 18:19:45.283994 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b9df6-scheduler-0"] Mar 18 18:19:45.353804 master-0 kubenswrapper[30278]: I0318 18:19:45.352701 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b9df6-volume-lvm-iscsi-0"] Mar 18 18:19:45.536847 master-0 kubenswrapper[30278]: I0318 18:19:45.536497 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:45.686908 master-0 kubenswrapper[30278]: I0318 18:19:45.686809 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b9df6-backup-0"] Mar 18 18:19:46.193882 master-0 kubenswrapper[30278]: I0318 18:19:46.193810 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-api-0" 
event={"ID":"631bd59b-37e5-49a9-98de-41b91dd3425a","Type":"ContainerStarted","Data":"7358ae2fa08d64303f9e4fc5940e0d0d76ecc841b9e0b1bfc0ba0163b362de54"} Mar 18 18:19:46.196686 master-0 kubenswrapper[30278]: I0318 18:19:46.194131 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b9df6-backup-0" podUID="becba0cb-b638-43c2-af99-4269efec025f" containerName="cinder-backup" containerID="cri-o://5f752524153596ac50ab6daedf229a8bfa1068017f02d13516e4cd57d730d34a" gracePeriod=30 Mar 18 18:19:46.196686 master-0 kubenswrapper[30278]: I0318 18:19:46.194481 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b9df6-backup-0" podUID="becba0cb-b638-43c2-af99-4269efec025f" containerName="probe" containerID="cri-o://e8b243fa279883bc52898f8f5369679ed1db473712e3d62cce59c92571df19ab" gracePeriod=30 Mar 18 18:19:46.196686 master-0 kubenswrapper[30278]: I0318 18:19:46.194701 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b9df6-scheduler-0" podUID="dc87ffe2-a115-459b-a5b1-87c747b1df2a" containerName="cinder-scheduler" containerID="cri-o://9a02e62a3e62d8fb9454fb31b0378519a508bf2684f327619897e3838d76217b" gracePeriod=30 Mar 18 18:19:46.196686 master-0 kubenswrapper[30278]: I0318 18:19:46.194828 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b9df6-scheduler-0" podUID="dc87ffe2-a115-459b-a5b1-87c747b1df2a" containerName="probe" containerID="cri-o://a1fbdaacf468e074ed53a5c6fd3ee74d06779d8590f5a96d747c9e4f020d50ce" gracePeriod=30 Mar 18 18:19:46.196686 master-0 kubenswrapper[30278]: I0318 18:19:46.195211 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" podUID="cbe0e9fb-60fc-4ada-ad1a-014ee622d073" containerName="cinder-volume" containerID="cri-o://5f908b9eac52cdcee4d6db989497be64a8362ef172c24b716bb915c3b7b3759f" gracePeriod=30 Mar 
18 18:19:46.196686 master-0 kubenswrapper[30278]: I0318 18:19:46.195349 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" podUID="cbe0e9fb-60fc-4ada-ad1a-014ee622d073" containerName="probe" containerID="cri-o://c2179651cf320be9b2cbf9de707a3d441bcdc8c7688856dfc73060240635fe0a" gracePeriod=30 Mar 18 18:19:46.252511 master-0 kubenswrapper[30278]: I0318 18:19:46.248704 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b9df6-api-0" podStartSLOduration=4.248676145 podStartE2EDuration="4.248676145s" podCreationTimestamp="2026-03-18 18:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:46.229913849 +0000 UTC m=+1155.397098444" watchObservedRunningTime="2026-03-18 18:19:46.248676145 +0000 UTC m=+1155.415860740" Mar 18 18:19:46.299828 master-0 kubenswrapper[30278]: I0318 18:19:46.299710 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:46.300125 master-0 kubenswrapper[30278]: I0318 18:19:46.299886 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:19:46.684522 master-0 kubenswrapper[30278]: I0318 18:19:46.684445 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:19:47.217346 master-0 kubenswrapper[30278]: I0318 18:19:47.216222 30278 generic.go:334] "Generic (PLEG): container finished" podID="cbe0e9fb-60fc-4ada-ad1a-014ee622d073" containerID="5f908b9eac52cdcee4d6db989497be64a8362ef172c24b716bb915c3b7b3759f" exitCode=0 Mar 18 18:19:47.218001 master-0 kubenswrapper[30278]: I0318 18:19:47.217345 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" 
event={"ID":"cbe0e9fb-60fc-4ada-ad1a-014ee622d073","Type":"ContainerDied","Data":"5f908b9eac52cdcee4d6db989497be64a8362ef172c24b716bb915c3b7b3759f"} Mar 18 18:19:47.218052 master-0 kubenswrapper[30278]: I0318 18:19:47.218023 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-b9df6-api-0" Mar 18 18:19:47.920621 master-0 kubenswrapper[30278]: I0318 18:19:47.920571 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:19:48.001197 master-0 kubenswrapper[30278]: I0318 18:19:48.001151 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.087379 master-0 kubenswrapper[30278]: I0318 18:19:48.080527 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c74f744c5-h9zsh"] Mar 18 18:19:48.087379 master-0 kubenswrapper[30278]: I0318 18:19:48.080861 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" podUID="febd1792-9c89-4923-b8b8-0e41a1be1f1c" containerName="dnsmasq-dns" containerID="cri-o://c27def519c22909a531884df533f9961a3e2ad9da3f520eb969db8c43fcbd006" gracePeriod=10 Mar 18 18:19:48.198313 master-0 kubenswrapper[30278]: I0318 18:19:48.197689 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-lib-modules\") pod \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " Mar 18 18:19:48.198313 master-0 kubenswrapper[30278]: I0318 18:19:48.197858 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-scripts\") pod \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " Mar 
18 18:19:48.198313 master-0 kubenswrapper[30278]: I0318 18:19:48.197895 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-combined-ca-bundle\") pod \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " Mar 18 18:19:48.198736 master-0 kubenswrapper[30278]: I0318 18:19:48.198425 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cbe0e9fb-60fc-4ada-ad1a-014ee622d073" (UID: "cbe0e9fb-60fc-4ada-ad1a-014ee622d073"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:48.198736 master-0 kubenswrapper[30278]: I0318 18:19:48.198536 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-dev\") pod \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " Mar 18 18:19:48.198736 master-0 kubenswrapper[30278]: I0318 18:19:48.198565 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-sys\") pod \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " Mar 18 18:19:48.199397 master-0 kubenswrapper[30278]: I0318 18:19:48.199056 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-locks-brick\") pod \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " Mar 18 18:19:48.199397 master-0 kubenswrapper[30278]: I0318 18:19:48.199151 30278 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-config-data-custom\") pod \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " Mar 18 18:19:48.199397 master-0 kubenswrapper[30278]: I0318 18:19:48.199300 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-run\") pod \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " Mar 18 18:19:48.199567 master-0 kubenswrapper[30278]: I0318 18:19:48.199456 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-iscsi\") pod \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " Mar 18 18:19:48.199567 master-0 kubenswrapper[30278]: I0318 18:19:48.199485 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-config-data\") pod \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " Mar 18 18:19:48.199567 master-0 kubenswrapper[30278]: I0318 18:19:48.199539 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-machine-id\") pod \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " Mar 18 18:19:48.199567 master-0 kubenswrapper[30278]: I0318 18:19:48.199563 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4jsb\" (UniqueName: \"kubernetes.io/projected/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-kube-api-access-d4jsb\") pod 
\"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " Mar 18 18:19:48.199750 master-0 kubenswrapper[30278]: I0318 18:19:48.199612 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-lib-cinder\") pod \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " Mar 18 18:19:48.199750 master-0 kubenswrapper[30278]: I0318 18:19:48.199693 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-locks-cinder\") pod \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " Mar 18 18:19:48.199852 master-0 kubenswrapper[30278]: I0318 18:19:48.199780 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-nvme\") pod \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\" (UID: \"cbe0e9fb-60fc-4ada-ad1a-014ee622d073\") " Mar 18 18:19:48.203412 master-0 kubenswrapper[30278]: I0318 18:19:48.200342 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "cbe0e9fb-60fc-4ada-ad1a-014ee622d073" (UID: "cbe0e9fb-60fc-4ada-ad1a-014ee622d073"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:48.203412 master-0 kubenswrapper[30278]: I0318 18:19:48.200388 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-dev" (OuterVolumeSpecName: "dev") pod "cbe0e9fb-60fc-4ada-ad1a-014ee622d073" (UID: "cbe0e9fb-60fc-4ada-ad1a-014ee622d073"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:48.203412 master-0 kubenswrapper[30278]: I0318 18:19:48.200461 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "cbe0e9fb-60fc-4ada-ad1a-014ee622d073" (UID: "cbe0e9fb-60fc-4ada-ad1a-014ee622d073"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:48.203412 master-0 kubenswrapper[30278]: I0318 18:19:48.201078 30278 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-nvme\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:48.203412 master-0 kubenswrapper[30278]: I0318 18:19:48.201100 30278 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-lib-modules\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:48.203412 master-0 kubenswrapper[30278]: I0318 18:19:48.201117 30278 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-dev\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:48.203412 master-0 kubenswrapper[30278]: I0318 18:19:48.201126 30278 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:48.203412 master-0 kubenswrapper[30278]: I0318 18:19:48.201403 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-sys" (OuterVolumeSpecName: "sys") pod "cbe0e9fb-60fc-4ada-ad1a-014ee622d073" (UID: "cbe0e9fb-60fc-4ada-ad1a-014ee622d073"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:48.203412 master-0 kubenswrapper[30278]: I0318 18:19:48.201440 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "cbe0e9fb-60fc-4ada-ad1a-014ee622d073" (UID: "cbe0e9fb-60fc-4ada-ad1a-014ee622d073"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:48.203412 master-0 kubenswrapper[30278]: I0318 18:19:48.203199 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cbe0e9fb-60fc-4ada-ad1a-014ee622d073" (UID: "cbe0e9fb-60fc-4ada-ad1a-014ee622d073"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:48.204112 master-0 kubenswrapper[30278]: I0318 18:19:48.203643 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-run" (OuterVolumeSpecName: "run") pod "cbe0e9fb-60fc-4ada-ad1a-014ee622d073" (UID: "cbe0e9fb-60fc-4ada-ad1a-014ee622d073"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:48.205334 master-0 kubenswrapper[30278]: I0318 18:19:48.204546 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "cbe0e9fb-60fc-4ada-ad1a-014ee622d073" (UID: "cbe0e9fb-60fc-4ada-ad1a-014ee622d073"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:48.251658 master-0 kubenswrapper[30278]: I0318 18:19:48.203261 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "cbe0e9fb-60fc-4ada-ad1a-014ee622d073" (UID: "cbe0e9fb-60fc-4ada-ad1a-014ee622d073"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:48.251658 master-0 kubenswrapper[30278]: I0318 18:19:48.218501 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-kube-api-access-d4jsb" (OuterVolumeSpecName: "kube-api-access-d4jsb") pod "cbe0e9fb-60fc-4ada-ad1a-014ee622d073" (UID: "cbe0e9fb-60fc-4ada-ad1a-014ee622d073"). InnerVolumeSpecName "kube-api-access-d4jsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:19:48.251658 master-0 kubenswrapper[30278]: I0318 18:19:48.218529 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-scripts" (OuterVolumeSpecName: "scripts") pod "cbe0e9fb-60fc-4ada-ad1a-014ee622d073" (UID: "cbe0e9fb-60fc-4ada-ad1a-014ee622d073"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:48.251658 master-0 kubenswrapper[30278]: I0318 18:19:48.230610 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cbe0e9fb-60fc-4ada-ad1a-014ee622d073" (UID: "cbe0e9fb-60fc-4ada-ad1a-014ee622d073"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:48.266346 master-0 kubenswrapper[30278]: I0318 18:19:48.266239 30278 generic.go:334] "Generic (PLEG): container finished" podID="dc87ffe2-a115-459b-a5b1-87c747b1df2a" containerID="a1fbdaacf468e074ed53a5c6fd3ee74d06779d8590f5a96d747c9e4f020d50ce" exitCode=0 Mar 18 18:19:48.266423 master-0 kubenswrapper[30278]: I0318 18:19:48.266392 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-scheduler-0" event={"ID":"dc87ffe2-a115-459b-a5b1-87c747b1df2a","Type":"ContainerDied","Data":"a1fbdaacf468e074ed53a5c6fd3ee74d06779d8590f5a96d747c9e4f020d50ce"} Mar 18 18:19:48.301334 master-0 kubenswrapper[30278]: I0318 18:19:48.298429 30278 generic.go:334] "Generic (PLEG): container finished" podID="cbe0e9fb-60fc-4ada-ad1a-014ee622d073" containerID="c2179651cf320be9b2cbf9de707a3d441bcdc8c7688856dfc73060240635fe0a" exitCode=0 Mar 18 18:19:48.301334 master-0 kubenswrapper[30278]: I0318 18:19:48.298541 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.301334 master-0 kubenswrapper[30278]: I0318 18:19:48.298587 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" event={"ID":"cbe0e9fb-60fc-4ada-ad1a-014ee622d073","Type":"ContainerDied","Data":"c2179651cf320be9b2cbf9de707a3d441bcdc8c7688856dfc73060240635fe0a"} Mar 18 18:19:48.301334 master-0 kubenswrapper[30278]: I0318 18:19:48.298627 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" event={"ID":"cbe0e9fb-60fc-4ada-ad1a-014ee622d073","Type":"ContainerDied","Data":"dde23f94f42e6e7fb93d517cbb3066a90c45983dd99f30857d20836e6a55e1b5"} Mar 18 18:19:48.301334 master-0 kubenswrapper[30278]: I0318 18:19:48.298648 30278 scope.go:117] "RemoveContainer" containerID="c2179651cf320be9b2cbf9de707a3d441bcdc8c7688856dfc73060240635fe0a" Mar 18 18:19:48.303821 master-0 kubenswrapper[30278]: I0318 18:19:48.303706 30278 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-sys\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:48.303821 master-0 kubenswrapper[30278]: I0318 18:19:48.303752 30278 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:48.303821 master-0 kubenswrapper[30278]: I0318 18:19:48.303772 30278 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:48.303821 master-0 kubenswrapper[30278]: I0318 18:19:48.303785 30278 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-run\") on 
node \"master-0\" DevicePath \"\"" Mar 18 18:19:48.303821 master-0 kubenswrapper[30278]: I0318 18:19:48.303804 30278 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:48.303821 master-0 kubenswrapper[30278]: I0318 18:19:48.303818 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4jsb\" (UniqueName: \"kubernetes.io/projected/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-kube-api-access-d4jsb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:48.303821 master-0 kubenswrapper[30278]: I0318 18:19:48.303830 30278 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:48.304098 master-0 kubenswrapper[30278]: I0318 18:19:48.303842 30278 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:48.304098 master-0 kubenswrapper[30278]: I0318 18:19:48.303854 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:48.309662 master-0 kubenswrapper[30278]: I0318 18:19:48.308448 30278 generic.go:334] "Generic (PLEG): container finished" podID="becba0cb-b638-43c2-af99-4269efec025f" containerID="e8b243fa279883bc52898f8f5369679ed1db473712e3d62cce59c92571df19ab" exitCode=0 Mar 18 18:19:48.309662 master-0 kubenswrapper[30278]: I0318 18:19:48.308501 30278 generic.go:334] "Generic (PLEG): container finished" podID="becba0cb-b638-43c2-af99-4269efec025f" containerID="5f752524153596ac50ab6daedf229a8bfa1068017f02d13516e4cd57d730d34a" 
exitCode=0 Mar 18 18:19:48.310008 master-0 kubenswrapper[30278]: I0318 18:19:48.309955 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-backup-0" event={"ID":"becba0cb-b638-43c2-af99-4269efec025f","Type":"ContainerDied","Data":"e8b243fa279883bc52898f8f5369679ed1db473712e3d62cce59c92571df19ab"} Mar 18 18:19:48.310077 master-0 kubenswrapper[30278]: I0318 18:19:48.310008 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-backup-0" event={"ID":"becba0cb-b638-43c2-af99-4269efec025f","Type":"ContainerDied","Data":"5f752524153596ac50ab6daedf229a8bfa1068017f02d13516e4cd57d730d34a"} Mar 18 18:19:48.340744 master-0 kubenswrapper[30278]: I0318 18:19:48.340603 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbe0e9fb-60fc-4ada-ad1a-014ee622d073" (UID: "cbe0e9fb-60fc-4ada-ad1a-014ee622d073"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:48.353654 master-0 kubenswrapper[30278]: I0318 18:19:48.353527 30278 scope.go:117] "RemoveContainer" containerID="5f908b9eac52cdcee4d6db989497be64a8362ef172c24b716bb915c3b7b3759f" Mar 18 18:19:48.415860 master-0 kubenswrapper[30278]: I0318 18:19:48.408994 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:48.509314 master-0 kubenswrapper[30278]: I0318 18:19:48.506217 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-config-data" (OuterVolumeSpecName: "config-data") pod "cbe0e9fb-60fc-4ada-ad1a-014ee622d073" (UID: "cbe0e9fb-60fc-4ada-ad1a-014ee622d073"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:48.515750 master-0 kubenswrapper[30278]: I0318 18:19:48.512243 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbe0e9fb-60fc-4ada-ad1a-014ee622d073-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:48.629593 master-0 kubenswrapper[30278]: I0318 18:19:48.628579 30278 scope.go:117] "RemoveContainer" containerID="c2179651cf320be9b2cbf9de707a3d441bcdc8c7688856dfc73060240635fe0a" Mar 18 18:19:48.633302 master-0 kubenswrapper[30278]: E0318 18:19:48.629946 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2179651cf320be9b2cbf9de707a3d441bcdc8c7688856dfc73060240635fe0a\": container with ID starting with c2179651cf320be9b2cbf9de707a3d441bcdc8c7688856dfc73060240635fe0a not found: ID does not exist" containerID="c2179651cf320be9b2cbf9de707a3d441bcdc8c7688856dfc73060240635fe0a" Mar 18 18:19:48.633302 master-0 kubenswrapper[30278]: I0318 18:19:48.630036 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2179651cf320be9b2cbf9de707a3d441bcdc8c7688856dfc73060240635fe0a"} err="failed to get container status \"c2179651cf320be9b2cbf9de707a3d441bcdc8c7688856dfc73060240635fe0a\": rpc error: code = NotFound desc = could not find container \"c2179651cf320be9b2cbf9de707a3d441bcdc8c7688856dfc73060240635fe0a\": container with ID starting with c2179651cf320be9b2cbf9de707a3d441bcdc8c7688856dfc73060240635fe0a not found: ID does not exist" Mar 18 18:19:48.633302 master-0 kubenswrapper[30278]: I0318 18:19:48.630117 30278 scope.go:117] "RemoveContainer" containerID="5f908b9eac52cdcee4d6db989497be64a8362ef172c24b716bb915c3b7b3759f" Mar 18 18:19:48.633302 master-0 kubenswrapper[30278]: E0318 18:19:48.630571 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"5f908b9eac52cdcee4d6db989497be64a8362ef172c24b716bb915c3b7b3759f\": container with ID starting with 5f908b9eac52cdcee4d6db989497be64a8362ef172c24b716bb915c3b7b3759f not found: ID does not exist" containerID="5f908b9eac52cdcee4d6db989497be64a8362ef172c24b716bb915c3b7b3759f" Mar 18 18:19:48.633302 master-0 kubenswrapper[30278]: I0318 18:19:48.630637 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f908b9eac52cdcee4d6db989497be64a8362ef172c24b716bb915c3b7b3759f"} err="failed to get container status \"5f908b9eac52cdcee4d6db989497be64a8362ef172c24b716bb915c3b7b3759f\": rpc error: code = NotFound desc = could not find container \"5f908b9eac52cdcee4d6db989497be64a8362ef172c24b716bb915c3b7b3759f\": container with ID starting with 5f908b9eac52cdcee4d6db989497be64a8362ef172c24b716bb915c3b7b3759f not found: ID does not exist" Mar 18 18:19:48.708305 master-0 kubenswrapper[30278]: I0318 18:19:48.707361 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b9df6-volume-lvm-iscsi-0"] Mar 18 18:19:48.820809 master-0 kubenswrapper[30278]: I0318 18:19:48.816690 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b9df6-volume-lvm-iscsi-0"] Mar 18 18:19:48.839303 master-0 kubenswrapper[30278]: I0318 18:19:48.836176 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b9df6-volume-lvm-iscsi-0"] Mar 18 18:19:48.839303 master-0 kubenswrapper[30278]: E0318 18:19:48.836839 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe0e9fb-60fc-4ada-ad1a-014ee622d073" containerName="probe" Mar 18 18:19:48.839303 master-0 kubenswrapper[30278]: I0318 18:19:48.836855 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe0e9fb-60fc-4ada-ad1a-014ee622d073" containerName="probe" Mar 18 18:19:48.839303 master-0 kubenswrapper[30278]: E0318 18:19:48.836908 30278 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cbe0e9fb-60fc-4ada-ad1a-014ee622d073" containerName="cinder-volume" Mar 18 18:19:48.839303 master-0 kubenswrapper[30278]: I0318 18:19:48.836914 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe0e9fb-60fc-4ada-ad1a-014ee622d073" containerName="cinder-volume" Mar 18 18:19:48.839303 master-0 kubenswrapper[30278]: I0318 18:19:48.837213 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe0e9fb-60fc-4ada-ad1a-014ee622d073" containerName="probe" Mar 18 18:19:48.839303 master-0 kubenswrapper[30278]: I0318 18:19:48.837226 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe0e9fb-60fc-4ada-ad1a-014ee622d073" containerName="cinder-volume" Mar 18 18:19:48.839796 master-0 kubenswrapper[30278]: I0318 18:19:48.839437 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.848301 master-0 kubenswrapper[30278]: I0318 18:19:48.848007 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b9df6-volume-lvm-iscsi-config-data" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.861826 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b1fa77-70e4-4d90-a808-8ec6a7526a12-scripts\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.861903 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-sys\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.861970 
30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-lib-modules\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.862016 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-var-locks-cinder\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.862056 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-dev\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.862169 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwkg5\" (UniqueName: \"kubernetes.io/projected/87b1fa77-70e4-4d90-a808-8ec6a7526a12-kube-api-access-nwkg5\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.862429 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b1fa77-70e4-4d90-a808-8ec6a7526a12-combined-ca-bundle\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: 
\"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.862496 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-var-locks-brick\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.862625 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-etc-iscsi\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.862690 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-var-lib-cinder\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.862711 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-etc-machine-id\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.862742 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-etc-nvme\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.862780 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-run\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.862825 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87b1fa77-70e4-4d90-a808-8ec6a7526a12-config-data-custom\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.862851 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b1fa77-70e4-4d90-a808-8ec6a7526a12-config-data\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.864593 master-0 kubenswrapper[30278]: I0318 18:19:48.863348 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-volume-lvm-iscsi-0"] Mar 18 18:19:48.944609 master-0 kubenswrapper[30278]: I0318 18:19:48.942242 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7db756448-vwstn" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.966112 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-var-lib-cinder\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.966166 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-etc-machine-id\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.966199 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-etc-nvme\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.966227 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-run\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.967030 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87b1fa77-70e4-4d90-a808-8ec6a7526a12-config-data-custom\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.968153 30278 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-etc-nvme\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.968610 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-run\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.968887 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b1fa77-70e4-4d90-a808-8ec6a7526a12-config-data\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.968957 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b1fa77-70e4-4d90-a808-8ec6a7526a12-scripts\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.968982 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-sys\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.969068 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-lib-modules\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.969117 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-var-locks-cinder\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.969190 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-dev\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.969376 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwkg5\" (UniqueName: \"kubernetes.io/projected/87b1fa77-70e4-4d90-a808-8ec6a7526a12-kube-api-access-nwkg5\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.969481 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b1fa77-70e4-4d90-a808-8ec6a7526a12-combined-ca-bundle\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.969535 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-var-locks-brick\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.969603 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-etc-iscsi\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.969630 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-var-locks-cinder\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.969700 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-etc-machine-id\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.969784 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-dev\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.970002 30278 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-sys\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.970802 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-var-locks-brick\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.970838 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-etc-iscsi\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.971301 master-0 kubenswrapper[30278]: I0318 18:19:48.971127 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-lib-modules\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:48.992095 master-0 kubenswrapper[30278]: I0318 18:19:48.973852 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/87b1fa77-70e4-4d90-a808-8ec6a7526a12-var-lib-cinder\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:49.013531 master-0 kubenswrapper[30278]: I0318 18:19:49.008956 30278 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b1fa77-70e4-4d90-a808-8ec6a7526a12-combined-ca-bundle\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:49.034306 master-0 kubenswrapper[30278]: I0318 18:19:49.031146 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwkg5\" (UniqueName: \"kubernetes.io/projected/87b1fa77-70e4-4d90-a808-8ec6a7526a12-kube-api-access-nwkg5\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:49.034306 master-0 kubenswrapper[30278]: I0318 18:19:49.031994 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b1fa77-70e4-4d90-a808-8ec6a7526a12-config-data\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:49.034306 master-0 kubenswrapper[30278]: I0318 18:19:49.032327 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87b1fa77-70e4-4d90-a808-8ec6a7526a12-scripts\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:49.034306 master-0 kubenswrapper[30278]: I0318 18:19:49.032692 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:49.034715 master-0 kubenswrapper[30278]: I0318 18:19:49.034464 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87b1fa77-70e4-4d90-a808-8ec6a7526a12-config-data-custom\") pod \"cinder-b9df6-volume-lvm-iscsi-0\" (UID: \"87b1fa77-70e4-4d90-a808-8ec6a7526a12\") " pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:49.195504 master-0 kubenswrapper[30278]: I0318 18:19:49.195092 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbe0e9fb-60fc-4ada-ad1a-014ee622d073" path="/var/lib/kubelet/pods/cbe0e9fb-60fc-4ada-ad1a-014ee622d073/volumes" Mar 18 18:19:49.242606 master-0 kubenswrapper[30278]: I0318 18:19:49.242538 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:49.254248 master-0 kubenswrapper[30278]: I0318 18:19:49.254170 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:19:49.268104 master-0 kubenswrapper[30278]: I0318 18:19:49.267998 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:49.310061 master-0 kubenswrapper[30278]: I0318 18:19:49.309965 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-sys\") pod \"becba0cb-b638-43c2-af99-4269efec025f\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " Mar 18 18:19:49.310061 master-0 kubenswrapper[30278]: I0318 18:19:49.310063 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-machine-id\") pod \"becba0cb-b638-43c2-af99-4269efec025f\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " Mar 18 18:19:49.310465 master-0 kubenswrapper[30278]: I0318 18:19:49.310169 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-combined-ca-bundle\") pod \"becba0cb-b638-43c2-af99-4269efec025f\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " Mar 18 18:19:49.310465 master-0 kubenswrapper[30278]: I0318 18:19:49.310379 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-run\") pod \"becba0cb-b638-43c2-af99-4269efec025f\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " Mar 18 18:19:49.310546 master-0 kubenswrapper[30278]: I0318 18:19:49.310473 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-nvme\") pod \"becba0cb-b638-43c2-af99-4269efec025f\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " Mar 18 18:19:49.311512 master-0 kubenswrapper[30278]: I0318 18:19:49.310620 30278 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-qh9jw\" (UniqueName: \"kubernetes.io/projected/becba0cb-b638-43c2-af99-4269efec025f-kube-api-access-qh9jw\") pod \"becba0cb-b638-43c2-af99-4269efec025f\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " Mar 18 18:19:49.311512 master-0 kubenswrapper[30278]: I0318 18:19:49.310709 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-locks-brick\") pod \"becba0cb-b638-43c2-af99-4269efec025f\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " Mar 18 18:19:49.311512 master-0 kubenswrapper[30278]: I0318 18:19:49.310804 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-lib-cinder\") pod \"becba0cb-b638-43c2-af99-4269efec025f\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " Mar 18 18:19:49.311512 master-0 kubenswrapper[30278]: I0318 18:19:49.310830 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-locks-cinder\") pod \"becba0cb-b638-43c2-af99-4269efec025f\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " Mar 18 18:19:49.311512 master-0 kubenswrapper[30278]: I0318 18:19:49.310866 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-config-data\") pod \"becba0cb-b638-43c2-af99-4269efec025f\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " Mar 18 18:19:49.311512 master-0 kubenswrapper[30278]: I0318 18:19:49.310959 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-run" (OuterVolumeSpecName: "run") pod 
"becba0cb-b638-43c2-af99-4269efec025f" (UID: "becba0cb-b638-43c2-af99-4269efec025f"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:49.311512 master-0 kubenswrapper[30278]: I0318 18:19:49.311036 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-sys" (OuterVolumeSpecName: "sys") pod "becba0cb-b638-43c2-af99-4269efec025f" (UID: "becba0cb-b638-43c2-af99-4269efec025f"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:49.311512 master-0 kubenswrapper[30278]: I0318 18:19:49.311056 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "becba0cb-b638-43c2-af99-4269efec025f" (UID: "becba0cb-b638-43c2-af99-4269efec025f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:49.311512 master-0 kubenswrapper[30278]: I0318 18:19:49.311194 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "becba0cb-b638-43c2-af99-4269efec025f" (UID: "becba0cb-b638-43c2-af99-4269efec025f"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:49.312053 master-0 kubenswrapper[30278]: I0318 18:19:49.311990 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "becba0cb-b638-43c2-af99-4269efec025f" (UID: "becba0cb-b638-43c2-af99-4269efec025f"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:49.312053 master-0 kubenswrapper[30278]: I0318 18:19:49.312042 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "becba0cb-b638-43c2-af99-4269efec025f" (UID: "becba0cb-b638-43c2-af99-4269efec025f"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:49.312127 master-0 kubenswrapper[30278]: I0318 18:19:49.312076 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "becba0cb-b638-43c2-af99-4269efec025f" (UID: "becba0cb-b638-43c2-af99-4269efec025f"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:49.315639 master-0 kubenswrapper[30278]: I0318 18:19:49.314551 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-lib-modules\") pod \"becba0cb-b638-43c2-af99-4269efec025f\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " Mar 18 18:19:49.315639 master-0 kubenswrapper[30278]: I0318 18:19:49.314598 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-dev\") pod \"becba0cb-b638-43c2-af99-4269efec025f\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " Mar 18 18:19:49.315639 master-0 kubenswrapper[30278]: I0318 18:19:49.314622 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-iscsi\") pod \"becba0cb-b638-43c2-af99-4269efec025f\" (UID: 
\"becba0cb-b638-43c2-af99-4269efec025f\") " Mar 18 18:19:49.315639 master-0 kubenswrapper[30278]: I0318 18:19:49.314660 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-config-data-custom\") pod \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " Mar 18 18:19:49.315639 master-0 kubenswrapper[30278]: I0318 18:19:49.314687 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc87ffe2-a115-459b-a5b1-87c747b1df2a-etc-machine-id\") pod \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " Mar 18 18:19:49.315639 master-0 kubenswrapper[30278]: I0318 18:19:49.314710 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-config-data-custom\") pod \"becba0cb-b638-43c2-af99-4269efec025f\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " Mar 18 18:19:49.315639 master-0 kubenswrapper[30278]: I0318 18:19:49.314736 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-scripts\") pod \"becba0cb-b638-43c2-af99-4269efec025f\" (UID: \"becba0cb-b638-43c2-af99-4269efec025f\") " Mar 18 18:19:49.321548 master-0 kubenswrapper[30278]: I0318 18:19:49.319830 30278 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-nvme\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.321548 master-0 kubenswrapper[30278]: I0318 18:19:49.319899 30278 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.321548 master-0 kubenswrapper[30278]: I0318 18:19:49.319917 30278 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.321548 master-0 kubenswrapper[30278]: I0318 18:19:49.319928 30278 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.321548 master-0 kubenswrapper[30278]: I0318 18:19:49.319941 30278 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-sys\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.321548 master-0 kubenswrapper[30278]: I0318 18:19:49.319954 30278 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.321548 master-0 kubenswrapper[30278]: I0318 18:19:49.319967 30278 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-run\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.321548 master-0 kubenswrapper[30278]: I0318 18:19:49.321203 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/becba0cb-b638-43c2-af99-4269efec025f-kube-api-access-qh9jw" (OuterVolumeSpecName: "kube-api-access-qh9jw") pod "becba0cb-b638-43c2-af99-4269efec025f" (UID: "becba0cb-b638-43c2-af99-4269efec025f"). InnerVolumeSpecName "kube-api-access-qh9jw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:19:49.323604 master-0 kubenswrapper[30278]: I0318 18:19:49.323524 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc87ffe2-a115-459b-a5b1-87c747b1df2a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "dc87ffe2-a115-459b-a5b1-87c747b1df2a" (UID: "dc87ffe2-a115-459b-a5b1-87c747b1df2a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:49.323604 master-0 kubenswrapper[30278]: I0318 18:19:49.323589 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "becba0cb-b638-43c2-af99-4269efec025f" (UID: "becba0cb-b638-43c2-af99-4269efec025f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:49.323751 master-0 kubenswrapper[30278]: I0318 18:19:49.323623 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-dev" (OuterVolumeSpecName: "dev") pod "becba0cb-b638-43c2-af99-4269efec025f" (UID: "becba0cb-b638-43c2-af99-4269efec025f"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:49.323751 master-0 kubenswrapper[30278]: I0318 18:19:49.323654 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "becba0cb-b638-43c2-af99-4269efec025f" (UID: "becba0cb-b638-43c2-af99-4269efec025f"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 18:19:49.335912 master-0 kubenswrapper[30278]: I0318 18:19:49.335331 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dc87ffe2-a115-459b-a5b1-87c747b1df2a" (UID: "dc87ffe2-a115-459b-a5b1-87c747b1df2a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:49.343750 master-0 kubenswrapper[30278]: I0318 18:19:49.339242 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "becba0cb-b638-43c2-af99-4269efec025f" (UID: "becba0cb-b638-43c2-af99-4269efec025f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:49.410786 master-0 kubenswrapper[30278]: I0318 18:19:49.410571 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-scripts" (OuterVolumeSpecName: "scripts") pod "becba0cb-b638-43c2-af99-4269efec025f" (UID: "becba0cb-b638-43c2-af99-4269efec025f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:49.436748 master-0 kubenswrapper[30278]: I0318 18:19:49.436573 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq7tr\" (UniqueName: \"kubernetes.io/projected/febd1792-9c89-4923-b8b8-0e41a1be1f1c-kube-api-access-xq7tr\") pod \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " Mar 18 18:19:49.436748 master-0 kubenswrapper[30278]: I0318 18:19:49.436653 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-ovsdbserver-nb\") pod \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " Mar 18 18:19:49.437045 master-0 kubenswrapper[30278]: I0318 18:19:49.436833 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-config-data\") pod \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " Mar 18 18:19:49.437045 master-0 kubenswrapper[30278]: I0318 18:19:49.436863 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z45w4\" (UniqueName: \"kubernetes.io/projected/dc87ffe2-a115-459b-a5b1-87c747b1df2a-kube-api-access-z45w4\") pod \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " Mar 18 18:19:49.437045 master-0 kubenswrapper[30278]: I0318 18:19:49.436924 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-combined-ca-bundle\") pod \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " Mar 18 18:19:49.437045 master-0 kubenswrapper[30278]: I0318 18:19:49.436961 30278 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-config\") pod \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " Mar 18 18:19:49.437045 master-0 kubenswrapper[30278]: I0318 18:19:49.437004 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-dns-swift-storage-0\") pod \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " Mar 18 18:19:49.437214 master-0 kubenswrapper[30278]: I0318 18:19:49.437103 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-ovsdbserver-sb\") pod \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " Mar 18 18:19:49.437657 master-0 kubenswrapper[30278]: I0318 18:19:49.437227 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-dns-svc\") pod \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\" (UID: \"febd1792-9c89-4923-b8b8-0e41a1be1f1c\") " Mar 18 18:19:49.437657 master-0 kubenswrapper[30278]: I0318 18:19:49.437309 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-scripts\") pod \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\" (UID: \"dc87ffe2-a115-459b-a5b1-87c747b1df2a\") " Mar 18 18:19:49.437937 master-0 kubenswrapper[30278]: I0318 18:19:49.437869 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh9jw\" (UniqueName: \"kubernetes.io/projected/becba0cb-b638-43c2-af99-4269efec025f-kube-api-access-qh9jw\") on node 
\"master-0\" DevicePath \"\"" Mar 18 18:19:49.437937 master-0 kubenswrapper[30278]: I0318 18:19:49.437886 30278 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-lib-modules\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.437937 master-0 kubenswrapper[30278]: I0318 18:19:49.437897 30278 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-dev\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.437937 master-0 kubenswrapper[30278]: I0318 18:19:49.437909 30278 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/becba0cb-b638-43c2-af99-4269efec025f-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.437937 master-0 kubenswrapper[30278]: I0318 18:19:49.437919 30278 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.437937 master-0 kubenswrapper[30278]: I0318 18:19:49.437930 30278 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/dc87ffe2-a115-459b-a5b1-87c747b1df2a-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.437937 master-0 kubenswrapper[30278]: I0318 18:19:49.437940 30278 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.437937 master-0 kubenswrapper[30278]: I0318 18:19:49.437952 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-scripts\") on node \"master-0\" DevicePath \"\"" Mar 
18 18:19:49.446562 master-0 kubenswrapper[30278]: I0318 18:19:49.445074 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-scripts" (OuterVolumeSpecName: "scripts") pod "dc87ffe2-a115-459b-a5b1-87c747b1df2a" (UID: "dc87ffe2-a115-459b-a5b1-87c747b1df2a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:49.463705 master-0 kubenswrapper[30278]: I0318 18:19:49.459235 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-backup-0" event={"ID":"becba0cb-b638-43c2-af99-4269efec025f","Type":"ContainerDied","Data":"5af7f35c19ede10622e868d083978bb44c7e2639e0ba7db5b6f752acfea890ed"} Mar 18 18:19:49.463705 master-0 kubenswrapper[30278]: I0318 18:19:49.459333 30278 scope.go:117] "RemoveContainer" containerID="e8b243fa279883bc52898f8f5369679ed1db473712e3d62cce59c92571df19ab" Mar 18 18:19:49.463705 master-0 kubenswrapper[30278]: I0318 18:19:49.459543 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:49.473819 master-0 kubenswrapper[30278]: I0318 18:19:49.468860 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc87ffe2-a115-459b-a5b1-87c747b1df2a-kube-api-access-z45w4" (OuterVolumeSpecName: "kube-api-access-z45w4") pod "dc87ffe2-a115-459b-a5b1-87c747b1df2a" (UID: "dc87ffe2-a115-459b-a5b1-87c747b1df2a"). InnerVolumeSpecName "kube-api-access-z45w4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:19:49.480514 master-0 kubenswrapper[30278]: I0318 18:19:49.479127 30278 generic.go:334] "Generic (PLEG): container finished" podID="febd1792-9c89-4923-b8b8-0e41a1be1f1c" containerID="c27def519c22909a531884df533f9961a3e2ad9da3f520eb969db8c43fcbd006" exitCode=0 Mar 18 18:19:49.480514 master-0 kubenswrapper[30278]: I0318 18:19:49.479222 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" event={"ID":"febd1792-9c89-4923-b8b8-0e41a1be1f1c","Type":"ContainerDied","Data":"c27def519c22909a531884df533f9961a3e2ad9da3f520eb969db8c43fcbd006"} Mar 18 18:19:49.480514 master-0 kubenswrapper[30278]: I0318 18:19:49.479256 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" event={"ID":"febd1792-9c89-4923-b8b8-0e41a1be1f1c","Type":"ContainerDied","Data":"1386f4b98a1e85219deaa09dca69421577ce22b6568570d2a4dde2e682c4f364"} Mar 18 18:19:49.480514 master-0 kubenswrapper[30278]: I0318 18:19:49.479350 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c74f744c5-h9zsh" Mar 18 18:19:49.484450 master-0 kubenswrapper[30278]: I0318 18:19:49.484339 30278 generic.go:334] "Generic (PLEG): container finished" podID="dc87ffe2-a115-459b-a5b1-87c747b1df2a" containerID="9a02e62a3e62d8fb9454fb31b0378519a508bf2684f327619897e3838d76217b" exitCode=0 Mar 18 18:19:49.484450 master-0 kubenswrapper[30278]: I0318 18:19:49.484387 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-scheduler-0" event={"ID":"dc87ffe2-a115-459b-a5b1-87c747b1df2a","Type":"ContainerDied","Data":"9a02e62a3e62d8fb9454fb31b0378519a508bf2684f327619897e3838d76217b"} Mar 18 18:19:49.484450 master-0 kubenswrapper[30278]: I0318 18:19:49.484418 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-scheduler-0" event={"ID":"dc87ffe2-a115-459b-a5b1-87c747b1df2a","Type":"ContainerDied","Data":"892bad1d70ad2453cff505cfd66860d3bd14099acd80b9afdf4006d4dd3becd9"} Mar 18 18:19:49.484722 master-0 kubenswrapper[30278]: I0318 18:19:49.484471 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:49.502325 master-0 kubenswrapper[30278]: I0318 18:19:49.499120 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/febd1792-9c89-4923-b8b8-0e41a1be1f1c-kube-api-access-xq7tr" (OuterVolumeSpecName: "kube-api-access-xq7tr") pod "febd1792-9c89-4923-b8b8-0e41a1be1f1c" (UID: "febd1792-9c89-4923-b8b8-0e41a1be1f1c"). InnerVolumeSpecName "kube-api-access-xq7tr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:19:49.547760 master-0 kubenswrapper[30278]: I0318 18:19:49.544467 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.547760 master-0 kubenswrapper[30278]: I0318 18:19:49.544526 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq7tr\" (UniqueName: \"kubernetes.io/projected/febd1792-9c89-4923-b8b8-0e41a1be1f1c-kube-api-access-xq7tr\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.547760 master-0 kubenswrapper[30278]: I0318 18:19:49.544536 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z45w4\" (UniqueName: \"kubernetes.io/projected/dc87ffe2-a115-459b-a5b1-87c747b1df2a-kube-api-access-z45w4\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:49.554781 master-0 kubenswrapper[30278]: I0318 18:19:49.552721 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-config-data" (OuterVolumeSpecName: "config-data") pod "becba0cb-b638-43c2-af99-4269efec025f" (UID: "becba0cb-b638-43c2-af99-4269efec025f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:49.560306 master-0 kubenswrapper[30278]: I0318 18:19:49.557830 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7db756448-vwstn"
Mar 18 18:19:49.651375 master-0 kubenswrapper[30278]: I0318 18:19:49.649018 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:49.722551 master-0 kubenswrapper[30278]: I0318 18:19:49.676055 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "becba0cb-b638-43c2-af99-4269efec025f" (UID: "becba0cb-b638-43c2-af99-4269efec025f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:49.722551 master-0 kubenswrapper[30278]: I0318 18:19:49.695055 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "febd1792-9c89-4923-b8b8-0e41a1be1f1c" (UID: "febd1792-9c89-4923-b8b8-0e41a1be1f1c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:19:49.722551 master-0 kubenswrapper[30278]: I0318 18:19:49.702419 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "febd1792-9c89-4923-b8b8-0e41a1be1f1c" (UID: "febd1792-9c89-4923-b8b8-0e41a1be1f1c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:19:49.728234 master-0 kubenswrapper[30278]: I0318 18:19:49.724190 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc87ffe2-a115-459b-a5b1-87c747b1df2a" (UID: "dc87ffe2-a115-459b-a5b1-87c747b1df2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:49.728234 master-0 kubenswrapper[30278]: I0318 18:19:49.727183 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "febd1792-9c89-4923-b8b8-0e41a1be1f1c" (UID: "febd1792-9c89-4923-b8b8-0e41a1be1f1c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:19:49.733299 master-0 kubenswrapper[30278]: I0318 18:19:49.731584 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-config" (OuterVolumeSpecName: "config") pod "febd1792-9c89-4923-b8b8-0e41a1be1f1c" (UID: "febd1792-9c89-4923-b8b8-0e41a1be1f1c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:19:49.767986 master-0 kubenswrapper[30278]: I0318 18:19:49.740024 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "febd1792-9c89-4923-b8b8-0e41a1be1f1c" (UID: "febd1792-9c89-4923-b8b8-0e41a1be1f1c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 18:19:49.779659 master-0 kubenswrapper[30278]: I0318 18:19:49.777543 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:49.779659 master-0 kubenswrapper[30278]: I0318 18:19:49.777603 30278 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-dns-svc\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:49.779659 master-0 kubenswrapper[30278]: I0318 18:19:49.777615 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:49.779659 master-0 kubenswrapper[30278]: I0318 18:19:49.777628 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/becba0cb-b638-43c2-af99-4269efec025f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:49.779659 master-0 kubenswrapper[30278]: I0318 18:19:49.777647 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:49.779659 master-0 kubenswrapper[30278]: I0318 18:19:49.777656 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:49.779659 master-0 kubenswrapper[30278]: I0318 18:19:49.777666 30278 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/febd1792-9c89-4923-b8b8-0e41a1be1f1c-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:49.802314 master-0 kubenswrapper[30278]: I0318 18:19:49.801250 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-config-data" (OuterVolumeSpecName: "config-data") pod "dc87ffe2-a115-459b-a5b1-87c747b1df2a" (UID: "dc87ffe2-a115-459b-a5b1-87c747b1df2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:19:49.886699 master-0 kubenswrapper[30278]: I0318 18:19:49.880576 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc87ffe2-a115-459b-a5b1-87c747b1df2a-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 18:19:49.907811 master-0 kubenswrapper[30278]: I0318 18:19:49.905725 30278 scope.go:117] "RemoveContainer" containerID="5f752524153596ac50ab6daedf229a8bfa1068017f02d13516e4cd57d730d34a"
Mar 18 18:19:49.924497 master-0 kubenswrapper[30278]: I0318 18:19:49.918131 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-84cf7b8984-2rsvd"]
Mar 18 18:19:49.924497 master-0 kubenswrapper[30278]: E0318 18:19:49.918765 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc87ffe2-a115-459b-a5b1-87c747b1df2a" containerName="probe"
Mar 18 18:19:49.924497 master-0 kubenswrapper[30278]: I0318 18:19:49.918785 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc87ffe2-a115-459b-a5b1-87c747b1df2a" containerName="probe"
Mar 18 18:19:49.924497 master-0 kubenswrapper[30278]: E0318 18:19:49.918802 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febd1792-9c89-4923-b8b8-0e41a1be1f1c" containerName="dnsmasq-dns"
Mar 18 18:19:49.924497 master-0 kubenswrapper[30278]: I0318 18:19:49.918808 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="febd1792-9c89-4923-b8b8-0e41a1be1f1c" containerName="dnsmasq-dns"
Mar 18 18:19:49.924497 master-0 kubenswrapper[30278]: E0318 18:19:49.918858 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febd1792-9c89-4923-b8b8-0e41a1be1f1c" containerName="init"
Mar 18 18:19:49.924497 master-0 kubenswrapper[30278]: I0318 18:19:49.918865 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="febd1792-9c89-4923-b8b8-0e41a1be1f1c" containerName="init"
Mar 18 18:19:49.924497 master-0 kubenswrapper[30278]: E0318 18:19:49.918876 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="becba0cb-b638-43c2-af99-4269efec025f" containerName="cinder-backup"
Mar 18 18:19:49.924497 master-0 kubenswrapper[30278]: I0318 18:19:49.918885 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="becba0cb-b638-43c2-af99-4269efec025f" containerName="cinder-backup"
Mar 18 18:19:49.946114 master-0 kubenswrapper[30278]: E0318 18:19:49.944437 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="becba0cb-b638-43c2-af99-4269efec025f" containerName="probe"
Mar 18 18:19:49.946114 master-0 kubenswrapper[30278]: I0318 18:19:49.944517 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="becba0cb-b638-43c2-af99-4269efec025f" containerName="probe"
Mar 18 18:19:49.946114 master-0 kubenswrapper[30278]: E0318 18:19:49.944550 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc87ffe2-a115-459b-a5b1-87c747b1df2a" containerName="cinder-scheduler"
Mar 18 18:19:49.946114 master-0 kubenswrapper[30278]: I0318 18:19:49.944559 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc87ffe2-a115-459b-a5b1-87c747b1df2a" containerName="cinder-scheduler"
Mar 18 18:19:49.946114 master-0 kubenswrapper[30278]: I0318 18:19:49.944585 30278 scope.go:117] "RemoveContainer" containerID="c27def519c22909a531884df533f9961a3e2ad9da3f520eb969db8c43fcbd006"
Mar 18 18:19:49.946786 master-0 kubenswrapper[30278]: I0318 18:19:49.946755 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="becba0cb-b638-43c2-af99-4269efec025f" containerName="probe"
Mar 18 18:19:49.946840 master-0 kubenswrapper[30278]: I0318 18:19:49.946787 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="becba0cb-b638-43c2-af99-4269efec025f" containerName="cinder-backup"
Mar 18 18:19:49.946840 master-0 kubenswrapper[30278]: I0318 18:19:49.946827 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc87ffe2-a115-459b-a5b1-87c747b1df2a" containerName="cinder-scheduler"
Mar 18 18:19:49.946840 master-0 kubenswrapper[30278]: I0318 18:19:49.946839 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc87ffe2-a115-459b-a5b1-87c747b1df2a" containerName="probe"
Mar 18 18:19:49.946935 master-0 kubenswrapper[30278]: I0318 18:19:49.946891 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="febd1792-9c89-4923-b8b8-0e41a1be1f1c" containerName="dnsmasq-dns"
Mar 18 18:19:49.952467 master-0 kubenswrapper[30278]: I0318 18:19:49.949165 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:49.966311 master-0 kubenswrapper[30278]: I0318 18:19:49.965360 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c74f744c5-h9zsh"]
Mar 18 18:19:49.998337 master-0 kubenswrapper[30278]: I0318 18:19:49.996267 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57kfz\" (UniqueName: \"kubernetes.io/projected/d03211db-1cec-4835-ad52-6c3befa04b20-kube-api-access-57kfz\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.018364 master-0 kubenswrapper[30278]: I0318 18:19:50.001159 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d03211db-1cec-4835-ad52-6c3befa04b20-logs\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.018364 master-0 kubenswrapper[30278]: I0318 18:19:50.001286 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03211db-1cec-4835-ad52-6c3befa04b20-public-tls-certs\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.018364 master-0 kubenswrapper[30278]: I0318 18:19:50.001391 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d03211db-1cec-4835-ad52-6c3befa04b20-scripts\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.018364 master-0 kubenswrapper[30278]: I0318 18:19:50.001468 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03211db-1cec-4835-ad52-6c3befa04b20-internal-tls-certs\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.018364 master-0 kubenswrapper[30278]: I0318 18:19:50.001830 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d03211db-1cec-4835-ad52-6c3befa04b20-config-data\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.018364 master-0 kubenswrapper[30278]: I0318 18:19:50.001854 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03211db-1cec-4835-ad52-6c3befa04b20-combined-ca-bundle\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.018364 master-0 kubenswrapper[30278]: I0318 18:19:50.008161 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-84cf7b8984-2rsvd"]
Mar 18 18:19:50.028302 master-0 kubenswrapper[30278]: I0318 18:19:50.022911 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c74f744c5-h9zsh"]
Mar 18 18:19:50.049309 master-0 kubenswrapper[30278]: I0318 18:19:50.039483 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b9df6-backup-0"]
Mar 18 18:19:50.069979 master-0 kubenswrapper[30278]: I0318 18:19:50.067169 30278 scope.go:117] "RemoveContainer" containerID="0d23360114c86fd68c8778bc6bac919bc1305b85cd38e0a0c0aff1c2f4857c9a"
Mar 18 18:19:50.090906 master-0 kubenswrapper[30278]: I0318 18:19:50.090498 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b9df6-backup-0"]
Mar 18 18:19:50.104939 master-0 kubenswrapper[30278]: I0318 18:19:50.104855 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d03211db-1cec-4835-ad52-6c3befa04b20-config-data\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.104939 master-0 kubenswrapper[30278]: I0318 18:19:50.104938 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03211db-1cec-4835-ad52-6c3befa04b20-combined-ca-bundle\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.105316 master-0 kubenswrapper[30278]: I0318 18:19:50.105039 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57kfz\" (UniqueName: \"kubernetes.io/projected/d03211db-1cec-4835-ad52-6c3befa04b20-kube-api-access-57kfz\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.105316 master-0 kubenswrapper[30278]: I0318 18:19:50.105076 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d03211db-1cec-4835-ad52-6c3befa04b20-logs\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.105316 master-0 kubenswrapper[30278]: I0318 18:19:50.105120 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03211db-1cec-4835-ad52-6c3befa04b20-public-tls-certs\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.105316 master-0 kubenswrapper[30278]: I0318 18:19:50.105184 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d03211db-1cec-4835-ad52-6c3befa04b20-scripts\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.105316 master-0 kubenswrapper[30278]: I0318 18:19:50.105225 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03211db-1cec-4835-ad52-6c3befa04b20-internal-tls-certs\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.105965 master-0 kubenswrapper[30278]: I0318 18:19:50.105604 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d03211db-1cec-4835-ad52-6c3befa04b20-logs\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.108456 master-0 kubenswrapper[30278]: I0318 18:19:50.108005 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b9df6-backup-0"]
Mar 18 18:19:50.116207 master-0 kubenswrapper[30278]: I0318 18:19:50.116150 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d03211db-1cec-4835-ad52-6c3befa04b20-scripts\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.118823 master-0 kubenswrapper[30278]: I0318 18:19:50.118694 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03211db-1cec-4835-ad52-6c3befa04b20-public-tls-certs\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.119454 master-0 kubenswrapper[30278]: I0318 18:19:50.119417 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d03211db-1cec-4835-ad52-6c3befa04b20-config-data\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.130170 master-0 kubenswrapper[30278]: I0318 18:19:50.119762 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d03211db-1cec-4835-ad52-6c3befa04b20-combined-ca-bundle\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.130170 master-0 kubenswrapper[30278]: I0318 18:19:50.120656 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d03211db-1cec-4835-ad52-6c3befa04b20-internal-tls-certs\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.130170 master-0 kubenswrapper[30278]: I0318 18:19:50.120701 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.135616 master-0 kubenswrapper[30278]: I0318 18:19:50.134945 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b9df6-backup-config-data"
Mar 18 18:19:50.142430 master-0 kubenswrapper[30278]: I0318 18:19:50.142220 30278 scope.go:117] "RemoveContainer" containerID="c27def519c22909a531884df533f9961a3e2ad9da3f520eb969db8c43fcbd006"
Mar 18 18:19:50.143748 master-0 kubenswrapper[30278]: I0318 18:19:50.143701 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57kfz\" (UniqueName: \"kubernetes.io/projected/d03211db-1cec-4835-ad52-6c3befa04b20-kube-api-access-57kfz\") pod \"placement-84cf7b8984-2rsvd\" (UID: \"d03211db-1cec-4835-ad52-6c3befa04b20\") " pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.147651 master-0 kubenswrapper[30278]: E0318 18:19:50.147559 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c27def519c22909a531884df533f9961a3e2ad9da3f520eb969db8c43fcbd006\": container with ID starting with c27def519c22909a531884df533f9961a3e2ad9da3f520eb969db8c43fcbd006 not found: ID does not exist" containerID="c27def519c22909a531884df533f9961a3e2ad9da3f520eb969db8c43fcbd006"
Mar 18 18:19:50.147651 master-0 kubenswrapper[30278]: I0318 18:19:50.147625 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c27def519c22909a531884df533f9961a3e2ad9da3f520eb969db8c43fcbd006"} err="failed to get container status \"c27def519c22909a531884df533f9961a3e2ad9da3f520eb969db8c43fcbd006\": rpc error: code = NotFound desc = could not find container \"c27def519c22909a531884df533f9961a3e2ad9da3f520eb969db8c43fcbd006\": container with ID starting with c27def519c22909a531884df533f9961a3e2ad9da3f520eb969db8c43fcbd006 not found: ID does not exist"
Mar 18 18:19:50.147785 master-0 kubenswrapper[30278]: I0318 18:19:50.147683 30278 scope.go:117] "RemoveContainer" containerID="0d23360114c86fd68c8778bc6bac919bc1305b85cd38e0a0c0aff1c2f4857c9a"
Mar 18 18:19:50.155226 master-0 kubenswrapper[30278]: E0318 18:19:50.155143 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d23360114c86fd68c8778bc6bac919bc1305b85cd38e0a0c0aff1c2f4857c9a\": container with ID starting with 0d23360114c86fd68c8778bc6bac919bc1305b85cd38e0a0c0aff1c2f4857c9a not found: ID does not exist" containerID="0d23360114c86fd68c8778bc6bac919bc1305b85cd38e0a0c0aff1c2f4857c9a"
Mar 18 18:19:50.155615 master-0 kubenswrapper[30278]: I0318 18:19:50.155576 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d23360114c86fd68c8778bc6bac919bc1305b85cd38e0a0c0aff1c2f4857c9a"} err="failed to get container status \"0d23360114c86fd68c8778bc6bac919bc1305b85cd38e0a0c0aff1c2f4857c9a\": rpc error: code = NotFound desc = could not find container \"0d23360114c86fd68c8778bc6bac919bc1305b85cd38e0a0c0aff1c2f4857c9a\": container with ID starting with 0d23360114c86fd68c8778bc6bac919bc1305b85cd38e0a0c0aff1c2f4857c9a not found: ID does not exist"
Mar 18 18:19:50.155722 master-0 kubenswrapper[30278]: I0318 18:19:50.155706 30278 scope.go:117] "RemoveContainer" containerID="a1fbdaacf468e074ed53a5c6fd3ee74d06779d8590f5a96d747c9e4f020d50ce"
Mar 18 18:19:50.212009 master-0 kubenswrapper[30278]: I0318 18:19:50.211057 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-var-lib-cinder\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.212009 master-0 kubenswrapper[30278]: I0318 18:19:50.211168 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-etc-iscsi\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.212009 master-0 kubenswrapper[30278]: I0318 18:19:50.211331 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6fb18de-4040-48c7-a1aa-f72075ed3967-scripts\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.212009 master-0 kubenswrapper[30278]: I0318 18:19:50.211393 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-sys\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.212009 master-0 kubenswrapper[30278]: I0318 18:19:50.211424 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-etc-nvme\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.212009 master-0 kubenswrapper[30278]: I0318 18:19:50.211507 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-dev\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.212009 master-0 kubenswrapper[30278]: I0318 18:19:50.211574 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6fb18de-4040-48c7-a1aa-f72075ed3967-config-data-custom\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.212009 master-0 kubenswrapper[30278]: I0318 18:19:50.211619 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6fb18de-4040-48c7-a1aa-f72075ed3967-combined-ca-bundle\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.212009 master-0 kubenswrapper[30278]: I0318 18:19:50.211658 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-var-locks-brick\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.212617 master-0 kubenswrapper[30278]: I0318 18:19:50.212068 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-etc-machine-id\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.212617 master-0 kubenswrapper[30278]: I0318 18:19:50.212343 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-var-locks-cinder\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.212617 master-0 kubenswrapper[30278]: I0318 18:19:50.212415 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-lib-modules\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.212617 master-0 kubenswrapper[30278]: I0318 18:19:50.212514 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-run\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.212617 master-0 kubenswrapper[30278]: I0318 18:19:50.212542 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7qzz\" (UniqueName: \"kubernetes.io/projected/c6fb18de-4040-48c7-a1aa-f72075ed3967-kube-api-access-z7qzz\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.212617 master-0 kubenswrapper[30278]: I0318 18:19:50.212603 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6fb18de-4040-48c7-a1aa-f72075ed3967-config-data\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.239323 master-0 kubenswrapper[30278]: I0318 18:19:50.234486 30278 scope.go:117] "RemoveContainer" containerID="9a02e62a3e62d8fb9454fb31b0378519a508bf2684f327619897e3838d76217b"
Mar 18 18:19:50.279540 master-0 kubenswrapper[30278]: I0318 18:19:50.279155 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-backup-0"]
Mar 18 18:19:50.293542 master-0 kubenswrapper[30278]: I0318 18:19:50.293446 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-84cf7b8984-2rsvd"
Mar 18 18:19:50.301591 master-0 kubenswrapper[30278]: I0318 18:19:50.301545 30278 scope.go:117] "RemoveContainer" containerID="a1fbdaacf468e074ed53a5c6fd3ee74d06779d8590f5a96d747c9e4f020d50ce"
Mar 18 18:19:50.303964 master-0 kubenswrapper[30278]: E0318 18:19:50.303924 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1fbdaacf468e074ed53a5c6fd3ee74d06779d8590f5a96d747c9e4f020d50ce\": container with ID starting with a1fbdaacf468e074ed53a5c6fd3ee74d06779d8590f5a96d747c9e4f020d50ce not found: ID does not exist" containerID="a1fbdaacf468e074ed53a5c6fd3ee74d06779d8590f5a96d747c9e4f020d50ce"
Mar 18 18:19:50.304085 master-0 kubenswrapper[30278]: I0318 18:19:50.304052 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1fbdaacf468e074ed53a5c6fd3ee74d06779d8590f5a96d747c9e4f020d50ce"} err="failed to get container status \"a1fbdaacf468e074ed53a5c6fd3ee74d06779d8590f5a96d747c9e4f020d50ce\": rpc error: code = NotFound desc = could not find container \"a1fbdaacf468e074ed53a5c6fd3ee74d06779d8590f5a96d747c9e4f020d50ce\": container with ID starting with a1fbdaacf468e074ed53a5c6fd3ee74d06779d8590f5a96d747c9e4f020d50ce not found: ID does not exist"
Mar 18 18:19:50.304177 master-0 kubenswrapper[30278]: I0318 18:19:50.304165 30278 scope.go:117] "RemoveContainer" containerID="9a02e62a3e62d8fb9454fb31b0378519a508bf2684f327619897e3838d76217b"
Mar 18 18:19:50.305531 master-0 kubenswrapper[30278]: E0318 18:19:50.305451 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a02e62a3e62d8fb9454fb31b0378519a508bf2684f327619897e3838d76217b\": container with ID starting with 9a02e62a3e62d8fb9454fb31b0378519a508bf2684f327619897e3838d76217b not found: ID does not exist" containerID="9a02e62a3e62d8fb9454fb31b0378519a508bf2684f327619897e3838d76217b"
Mar 18 18:19:50.305611 master-0 kubenswrapper[30278]: I0318 18:19:50.305544 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a02e62a3e62d8fb9454fb31b0378519a508bf2684f327619897e3838d76217b"} err="failed to get container status \"9a02e62a3e62d8fb9454fb31b0378519a508bf2684f327619897e3838d76217b\": rpc error: code = NotFound desc = could not find container \"9a02e62a3e62d8fb9454fb31b0378519a508bf2684f327619897e3838d76217b\": container with ID starting with 9a02e62a3e62d8fb9454fb31b0378519a508bf2684f327619897e3838d76217b not found: ID does not exist"
Mar 18 18:19:50.321235 master-0 kubenswrapper[30278]: I0318 18:19:50.321097 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-volume-lvm-iscsi-0"]
Mar 18 18:19:50.333791 master-0 kubenswrapper[30278]: I0318 18:19:50.333733 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b9df6-scheduler-0"]
Mar 18 18:19:50.336550 master-0 kubenswrapper[30278]: I0318 18:19:50.335774 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6fb18de-4040-48c7-a1aa-f72075ed3967-scripts\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.336550 master-0 kubenswrapper[30278]: I0318 18:19:50.335849 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-sys\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.336880 master-0 kubenswrapper[30278]: I0318 18:19:50.336854 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-etc-nvme\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.337110 master-0 kubenswrapper[30278]: I0318 18:19:50.337094 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-dev\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.337257 master-0 kubenswrapper[30278]: I0318 18:19:50.337240 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6fb18de-4040-48c7-a1aa-f72075ed3967-config-data-custom\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.337365 master-0 kubenswrapper[30278]: I0318 18:19:50.337351 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6fb18de-4040-48c7-a1aa-f72075ed3967-combined-ca-bundle\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.337482 master-0 kubenswrapper[30278]: I0318 18:19:50.337468 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-var-locks-brick\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.337580 master-0 kubenswrapper[30278]: I0318 18:19:50.337568 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-etc-machine-id\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.337752 master-0 kubenswrapper[30278]: I0318 18:19:50.337740 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-var-locks-cinder\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.337835 master-0 kubenswrapper[30278]: I0318 18:19:50.337824 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-lib-modules\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.337953 master-0 kubenswrapper[30278]: I0318 18:19:50.337941 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-run\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.338035 master-0 kubenswrapper[30278]: I0318 18:19:50.338021 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7qzz\" (UniqueName: \"kubernetes.io/projected/c6fb18de-4040-48c7-a1aa-f72075ed3967-kube-api-access-z7qzz\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.338123 master-0 kubenswrapper[30278]: I0318 18:19:50.338111 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6fb18de-4040-48c7-a1aa-f72075ed3967-config-data\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.338262 master-0 kubenswrapper[30278]: I0318 18:19:50.338248 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-var-lib-cinder\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.338384 master-0 kubenswrapper[30278]: I0318 18:19:50.338371 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-etc-iscsi\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.338691 master-0 kubenswrapper[30278]: I0318 18:19:50.338676 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-etc-iscsi\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.338797 master-0 kubenswrapper[30278]: I0318 18:19:50.338785 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-sys\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:50.340735 master-0 kubenswrapper[30278]: I0318 18:19:50.340004 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-etc-machine-id\") pod \"cinder-b9df6-backup-0\" (UID: 
\"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:50.340849 master-0 kubenswrapper[30278]: I0318 18:19:50.340086 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-etc-nvme\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:50.340914 master-0 kubenswrapper[30278]: I0318 18:19:50.340108 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-dev\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:50.341019 master-0 kubenswrapper[30278]: I0318 18:19:50.341002 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-var-locks-cinder\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:50.341117 master-0 kubenswrapper[30278]: I0318 18:19:50.341103 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-lib-modules\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:50.341198 master-0 kubenswrapper[30278]: I0318 18:19:50.341186 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-run\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:50.343015 master-0 kubenswrapper[30278]: I0318 
18:19:50.341941 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6fb18de-4040-48c7-a1aa-f72075ed3967-scripts\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:50.343015 master-0 kubenswrapper[30278]: I0318 18:19:50.342077 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-var-lib-cinder\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:50.343015 master-0 kubenswrapper[30278]: I0318 18:19:50.342132 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c6fb18de-4040-48c7-a1aa-f72075ed3967-var-locks-brick\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:50.348652 master-0 kubenswrapper[30278]: I0318 18:19:50.348077 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c6fb18de-4040-48c7-a1aa-f72075ed3967-config-data-custom\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:50.350996 master-0 kubenswrapper[30278]: I0318 18:19:50.350937 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b9df6-scheduler-0"] Mar 18 18:19:50.359370 master-0 kubenswrapper[30278]: I0318 18:19:50.358846 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6fb18de-4040-48c7-a1aa-f72075ed3967-combined-ca-bundle\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " 
pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:50.362327 master-0 kubenswrapper[30278]: I0318 18:19:50.362254 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6fb18de-4040-48c7-a1aa-f72075ed3967-config-data\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:50.372613 master-0 kubenswrapper[30278]: I0318 18:19:50.372543 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7qzz\" (UniqueName: \"kubernetes.io/projected/c6fb18de-4040-48c7-a1aa-f72075ed3967-kube-api-access-z7qzz\") pod \"cinder-b9df6-backup-0\" (UID: \"c6fb18de-4040-48c7-a1aa-f72075ed3967\") " pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:50.444475 master-0 kubenswrapper[30278]: I0318 18:19:50.444401 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b9df6-scheduler-0"] Mar 18 18:19:50.447400 master-0 kubenswrapper[30278]: I0318 18:19:50.447352 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.450918 master-0 kubenswrapper[30278]: I0318 18:19:50.450880 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b9df6-scheduler-config-data" Mar 18 18:19:50.489384 master-0 kubenswrapper[30278]: I0318 18:19:50.484224 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-scheduler-0"] Mar 18 18:19:50.544949 master-0 kubenswrapper[30278]: I0318 18:19:50.544893 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39fb8c7-403a-4f95-9a6a-e9207bc02408-combined-ca-bundle\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.545951 master-0 kubenswrapper[30278]: I0318 18:19:50.545927 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39fb8c7-403a-4f95-9a6a-e9207bc02408-config-data\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.546247 master-0 kubenswrapper[30278]: I0318 18:19:50.546166 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39fb8c7-403a-4f95-9a6a-e9207bc02408-scripts\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.546414 master-0 kubenswrapper[30278]: I0318 18:19:50.546388 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d39fb8c7-403a-4f95-9a6a-e9207bc02408-etc-machine-id\") pod \"cinder-b9df6-scheduler-0\" (UID: 
\"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.546585 master-0 kubenswrapper[30278]: I0318 18:19:50.546569 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnwdc\" (UniqueName: \"kubernetes.io/projected/d39fb8c7-403a-4f95-9a6a-e9207bc02408-kube-api-access-pnwdc\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.546864 master-0 kubenswrapper[30278]: I0318 18:19:50.546849 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d39fb8c7-403a-4f95-9a6a-e9207bc02408-config-data-custom\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.579306 master-0 kubenswrapper[30278]: I0318 18:19:50.579110 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b9df6-backup-0" Mar 18 18:19:50.614666 master-0 kubenswrapper[30278]: I0318 18:19:50.613486 30278 generic.go:334] "Generic (PLEG): container finished" podID="ade5c277-043b-4e56-bc7c-63961acf67c4" containerID="58151d8c3ff62ab987e3ac88b6bec7ca0ac0420f8b3ac36b27cdb02e07049acc" exitCode=0 Mar 18 18:19:50.614666 master-0 kubenswrapper[30278]: I0318 18:19:50.613558 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-ggb6f" event={"ID":"ade5c277-043b-4e56-bc7c-63961acf67c4","Type":"ContainerDied","Data":"58151d8c3ff62ab987e3ac88b6bec7ca0ac0420f8b3ac36b27cdb02e07049acc"} Mar 18 18:19:50.628028 master-0 kubenswrapper[30278]: I0318 18:19:50.627953 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" event={"ID":"87b1fa77-70e4-4d90-a808-8ec6a7526a12","Type":"ContainerStarted","Data":"fea9213ab84f06d0b831698e77c3f4f20ad8270b7efb5b28a6314abfbce2a48d"} Mar 18 18:19:50.650017 master-0 kubenswrapper[30278]: I0318 18:19:50.649984 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39fb8c7-403a-4f95-9a6a-e9207bc02408-scripts\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.650204 master-0 kubenswrapper[30278]: I0318 18:19:50.650188 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d39fb8c7-403a-4f95-9a6a-e9207bc02408-etc-machine-id\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.650471 master-0 kubenswrapper[30278]: I0318 18:19:50.650456 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnwdc\" (UniqueName: 
\"kubernetes.io/projected/d39fb8c7-403a-4f95-9a6a-e9207bc02408-kube-api-access-pnwdc\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.650605 master-0 kubenswrapper[30278]: I0318 18:19:50.650590 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d39fb8c7-403a-4f95-9a6a-e9207bc02408-config-data-custom\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.650714 master-0 kubenswrapper[30278]: I0318 18:19:50.650700 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39fb8c7-403a-4f95-9a6a-e9207bc02408-combined-ca-bundle\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.650841 master-0 kubenswrapper[30278]: I0318 18:19:50.650826 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39fb8c7-403a-4f95-9a6a-e9207bc02408-config-data\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.656995 master-0 kubenswrapper[30278]: I0318 18:19:50.656960 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d39fb8c7-403a-4f95-9a6a-e9207bc02408-config-data\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.660033 master-0 kubenswrapper[30278]: I0318 18:19:50.659943 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/d39fb8c7-403a-4f95-9a6a-e9207bc02408-etc-machine-id\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.662210 master-0 kubenswrapper[30278]: I0318 18:19:50.661764 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d39fb8c7-403a-4f95-9a6a-e9207bc02408-scripts\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.666376 master-0 kubenswrapper[30278]: I0318 18:19:50.666182 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d39fb8c7-403a-4f95-9a6a-e9207bc02408-config-data-custom\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.671262 master-0 kubenswrapper[30278]: I0318 18:19:50.671192 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d39fb8c7-403a-4f95-9a6a-e9207bc02408-combined-ca-bundle\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.691725 master-0 kubenswrapper[30278]: I0318 18:19:50.685430 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnwdc\" (UniqueName: \"kubernetes.io/projected/d39fb8c7-403a-4f95-9a6a-e9207bc02408-kube-api-access-pnwdc\") pod \"cinder-b9df6-scheduler-0\" (UID: \"d39fb8c7-403a-4f95-9a6a-e9207bc02408\") " pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.873328 master-0 kubenswrapper[30278]: I0318 18:19:50.873126 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:19:50.980725 master-0 kubenswrapper[30278]: I0318 18:19:50.980606 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-84cf7b8984-2rsvd"] Mar 18 18:19:51.095310 master-0 kubenswrapper[30278]: I0318 18:19:51.095225 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="becba0cb-b638-43c2-af99-4269efec025f" path="/var/lib/kubelet/pods/becba0cb-b638-43c2-af99-4269efec025f/volumes" Mar 18 18:19:51.096063 master-0 kubenswrapper[30278]: I0318 18:19:51.096033 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc87ffe2-a115-459b-a5b1-87c747b1df2a" path="/var/lib/kubelet/pods/dc87ffe2-a115-459b-a5b1-87c747b1df2a/volumes" Mar 18 18:19:51.096759 master-0 kubenswrapper[30278]: I0318 18:19:51.096723 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="febd1792-9c89-4923-b8b8-0e41a1be1f1c" path="/var/lib/kubelet/pods/febd1792-9c89-4923-b8b8-0e41a1be1f1c/volumes" Mar 18 18:19:51.454850 master-0 kubenswrapper[30278]: I0318 18:19:51.454772 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-backup-0"] Mar 18 18:19:51.458640 master-0 kubenswrapper[30278]: W0318 18:19:51.458031 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6fb18de_4040_48c7_a1aa_f72075ed3967.slice/crio-736742b1e7b6c0e63a146b9523c417ec3bb330c4b09ab4273b5933b34bbfbbad WatchSource:0}: Error finding container 736742b1e7b6c0e63a146b9523c417ec3bb330c4b09ab4273b5933b34bbfbbad: Status 404 returned error can't find the container with id 736742b1e7b6c0e63a146b9523c417ec3bb330c4b09ab4273b5933b34bbfbbad Mar 18 18:19:51.513812 master-0 kubenswrapper[30278]: I0318 18:19:51.511537 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b9df6-scheduler-0"] Mar 18 18:19:51.694289 master-0 kubenswrapper[30278]: I0318 
18:19:51.692855 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" event={"ID":"87b1fa77-70e4-4d90-a808-8ec6a7526a12","Type":"ContainerStarted","Data":"39a56fb71eb1d464a0117e48db720938db551450212b0f6da08806c998cd1d51"} Mar 18 18:19:51.694289 master-0 kubenswrapper[30278]: I0318 18:19:51.692923 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" event={"ID":"87b1fa77-70e4-4d90-a808-8ec6a7526a12","Type":"ContainerStarted","Data":"ecddb43835b6e294363dcef005f814e2c6deb7113adefe9d11e17398fe1c46a0"} Mar 18 18:19:51.698360 master-0 kubenswrapper[30278]: I0318 18:19:51.695301 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-backup-0" event={"ID":"c6fb18de-4040-48c7-a1aa-f72075ed3967","Type":"ContainerStarted","Data":"736742b1e7b6c0e63a146b9523c417ec3bb330c4b09ab4273b5933b34bbfbbad"} Mar 18 18:19:51.698360 master-0 kubenswrapper[30278]: I0318 18:19:51.696223 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-scheduler-0" event={"ID":"d39fb8c7-403a-4f95-9a6a-e9207bc02408","Type":"ContainerStarted","Data":"f86f64246bd93a49f00de4ece9f57de5783cc63a653d0a26551a762b1eb4f85a"} Mar 18 18:19:51.709260 master-0 kubenswrapper[30278]: I0318 18:19:51.702308 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84cf7b8984-2rsvd" event={"ID":"d03211db-1cec-4835-ad52-6c3befa04b20","Type":"ContainerStarted","Data":"d9613510c1ebb7cc272bfba39894b56649df6f7fb2c0b9aa9934f97632870407"} Mar 18 18:19:51.709260 master-0 kubenswrapper[30278]: I0318 18:19:51.702350 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84cf7b8984-2rsvd" event={"ID":"d03211db-1cec-4835-ad52-6c3befa04b20","Type":"ContainerStarted","Data":"a9c686849be47f55b35927f53d49dc4cafa56c19f5388ac89248d79315c2b97c"} Mar 18 18:19:51.758486 master-0 kubenswrapper[30278]: I0318 18:19:51.749196 30278 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" podStartSLOduration=3.749169519 podStartE2EDuration="3.749169519s" podCreationTimestamp="2026-03-18 18:19:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:51.729788567 +0000 UTC m=+1160.896973162" watchObservedRunningTime="2026-03-18 18:19:51.749169519 +0000 UTC m=+1160.916354114" Mar 18 18:19:52.256743 master-0 kubenswrapper[30278]: I0318 18:19:52.253518 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:52.369032 master-0 kubenswrapper[30278]: I0318 18:19:52.368782 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-combined-ca-bundle\") pod \"ade5c277-043b-4e56-bc7c-63961acf67c4\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " Mar 18 18:19:52.369032 master-0 kubenswrapper[30278]: I0318 18:19:52.368965 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2w7h\" (UniqueName: \"kubernetes.io/projected/ade5c277-043b-4e56-bc7c-63961acf67c4-kube-api-access-t2w7h\") pod \"ade5c277-043b-4e56-bc7c-63961acf67c4\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " Mar 18 18:19:52.369330 master-0 kubenswrapper[30278]: I0318 18:19:52.369067 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-scripts\") pod \"ade5c277-043b-4e56-bc7c-63961acf67c4\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " Mar 18 18:19:52.369330 master-0 kubenswrapper[30278]: I0318 18:19:52.369179 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/ade5c277-043b-4e56-bc7c-63961acf67c4-etc-podinfo\") pod \"ade5c277-043b-4e56-bc7c-63961acf67c4\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " Mar 18 18:19:52.369330 master-0 kubenswrapper[30278]: I0318 18:19:52.369228 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-config-data\") pod \"ade5c277-043b-4e56-bc7c-63961acf67c4\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " Mar 18 18:19:52.369432 master-0 kubenswrapper[30278]: I0318 18:19:52.369332 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ade5c277-043b-4e56-bc7c-63961acf67c4-config-data-merged\") pod \"ade5c277-043b-4e56-bc7c-63961acf67c4\" (UID: \"ade5c277-043b-4e56-bc7c-63961acf67c4\") " Mar 18 18:19:52.396059 master-0 kubenswrapper[30278]: I0318 18:19:52.395924 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ade5c277-043b-4e56-bc7c-63961acf67c4-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "ade5c277-043b-4e56-bc7c-63961acf67c4" (UID: "ade5c277-043b-4e56-bc7c-63961acf67c4"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 18 18:19:52.402739 master-0 kubenswrapper[30278]: I0318 18:19:52.402343 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade5c277-043b-4e56-bc7c-63961acf67c4-kube-api-access-t2w7h" (OuterVolumeSpecName: "kube-api-access-t2w7h") pod "ade5c277-043b-4e56-bc7c-63961acf67c4" (UID: "ade5c277-043b-4e56-bc7c-63961acf67c4"). InnerVolumeSpecName "kube-api-access-t2w7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:19:52.419914 master-0 kubenswrapper[30278]: I0318 18:19:52.419800 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade5c277-043b-4e56-bc7c-63961acf67c4-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "ade5c277-043b-4e56-bc7c-63961acf67c4" (UID: "ade5c277-043b-4e56-bc7c-63961acf67c4"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:19:52.423542 master-0 kubenswrapper[30278]: I0318 18:19:52.423481 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-scripts" (OuterVolumeSpecName: "scripts") pod "ade5c277-043b-4e56-bc7c-63961acf67c4" (UID: "ade5c277-043b-4e56-bc7c-63961acf67c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:52.451608 master-0 kubenswrapper[30278]: I0318 18:19:52.451481 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-config-data" (OuterVolumeSpecName: "config-data") pod "ade5c277-043b-4e56-bc7c-63961acf67c4" (UID: "ade5c277-043b-4e56-bc7c-63961acf67c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:52.473952 master-0 kubenswrapper[30278]: I0318 18:19:52.473888 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2w7h\" (UniqueName: \"kubernetes.io/projected/ade5c277-043b-4e56-bc7c-63961acf67c4-kube-api-access-t2w7h\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:52.473952 master-0 kubenswrapper[30278]: I0318 18:19:52.473936 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:52.473952 master-0 kubenswrapper[30278]: I0318 18:19:52.473965 30278 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/ade5c277-043b-4e56-bc7c-63961acf67c4-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:52.474905 master-0 kubenswrapper[30278]: I0318 18:19:52.473979 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:52.474905 master-0 kubenswrapper[30278]: I0318 18:19:52.473989 30278 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/ade5c277-043b-4e56-bc7c-63961acf67c4-config-data-merged\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:52.500111 master-0 kubenswrapper[30278]: I0318 18:19:52.499482 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ade5c277-043b-4e56-bc7c-63961acf67c4" (UID: "ade5c277-043b-4e56-bc7c-63961acf67c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:19:52.576459 master-0 kubenswrapper[30278]: I0318 18:19:52.576387 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade5c277-043b-4e56-bc7c-63961acf67c4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:19:52.739672 master-0 kubenswrapper[30278]: I0318 18:19:52.739604 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-ggb6f" event={"ID":"ade5c277-043b-4e56-bc7c-63961acf67c4","Type":"ContainerDied","Data":"bd4c079deb0364dce36ec4761cc19856c7970f99a9360c6a26523e3484d1691d"} Mar 18 18:19:52.739672 master-0 kubenswrapper[30278]: I0318 18:19:52.739668 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd4c079deb0364dce36ec4761cc19856c7970f99a9360c6a26523e3484d1691d" Mar 18 18:19:52.739999 master-0 kubenswrapper[30278]: I0318 18:19:52.739767 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-ggb6f" Mar 18 18:19:52.748227 master-0 kubenswrapper[30278]: I0318 18:19:52.747480 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-84cf7b8984-2rsvd" event={"ID":"d03211db-1cec-4835-ad52-6c3befa04b20","Type":"ContainerStarted","Data":"1d6fb9a02153234107beda4a8708c3be0106994a20772ee7cd221508dedc474a"} Mar 18 18:19:52.757286 master-0 kubenswrapper[30278]: I0318 18:19:52.751157 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-84cf7b8984-2rsvd" Mar 18 18:19:52.757286 master-0 kubenswrapper[30278]: I0318 18:19:52.751245 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-84cf7b8984-2rsvd" Mar 18 18:19:52.827314 master-0 kubenswrapper[30278]: I0318 18:19:52.827177 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-84cf7b8984-2rsvd" podStartSLOduration=3.827150723 podStartE2EDuration="3.827150723s" podCreationTimestamp="2026-03-18 18:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:52.793371883 +0000 UTC m=+1161.960556478" watchObservedRunningTime="2026-03-18 18:19:52.827150723 +0000 UTC m=+1161.994335318" Mar 18 18:19:52.829312 master-0 kubenswrapper[30278]: I0318 18:19:52.829211 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-backup-0" event={"ID":"c6fb18de-4040-48c7-a1aa-f72075ed3967","Type":"ContainerStarted","Data":"add569882234750553533e00ab4cd5d36135f675bb172fbb1fe1b751187e4046"} Mar 18 18:19:52.829312 master-0 kubenswrapper[30278]: I0318 18:19:52.829308 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-backup-0" event={"ID":"c6fb18de-4040-48c7-a1aa-f72075ed3967","Type":"ContainerStarted","Data":"3e93415cbc896c47f8e02e14e4ac48f02192f08ec3e4e51400c837e08e95dadb"} Mar 18 
18:19:52.959823 master-0 kubenswrapper[30278]: I0318 18:19:52.954247 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b9df6-backup-0" podStartSLOduration=3.954217476 podStartE2EDuration="3.954217476s" podCreationTimestamp="2026-03-18 18:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:52.865860886 +0000 UTC m=+1162.033045481" watchObservedRunningTime="2026-03-18 18:19:52.954217476 +0000 UTC m=+1162.121402081" Mar 18 18:19:53.383096 master-0 kubenswrapper[30278]: I0318 18:19:53.383031 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-8vlcj"] Mar 18 18:19:53.384066 master-0 kubenswrapper[30278]: E0318 18:19:53.384050 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade5c277-043b-4e56-bc7c-63961acf67c4" containerName="init" Mar 18 18:19:53.384146 master-0 kubenswrapper[30278]: I0318 18:19:53.384136 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade5c277-043b-4e56-bc7c-63961acf67c4" containerName="init" Mar 18 18:19:53.384245 master-0 kubenswrapper[30278]: E0318 18:19:53.384229 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade5c277-043b-4e56-bc7c-63961acf67c4" containerName="ironic-db-sync" Mar 18 18:19:53.384338 master-0 kubenswrapper[30278]: I0318 18:19:53.384326 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade5c277-043b-4e56-bc7c-63961acf67c4" containerName="ironic-db-sync" Mar 18 18:19:53.384707 master-0 kubenswrapper[30278]: I0318 18:19:53.384692 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade5c277-043b-4e56-bc7c-63961acf67c4" containerName="ironic-db-sync" Mar 18 18:19:53.386309 master-0 kubenswrapper[30278]: I0318 18:19:53.386257 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-8vlcj" Mar 18 18:19:53.557519 master-0 kubenswrapper[30278]: I0318 18:19:53.557429 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-8vlcj"] Mar 18 18:19:53.593146 master-0 kubenswrapper[30278]: I0318 18:19:53.592978 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b5223e8-7cb6-425b-a1d8-55c542110842-operator-scripts\") pod \"ironic-inspector-db-create-8vlcj\" (UID: \"8b5223e8-7cb6-425b-a1d8-55c542110842\") " pod="openstack/ironic-inspector-db-create-8vlcj" Mar 18 18:19:53.594901 master-0 kubenswrapper[30278]: I0318 18:19:53.594879 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg7k5\" (UniqueName: \"kubernetes.io/projected/8b5223e8-7cb6-425b-a1d8-55c542110842-kube-api-access-mg7k5\") pod \"ironic-inspector-db-create-8vlcj\" (UID: \"8b5223e8-7cb6-425b-a1d8-55c542110842\") " pod="openstack/ironic-inspector-db-create-8vlcj" Mar 18 18:19:53.704205 master-0 kubenswrapper[30278]: I0318 18:19:53.700345 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b5223e8-7cb6-425b-a1d8-55c542110842-operator-scripts\") pod \"ironic-inspector-db-create-8vlcj\" (UID: \"8b5223e8-7cb6-425b-a1d8-55c542110842\") " pod="openstack/ironic-inspector-db-create-8vlcj" Mar 18 18:19:53.704205 master-0 kubenswrapper[30278]: I0318 18:19:53.700445 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg7k5\" (UniqueName: \"kubernetes.io/projected/8b5223e8-7cb6-425b-a1d8-55c542110842-kube-api-access-mg7k5\") pod \"ironic-inspector-db-create-8vlcj\" (UID: \"8b5223e8-7cb6-425b-a1d8-55c542110842\") " pod="openstack/ironic-inspector-db-create-8vlcj" Mar 18 18:19:53.712292 
master-0 kubenswrapper[30278]: I0318 18:19:53.711081 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b5223e8-7cb6-425b-a1d8-55c542110842-operator-scripts\") pod \"ironic-inspector-db-create-8vlcj\" (UID: \"8b5223e8-7cb6-425b-a1d8-55c542110842\") " pod="openstack/ironic-inspector-db-create-8vlcj" Mar 18 18:19:53.746752 master-0 kubenswrapper[30278]: I0318 18:19:53.746664 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-4c72-account-create-update-hzqhn"] Mar 18 18:19:53.756424 master-0 kubenswrapper[30278]: I0318 18:19:53.748788 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-4c72-account-create-update-hzqhn" Mar 18 18:19:53.762964 master-0 kubenswrapper[30278]: I0318 18:19:53.761659 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg7k5\" (UniqueName: \"kubernetes.io/projected/8b5223e8-7cb6-425b-a1d8-55c542110842-kube-api-access-mg7k5\") pod \"ironic-inspector-db-create-8vlcj\" (UID: \"8b5223e8-7cb6-425b-a1d8-55c542110842\") " pod="openstack/ironic-inspector-db-create-8vlcj" Mar 18 18:19:53.798331 master-0 kubenswrapper[30278]: I0318 18:19:53.793500 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-4c72-account-create-update-hzqhn"] Mar 18 18:19:53.799523 master-0 kubenswrapper[30278]: I0318 18:19:53.799260 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Mar 18 18:19:53.845834 master-0 kubenswrapper[30278]: I0318 18:19:53.839661 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-8vlcj" Mar 18 18:19:53.938331 master-0 kubenswrapper[30278]: I0318 18:19:53.933796 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ecf6f3-3705-4948-bef5-95c5cb62c14a-operator-scripts\") pod \"ironic-inspector-4c72-account-create-update-hzqhn\" (UID: \"a8ecf6f3-3705-4948-bef5-95c5cb62c14a\") " pod="openstack/ironic-inspector-4c72-account-create-update-hzqhn" Mar 18 18:19:53.938331 master-0 kubenswrapper[30278]: I0318 18:19:53.934064 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4qb\" (UniqueName: \"kubernetes.io/projected/a8ecf6f3-3705-4948-bef5-95c5cb62c14a-kube-api-access-kg4qb\") pod \"ironic-inspector-4c72-account-create-update-hzqhn\" (UID: \"a8ecf6f3-3705-4948-bef5-95c5cb62c14a\") " pod="openstack/ironic-inspector-4c72-account-create-update-hzqhn" Mar 18 18:19:53.951317 master-0 kubenswrapper[30278]: I0318 18:19:53.945758 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c4bc7d979-gstcd"] Mar 18 18:19:53.991303 master-0 kubenswrapper[30278]: I0318 18:19:53.984562 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.070217 master-0 kubenswrapper[30278]: I0318 18:19:54.070148 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4bc7d979-gstcd"] Mar 18 18:19:54.128357 master-0 kubenswrapper[30278]: I0318 18:19:54.121807 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-ovsdbserver-sb\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.128357 master-0 kubenswrapper[30278]: I0318 18:19:54.122058 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ecf6f3-3705-4948-bef5-95c5cb62c14a-operator-scripts\") pod \"ironic-inspector-4c72-account-create-update-hzqhn\" (UID: \"a8ecf6f3-3705-4948-bef5-95c5cb62c14a\") " pod="openstack/ironic-inspector-4c72-account-create-update-hzqhn" Mar 18 18:19:54.128357 master-0 kubenswrapper[30278]: I0318 18:19:54.122160 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-dns-svc\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.128357 master-0 kubenswrapper[30278]: I0318 18:19:54.122324 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-config\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.128357 master-0 kubenswrapper[30278]: I0318 
18:19:54.122410 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg4qb\" (UniqueName: \"kubernetes.io/projected/a8ecf6f3-3705-4948-bef5-95c5cb62c14a-kube-api-access-kg4qb\") pod \"ironic-inspector-4c72-account-create-update-hzqhn\" (UID: \"a8ecf6f3-3705-4948-bef5-95c5cb62c14a\") " pod="openstack/ironic-inspector-4c72-account-create-update-hzqhn" Mar 18 18:19:54.128357 master-0 kubenswrapper[30278]: I0318 18:19:54.122629 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-dns-swift-storage-0\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.128357 master-0 kubenswrapper[30278]: I0318 18:19:54.122837 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfg8d\" (UniqueName: \"kubernetes.io/projected/200c8f5b-bd48-4587-9a90-f2cba299bc43-kube-api-access-zfg8d\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.128357 master-0 kubenswrapper[30278]: I0318 18:19:54.122879 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-ovsdbserver-nb\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.149718 master-0 kubenswrapper[30278]: I0318 18:19:54.137057 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ecf6f3-3705-4948-bef5-95c5cb62c14a-operator-scripts\") pod 
\"ironic-inspector-4c72-account-create-update-hzqhn\" (UID: \"a8ecf6f3-3705-4948-bef5-95c5cb62c14a\") " pod="openstack/ironic-inspector-4c72-account-create-update-hzqhn" Mar 18 18:19:54.255307 master-0 kubenswrapper[30278]: I0318 18:19:54.249369 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-scheduler-0" event={"ID":"d39fb8c7-403a-4f95-9a6a-e9207bc02408","Type":"ContainerStarted","Data":"5df981d9717ea95c83fffbc81fb39fe38601bd86223900d28a2cbfdc5c70c173"} Mar 18 18:19:54.273831 master-0 kubenswrapper[30278]: I0318 18:19:54.273387 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-dns-swift-storage-0\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.273831 master-0 kubenswrapper[30278]: I0318 18:19:54.273565 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfg8d\" (UniqueName: \"kubernetes.io/projected/200c8f5b-bd48-4587-9a90-f2cba299bc43-kube-api-access-zfg8d\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.273831 master-0 kubenswrapper[30278]: I0318 18:19:54.273595 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-ovsdbserver-nb\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.273831 master-0 kubenswrapper[30278]: I0318 18:19:54.273645 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-ovsdbserver-sb\") pod 
\"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.273831 master-0 kubenswrapper[30278]: I0318 18:19:54.273769 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-dns-svc\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.273831 master-0 kubenswrapper[30278]: I0318 18:19:54.273825 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-config\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.276373 master-0 kubenswrapper[30278]: I0318 18:19:54.276043 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg4qb\" (UniqueName: \"kubernetes.io/projected/a8ecf6f3-3705-4948-bef5-95c5cb62c14a-kube-api-access-kg4qb\") pod \"ironic-inspector-4c72-account-create-update-hzqhn\" (UID: \"a8ecf6f3-3705-4948-bef5-95c5cb62c14a\") " pod="openstack/ironic-inspector-4c72-account-create-update-hzqhn" Mar 18 18:19:54.277250 master-0 kubenswrapper[30278]: I0318 18:19:54.277154 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:19:54.277952 master-0 kubenswrapper[30278]: I0318 18:19:54.277879 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-dns-swift-storage-0\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.300576 master-0 
kubenswrapper[30278]: I0318 18:19:54.278743 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-ovsdbserver-sb\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.300576 master-0 kubenswrapper[30278]: I0318 18:19:54.279547 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-ovsdbserver-nb\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.300576 master-0 kubenswrapper[30278]: I0318 18:19:54.280395 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-dns-svc\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.300576 master-0 kubenswrapper[30278]: I0318 18:19:54.287093 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-config\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.300576 master-0 kubenswrapper[30278]: I0318 18:19:54.288383 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-c769655c7-ssdxq"] Mar 18 18:19:54.300576 master-0 kubenswrapper[30278]: I0318 18:19:54.291114 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-4c72-account-create-update-hzqhn" Mar 18 18:19:54.300576 master-0 kubenswrapper[30278]: I0318 18:19:54.295408 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:19:54.327307 master-0 kubenswrapper[30278]: I0318 18:19:54.324306 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Mar 18 18:19:54.348311 master-0 kubenswrapper[30278]: I0318 18:19:54.347580 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfg8d\" (UniqueName: \"kubernetes.io/projected/200c8f5b-bd48-4587-9a90-f2cba299bc43-kube-api-access-zfg8d\") pod \"dnsmasq-dns-c4bc7d979-gstcd\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.393389 master-0 kubenswrapper[30278]: I0318 18:19:54.388919 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-c769655c7-ssdxq"] Mar 18 18:19:54.499301 master-0 kubenswrapper[30278]: I0318 18:19:54.492502 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adb370b0-e5b4-4cc8-b1d2-c63363b70615-config\") pod \"ironic-neutron-agent-c769655c7-ssdxq\" (UID: \"adb370b0-e5b4-4cc8-b1d2-c63363b70615\") " pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:19:54.499301 master-0 kubenswrapper[30278]: I0318 18:19:54.492594 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb370b0-e5b4-4cc8-b1d2-c63363b70615-combined-ca-bundle\") pod \"ironic-neutron-agent-c769655c7-ssdxq\" (UID: \"adb370b0-e5b4-4cc8-b1d2-c63363b70615\") " pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:19:54.499301 master-0 
kubenswrapper[30278]: I0318 18:19:54.495379 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4n8j\" (UniqueName: \"kubernetes.io/projected/adb370b0-e5b4-4cc8-b1d2-c63363b70615-kube-api-access-d4n8j\") pod \"ironic-neutron-agent-c769655c7-ssdxq\" (UID: \"adb370b0-e5b4-4cc8-b1d2-c63363b70615\") " pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:19:54.508572 master-0 kubenswrapper[30278]: I0318 18:19:54.508231 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-f986975b-8wc5r"] Mar 18 18:19:54.544214 master-0 kubenswrapper[30278]: I0318 18:19:54.543862 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.557773 master-0 kubenswrapper[30278]: I0318 18:19:54.556768 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-f986975b-8wc5r"] Mar 18 18:19:54.557773 master-0 kubenswrapper[30278]: I0318 18:19:54.557129 30278 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="3a06b9e0-a605-44e2-b6e2-63b15a5bb700" containerName="galera" probeResult="failure" output="command timed out" Mar 18 18:19:54.557773 master-0 kubenswrapper[30278]: I0318 18:19:54.557672 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Mar 18 18:19:54.563656 master-0 kubenswrapper[30278]: I0318 18:19:54.558615 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:19:54.563656 master-0 kubenswrapper[30278]: I0318 18:19:54.563543 30278 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="3a06b9e0-a605-44e2-b6e2-63b15a5bb700" containerName="galera" probeResult="failure" output="command timed out" Mar 18 18:19:54.563959 master-0 kubenswrapper[30278]: I0318 18:19:54.563937 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Mar 18 18:19:54.564303 master-0 kubenswrapper[30278]: I0318 18:19:54.564245 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Mar 18 18:19:54.564454 master-0 kubenswrapper[30278]: I0318 18:19:54.564425 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 18 18:19:54.601533 master-0 kubenswrapper[30278]: I0318 18:19:54.601448 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4n8j\" (UniqueName: \"kubernetes.io/projected/adb370b0-e5b4-4cc8-b1d2-c63363b70615-kube-api-access-d4n8j\") pod \"ironic-neutron-agent-c769655c7-ssdxq\" (UID: \"adb370b0-e5b4-4cc8-b1d2-c63363b70615\") " pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:19:54.601533 master-0 kubenswrapper[30278]: I0318 18:19:54.601542 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/adb370b0-e5b4-4cc8-b1d2-c63363b70615-config\") pod \"ironic-neutron-agent-c769655c7-ssdxq\" (UID: \"adb370b0-e5b4-4cc8-b1d2-c63363b70615\") " pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:19:54.601850 master-0 kubenswrapper[30278]: I0318 18:19:54.601602 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/adb370b0-e5b4-4cc8-b1d2-c63363b70615-combined-ca-bundle\") pod \"ironic-neutron-agent-c769655c7-ssdxq\" (UID: \"adb370b0-e5b4-4cc8-b1d2-c63363b70615\") " pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:19:54.604406 master-0 kubenswrapper[30278]: I0318 18:19:54.603058 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Mar 18 18:19:54.613637 master-0 kubenswrapper[30278]: I0318 18:19:54.613583 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/adb370b0-e5b4-4cc8-b1d2-c63363b70615-config\") pod \"ironic-neutron-agent-c769655c7-ssdxq\" (UID: \"adb370b0-e5b4-4cc8-b1d2-c63363b70615\") " pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:19:54.693626 master-0 kubenswrapper[30278]: I0318 18:19:54.692849 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb370b0-e5b4-4cc8-b1d2-c63363b70615-combined-ca-bundle\") pod \"ironic-neutron-agent-c769655c7-ssdxq\" (UID: \"adb370b0-e5b4-4cc8-b1d2-c63363b70615\") " pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:19:54.702426 master-0 kubenswrapper[30278]: I0318 18:19:54.702291 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4n8j\" (UniqueName: \"kubernetes.io/projected/adb370b0-e5b4-4cc8-b1d2-c63363b70615-kube-api-access-d4n8j\") pod \"ironic-neutron-agent-c769655c7-ssdxq\" (UID: \"adb370b0-e5b4-4cc8-b1d2-c63363b70615\") " pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:19:54.721162 master-0 kubenswrapper[30278]: I0318 18:19:54.721108 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data-merged\") pod \"ironic-f986975b-8wc5r\" (UID: 
\"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.721265 master-0 kubenswrapper[30278]: I0318 18:19:54.721195 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f25d0677-228e-4b99-bc1f-abbbceebffc4-logs\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.721265 master-0 kubenswrapper[30278]: I0318 18:19:54.721225 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-scripts\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.721377 master-0 kubenswrapper[30278]: I0318 18:19:54.721352 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.721428 master-0 kubenswrapper[30278]: I0318 18:19:54.721416 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-combined-ca-bundle\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.722756 master-0 kubenswrapper[30278]: I0318 18:19:54.722711 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xscw8\" (UniqueName: \"kubernetes.io/projected/f25d0677-228e-4b99-bc1f-abbbceebffc4-kube-api-access-xscw8\") pod 
\"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.723959 master-0 kubenswrapper[30278]: I0318 18:19:54.723939 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f25d0677-228e-4b99-bc1f-abbbceebffc4-etc-podinfo\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.724565 master-0 kubenswrapper[30278]: I0318 18:19:54.724535 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data-custom\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.869862 master-0 kubenswrapper[30278]: I0318 18:19:54.868922 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data-merged\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.869862 master-0 kubenswrapper[30278]: I0318 18:19:54.869035 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f25d0677-228e-4b99-bc1f-abbbceebffc4-logs\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.869862 master-0 kubenswrapper[30278]: I0318 18:19:54.869773 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-scripts\") pod \"ironic-f986975b-8wc5r\" (UID: 
\"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.870308 master-0 kubenswrapper[30278]: I0318 18:19:54.870005 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.870308 master-0 kubenswrapper[30278]: I0318 18:19:54.870123 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-combined-ca-bundle\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.870308 master-0 kubenswrapper[30278]: I0318 18:19:54.870217 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xscw8\" (UniqueName: \"kubernetes.io/projected/f25d0677-228e-4b99-bc1f-abbbceebffc4-kube-api-access-xscw8\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.870475 master-0 kubenswrapper[30278]: I0318 18:19:54.870351 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f25d0677-228e-4b99-bc1f-abbbceebffc4-etc-podinfo\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:19:54.870475 master-0 kubenswrapper[30278]: I0318 18:19:54.870424 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data-custom\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") 
" pod="openstack/ironic-f986975b-8wc5r"
Mar 18 18:19:54.875890 master-0 kubenswrapper[30278]: I0318 18:19:54.875231 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f25d0677-228e-4b99-bc1f-abbbceebffc4-logs\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r"
Mar 18 18:19:54.875890 master-0 kubenswrapper[30278]: I0318 18:19:54.875473 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data-merged\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r"
Mar 18 18:19:54.900908 master-0 kubenswrapper[30278]: I0318 18:19:54.900834 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data-custom\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r"
Mar 18 18:19:54.902454 master-0 kubenswrapper[30278]: I0318 18:19:54.902376 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-scripts\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r"
Mar 18 18:19:54.905606 master-0 kubenswrapper[30278]: I0318 18:19:54.905555 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-combined-ca-bundle\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r"
Mar 18 18:19:54.926939 master-0 kubenswrapper[30278]: I0318 18:19:54.926079 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f25d0677-228e-4b99-bc1f-abbbceebffc4-etc-podinfo\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r"
Mar 18 18:19:54.929216 master-0 kubenswrapper[30278]: I0318 18:19:54.929034 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r"
Mar 18 18:19:54.993306 master-0 kubenswrapper[30278]: I0318 18:19:54.993205 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq"
Mar 18 18:19:55.269022 master-0 kubenswrapper[30278]: I0318 18:19:55.267056 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xscw8\" (UniqueName: \"kubernetes.io/projected/f25d0677-228e-4b99-bc1f-abbbceebffc4-kube-api-access-xscw8\") pod \"ironic-f986975b-8wc5r\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " pod="openstack/ironic-f986975b-8wc5r"
Mar 18 18:19:55.287886 master-0 kubenswrapper[30278]: I0318 18:19:55.287729 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-8vlcj"]
Mar 18 18:19:55.312009 master-0 kubenswrapper[30278]: I0318 18:19:55.311943 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-4c72-account-create-update-hzqhn"]
Mar 18 18:19:55.313503 master-0 kubenswrapper[30278]: I0318 18:19:55.313446 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b9df6-scheduler-0" event={"ID":"d39fb8c7-403a-4f95-9a6a-e9207bc02408","Type":"ContainerStarted","Data":"ca0aab2ceee7df5496e163102445f6bccbe7279a980957ceed95a59f3186ad40"}
Mar 18 18:19:55.359830 master-0 kubenswrapper[30278]: W0318 18:19:55.359748 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8ecf6f3_3705_4948_bef5_95c5cb62c14a.slice/crio-01887e511688f4bd789c7e4a99f061cc51a927a482c90eb6b195e402d391b672 WatchSource:0}: Error finding container 01887e511688f4bd789c7e4a99f061cc51a927a482c90eb6b195e402d391b672: Status 404 returned error can't find the container with id 01887e511688f4bd789c7e4a99f061cc51a927a482c90eb6b195e402d391b672
Mar 18 18:19:55.466335 master-0 kubenswrapper[30278]: I0318 18:19:55.456228 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b9df6-scheduler-0" podStartSLOduration=5.456201186 podStartE2EDuration="5.456201186s" podCreationTimestamp="2026-03-18 18:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:55.363048417 +0000 UTC m=+1164.530233012" watchObservedRunningTime="2026-03-18 18:19:55.456201186 +0000 UTC m=+1164.623385781"
Mar 18 18:19:55.527399 master-0 kubenswrapper[30278]: I0318 18:19:55.523881 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c4bc7d979-gstcd"]
Mar 18 18:19:55.536417 master-0 kubenswrapper[30278]: I0318 18:19:55.529310 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-f986975b-8wc5r"
Mar 18 18:19:55.585329 master-0 kubenswrapper[30278]: W0318 18:19:55.577932 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod200c8f5b_bd48_4587_9a90_f2cba299bc43.slice/crio-3eb89e297f5fe07b5fb7fe70b4a39e1c66f4591ab967899363eca137e1fd0631 WatchSource:0}: Error finding container 3eb89e297f5fe07b5fb7fe70b4a39e1c66f4591ab967899363eca137e1fd0631: Status 404 returned error can't find the container with id 3eb89e297f5fe07b5fb7fe70b4a39e1c66f4591ab967899363eca137e1fd0631
Mar 18 18:19:55.585329 master-0 kubenswrapper[30278]: I0318 18:19:55.583621 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b9df6-backup-0"
Mar 18 18:19:55.833312 master-0 kubenswrapper[30278]: I0318 18:19:55.829975 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-c769655c7-ssdxq"]
Mar 18 18:19:55.888614 master-0 kubenswrapper[30278]: I0318 18:19:55.881384 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b9df6-scheduler-0"
Mar 18 18:19:56.347357 master-0 kubenswrapper[30278]: I0318 18:19:56.346737 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-f986975b-8wc5r"]
Mar 18 18:19:56.375937 master-0 kubenswrapper[30278]: I0318 18:19:56.375748 30278 generic.go:334] "Generic (PLEG): container finished" podID="200c8f5b-bd48-4587-9a90-f2cba299bc43" containerID="4010cf41c57c95db0afb3d37055ba24fc11b7ca3457f8cd04f80bd1c9958414f" exitCode=0
Mar 18 18:19:56.375937 master-0 kubenswrapper[30278]: I0318 18:19:56.375884 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" event={"ID":"200c8f5b-bd48-4587-9a90-f2cba299bc43","Type":"ContainerDied","Data":"4010cf41c57c95db0afb3d37055ba24fc11b7ca3457f8cd04f80bd1c9958414f"}
Mar 18 18:19:56.375937 master-0 kubenswrapper[30278]: I0318 18:19:56.375916 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" event={"ID":"200c8f5b-bd48-4587-9a90-f2cba299bc43","Type":"ContainerStarted","Data":"3eb89e297f5fe07b5fb7fe70b4a39e1c66f4591ab967899363eca137e1fd0631"}
Mar 18 18:19:56.383587 master-0 kubenswrapper[30278]: W0318 18:19:56.382394 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf25d0677_228e_4b99_bc1f_abbbceebffc4.slice/crio-e5abe77015db00f9866381dd21c28369ef02e6348c06e3858d8e73c8e5276062 WatchSource:0}: Error finding container e5abe77015db00f9866381dd21c28369ef02e6348c06e3858d8e73c8e5276062: Status 404 returned error can't find the container with id e5abe77015db00f9866381dd21c28369ef02e6348c06e3858d8e73c8e5276062
Mar 18 18:19:56.397870 master-0 kubenswrapper[30278]: I0318 18:19:56.397815 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-8vlcj" event={"ID":"8b5223e8-7cb6-425b-a1d8-55c542110842","Type":"ContainerStarted","Data":"866465c4b227de8767dcd8a711d636f96161d7250a2e645a6a5840df4b739ae1"}
Mar 18 18:19:56.397870 master-0 kubenswrapper[30278]: I0318 18:19:56.397873 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-8vlcj" event={"ID":"8b5223e8-7cb6-425b-a1d8-55c542110842","Type":"ContainerStarted","Data":"4dcfef5f4a517936186cd282bb893b5687b20ec5d28b55886d25968832b879ba"}
Mar 18 18:19:56.440438 master-0 kubenswrapper[30278]: I0318 18:19:56.428494 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" event={"ID":"adb370b0-e5b4-4cc8-b1d2-c63363b70615","Type":"ContainerStarted","Data":"c83fe408c5a9f9dcb27020723f6dc9694994a8cc2d217bb3f944679400989649"}
Mar 18 18:19:56.481301 master-0 kubenswrapper[30278]: I0318 18:19:56.480927 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-4c72-account-create-update-hzqhn" event={"ID":"a8ecf6f3-3705-4948-bef5-95c5cb62c14a","Type":"ContainerStarted","Data":"01887e511688f4bd789c7e4a99f061cc51a927a482c90eb6b195e402d391b672"}
Mar 18 18:19:56.499114 master-0 kubenswrapper[30278]: I0318 18:19:56.497963 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-create-8vlcj" podStartSLOduration=3.497926945 podStartE2EDuration="3.497926945s" podCreationTimestamp="2026-03-18 18:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:56.474922985 +0000 UTC m=+1165.642107580" watchObservedRunningTime="2026-03-18 18:19:56.497926945 +0000 UTC m=+1165.665111550"
Mar 18 18:19:56.522850 master-0 kubenswrapper[30278]: I0318 18:19:56.522652 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-4c72-account-create-update-hzqhn" podStartSLOduration=3.52262626 podStartE2EDuration="3.52262626s" podCreationTimestamp="2026-03-18 18:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:56.51076086 +0000 UTC m=+1165.677945455" watchObservedRunningTime="2026-03-18 18:19:56.52262626 +0000 UTC m=+1165.689810855"
Mar 18 18:19:56.597417 master-0 kubenswrapper[30278]: I0318 18:19:56.596482 30278 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-b9df6-api-0" podUID="631bd59b-37e5-49a9-98de-41b91dd3425a" containerName="cinder-api" probeResult="failure" output="Get \"https://10.128.0.231:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 18:19:57.519889 master-0 kubenswrapper[30278]: I0318 18:19:57.519739 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" event={"ID":"200c8f5b-bd48-4587-9a90-f2cba299bc43","Type":"ContainerStarted","Data":"0b1a11717d99f933e5d47e9dbfcf741d5937fc13059549c8ed78a37fa0015b7f"}
Mar 18 18:19:57.522107 master-0 kubenswrapper[30278]: I0318 18:19:57.522064 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c4bc7d979-gstcd"
Mar 18 18:19:57.533679 master-0 kubenswrapper[30278]: I0318 18:19:57.533608 30278 generic.go:334] "Generic (PLEG): container finished" podID="8b5223e8-7cb6-425b-a1d8-55c542110842" containerID="866465c4b227de8767dcd8a711d636f96161d7250a2e645a6a5840df4b739ae1" exitCode=0
Mar 18 18:19:57.533872 master-0 kubenswrapper[30278]: I0318 18:19:57.533709 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-8vlcj" event={"ID":"8b5223e8-7cb6-425b-a1d8-55c542110842","Type":"ContainerDied","Data":"866465c4b227de8767dcd8a711d636f96161d7250a2e645a6a5840df4b739ae1"}
Mar 18 18:19:57.572067 master-0 kubenswrapper[30278]: I0318 18:19:57.571680 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f986975b-8wc5r" event={"ID":"f25d0677-228e-4b99-bc1f-abbbceebffc4","Type":"ContainerStarted","Data":"e5abe77015db00f9866381dd21c28369ef02e6348c06e3858d8e73c8e5276062"}
Mar 18 18:19:57.577875 master-0 kubenswrapper[30278]: I0318 18:19:57.574564 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" podStartSLOduration=4.574551553 podStartE2EDuration="4.574551553s" podCreationTimestamp="2026-03-18 18:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:19:57.57110335 +0000 UTC m=+1166.738287945" watchObservedRunningTime="2026-03-18 18:19:57.574551553 +0000 UTC m=+1166.741736148"
Mar 18 18:19:57.587302 master-0 kubenswrapper[30278]: I0318 18:19:57.583540 30278 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-b9df6-api-0" podUID="631bd59b-37e5-49a9-98de-41b91dd3425a" containerName="cinder-api" probeResult="failure" output="Get \"https://10.128.0.231:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 18:19:57.592646 master-0 kubenswrapper[30278]: I0318 18:19:57.591052 30278 generic.go:334] "Generic (PLEG): container finished" podID="a8ecf6f3-3705-4948-bef5-95c5cb62c14a" containerID="f0291bd125b26470dba99aa26e67c46f71bd86f9799a78269d6ec0dcd026d919" exitCode=0
Mar 18 18:19:57.592646 master-0 kubenswrapper[30278]: I0318 18:19:57.592501 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-4c72-account-create-update-hzqhn" event={"ID":"a8ecf6f3-3705-4948-bef5-95c5cb62c14a","Type":"ContainerDied","Data":"f0291bd125b26470dba99aa26e67c46f71bd86f9799a78269d6ec0dcd026d919"}
Mar 18 18:19:57.623059 master-0 kubenswrapper[30278]: I0318 18:19:57.617996 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"]
Mar 18 18:19:57.637191 master-0 kubenswrapper[30278]: I0318 18:19:57.630780 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.639033 master-0 kubenswrapper[30278]: I0318 18:19:57.638980 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts"
Mar 18 18:19:57.639306 master-0 kubenswrapper[30278]: I0318 18:19:57.639285 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data"
Mar 18 18:19:57.756353 master-0 kubenswrapper[30278]: I0318 18:19:57.756174 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9af6002-27e3-414d-b61a-dc0f7d99768b-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.756353 master-0 kubenswrapper[30278]: I0318 18:19:57.756255 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9af6002-27e3-414d-b61a-dc0f7d99768b-config-data\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.756353 master-0 kubenswrapper[30278]: I0318 18:19:57.756296 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e9af6002-27e3-414d-b61a-dc0f7d99768b-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.756353 master-0 kubenswrapper[30278]: I0318 18:19:57.756351 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e9af6002-27e3-414d-b61a-dc0f7d99768b-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.756353 master-0 kubenswrapper[30278]: I0318 18:19:57.756367 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2v6w\" (UniqueName: \"kubernetes.io/projected/e9af6002-27e3-414d-b61a-dc0f7d99768b-kube-api-access-h2v6w\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.758189 master-0 kubenswrapper[30278]: I0318 18:19:57.756418 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9af6002-27e3-414d-b61a-dc0f7d99768b-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.758189 master-0 kubenswrapper[30278]: I0318 18:19:57.756473 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9af6002-27e3-414d-b61a-dc0f7d99768b-scripts\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.758189 master-0 kubenswrapper[30278]: I0318 18:19:57.756630 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-59d3e2de-3f8a-4884-831d-0558dfb36094\" (UniqueName: \"kubernetes.io/csi/topolvm.io^979817d4-f547-4e21-b646-bce0404b96e1\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.766175 master-0 kubenswrapper[30278]: I0318 18:19:57.765953 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"]
Mar 18 18:19:57.859748 master-0 kubenswrapper[30278]: I0318 18:19:57.859484 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-59d3e2de-3f8a-4884-831d-0558dfb36094\" (UniqueName: \"kubernetes.io/csi/topolvm.io^979817d4-f547-4e21-b646-bce0404b96e1\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.860967 master-0 kubenswrapper[30278]: I0318 18:19:57.860943 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9af6002-27e3-414d-b61a-dc0f7d99768b-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.861039 master-0 kubenswrapper[30278]: I0318 18:19:57.860979 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9af6002-27e3-414d-b61a-dc0f7d99768b-config-data\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.861039 master-0 kubenswrapper[30278]: I0318 18:19:57.861003 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e9af6002-27e3-414d-b61a-dc0f7d99768b-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.861116 master-0 kubenswrapper[30278]: I0318 18:19:57.861040 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e9af6002-27e3-414d-b61a-dc0f7d99768b-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.861116 master-0 kubenswrapper[30278]: I0318 18:19:57.861061 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2v6w\" (UniqueName: \"kubernetes.io/projected/e9af6002-27e3-414d-b61a-dc0f7d99768b-kube-api-access-h2v6w\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.861116 master-0 kubenswrapper[30278]: I0318 18:19:57.861094 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9af6002-27e3-414d-b61a-dc0f7d99768b-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.865973 master-0 kubenswrapper[30278]: I0318 18:19:57.861135 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9af6002-27e3-414d-b61a-dc0f7d99768b-scripts\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.865973 master-0 kubenswrapper[30278]: I0318 18:19:57.861961 30278 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 18 18:19:57.865973 master-0 kubenswrapper[30278]: I0318 18:19:57.861984 30278 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-59d3e2de-3f8a-4884-831d-0558dfb36094\" (UniqueName: \"kubernetes.io/csi/topolvm.io^979817d4-f547-4e21-b646-bce0404b96e1\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/e5c9a594d0e2897336fda70f252fe1b3939d5dbb01e079ce1ff3214fe0d5417a/globalmount\"" pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.865973 master-0 kubenswrapper[30278]: I0318 18:19:57.863614 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e9af6002-27e3-414d-b61a-dc0f7d99768b-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.872066 master-0 kubenswrapper[30278]: I0318 18:19:57.872010 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9af6002-27e3-414d-b61a-dc0f7d99768b-scripts\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.881052 master-0 kubenswrapper[30278]: I0318 18:19:57.880990 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9af6002-27e3-414d-b61a-dc0f7d99768b-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.883189 master-0 kubenswrapper[30278]: I0318 18:19:57.883150 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2v6w\" (UniqueName: \"kubernetes.io/projected/e9af6002-27e3-414d-b61a-dc0f7d99768b-kube-api-access-h2v6w\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.897005 master-0 kubenswrapper[30278]: I0318 18:19:57.884403 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9af6002-27e3-414d-b61a-dc0f7d99768b-config-data\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.899809 master-0 kubenswrapper[30278]: I0318 18:19:57.899749 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9af6002-27e3-414d-b61a-dc0f7d99768b-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:57.910363 master-0 kubenswrapper[30278]: I0318 18:19:57.906016 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e9af6002-27e3-414d-b61a-dc0f7d99768b-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:59.343775 master-0 kubenswrapper[30278]: I0318 18:19:59.342932 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-5cfb4bd768-f4ww4"]
Mar 18 18:19:59.347103 master-0 kubenswrapper[30278]: I0318 18:19:59.347041 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.356225 master-0 kubenswrapper[30278]: I0318 18:19:59.351145 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc"
Mar 18 18:19:59.356225 master-0 kubenswrapper[30278]: I0318 18:19:59.351414 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc"
Mar 18 18:19:59.373600 master-0 kubenswrapper[30278]: I0318 18:19:59.373506 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5cfb4bd768-f4ww4"]
Mar 18 18:19:59.485254 master-0 kubenswrapper[30278]: I0318 18:19:59.485181 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-59d3e2de-3f8a-4884-831d-0558dfb36094\" (UniqueName: \"kubernetes.io/csi/topolvm.io^979817d4-f547-4e21-b646-bce0404b96e1\") pod \"ironic-conductor-0\" (UID: \"e9af6002-27e3-414d-b61a-dc0f7d99768b\") " pod="openstack/ironic-conductor-0"
Mar 18 18:19:59.525295 master-0 kubenswrapper[30278]: I0318 18:19:59.522770 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-public-tls-certs\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.525295 master-0 kubenswrapper[30278]: I0318 18:19:59.522865 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ccmz\" (UniqueName: \"kubernetes.io/projected/8794f0fc-2223-4bd7-aed5-a219b5f427e0-kube-api-access-5ccmz\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.525295 master-0 kubenswrapper[30278]: I0318 18:19:59.522906 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/8794f0fc-2223-4bd7-aed5-a219b5f427e0-etc-podinfo\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.525295 master-0 kubenswrapper[30278]: I0318 18:19:59.522927 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-scripts\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.525295 master-0 kubenswrapper[30278]: I0318 18:19:59.522966 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-config-data-custom\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.525295 master-0 kubenswrapper[30278]: I0318 18:19:59.522992 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8794f0fc-2223-4bd7-aed5-a219b5f427e0-logs\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.525295 master-0 kubenswrapper[30278]: I0318 18:19:59.523093 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8794f0fc-2223-4bd7-aed5-a219b5f427e0-config-data-merged\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.525295 master-0 kubenswrapper[30278]: I0318 18:19:59.523117 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-config-data\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.525295 master-0 kubenswrapper[30278]: I0318 18:19:59.523149 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-combined-ca-bundle\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.525295 master-0 kubenswrapper[30278]: I0318 18:19:59.523251 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-internal-tls-certs\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.533720 master-0 kubenswrapper[30278]: I0318 18:19:59.533660 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0"
Mar 18 18:19:59.628399 master-0 kubenswrapper[30278]: I0318 18:19:59.628326 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-internal-tls-certs\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.628621 master-0 kubenswrapper[30278]: I0318 18:19:59.628437 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-public-tls-certs\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.630064 master-0 kubenswrapper[30278]: I0318 18:19:59.628849 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ccmz\" (UniqueName: \"kubernetes.io/projected/8794f0fc-2223-4bd7-aed5-a219b5f427e0-kube-api-access-5ccmz\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.630064 master-0 kubenswrapper[30278]: I0318 18:19:59.628893 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/8794f0fc-2223-4bd7-aed5-a219b5f427e0-etc-podinfo\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.630064 master-0 kubenswrapper[30278]: I0318 18:19:59.628914 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-scripts\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.630064 master-0 kubenswrapper[30278]: I0318 18:19:59.628969 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-config-data-custom\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.630064 master-0 kubenswrapper[30278]: I0318 18:19:59.629003 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8794f0fc-2223-4bd7-aed5-a219b5f427e0-logs\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.630064 master-0 kubenswrapper[30278]: I0318 18:19:59.629086 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8794f0fc-2223-4bd7-aed5-a219b5f427e0-config-data-merged\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.630064 master-0 kubenswrapper[30278]: I0318 18:19:59.629119 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-config-data\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.640657 master-0 kubenswrapper[30278]: I0318 18:19:59.632073 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-combined-ca-bundle\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.640657 master-0 kubenswrapper[30278]: I0318 18:19:59.634999 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-scripts\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.641662 master-0 kubenswrapper[30278]: I0318 18:19:59.641537 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8794f0fc-2223-4bd7-aed5-a219b5f427e0-logs\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.641662 master-0 kubenswrapper[30278]: I0318 18:19:59.641533 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/8794f0fc-2223-4bd7-aed5-a219b5f427e0-config-data-merged\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.645555 master-0 kubenswrapper[30278]: I0318 18:19:59.645454 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-config-data-custom\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.658111 master-0 kubenswrapper[30278]: I0318 18:19:59.658040 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-combined-ca-bundle\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.658111 master-0 kubenswrapper[30278]: I0318 18:19:59.658088 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-4c72-account-create-update-hzqhn" event={"ID":"a8ecf6f3-3705-4948-bef5-95c5cb62c14a","Type":"ContainerDied","Data":"01887e511688f4bd789c7e4a99f061cc51a927a482c90eb6b195e402d391b672"}
Mar 18 18:19:59.658410 master-0 kubenswrapper[30278]: I0318 18:19:59.658141 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01887e511688f4bd789c7e4a99f061cc51a927a482c90eb6b195e402d391b672"
Mar 18 18:19:59.659391 master-0 kubenswrapper[30278]: I0318 18:19:59.659345 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-internal-tls-certs\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.660922 master-0 kubenswrapper[30278]: I0318 18:19:59.660778 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-config-data\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.661021 master-0 kubenswrapper[30278]: I0318 18:19:59.660985 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/8794f0fc-2223-4bd7-aed5-a219b5f427e0-etc-podinfo\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.661315 master-0 kubenswrapper[30278]: I0318 18:19:59.661241 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8794f0fc-2223-4bd7-aed5-a219b5f427e0-public-tls-certs\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.677782 master-0 kubenswrapper[30278]: I0318 18:19:59.677662 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-8vlcj" event={"ID":"8b5223e8-7cb6-425b-a1d8-55c542110842","Type":"ContainerDied","Data":"4dcfef5f4a517936186cd282bb893b5687b20ec5d28b55886d25968832b879ba"}
Mar 18 18:19:59.678408 master-0 kubenswrapper[30278]: I0318 18:19:59.678353 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dcfef5f4a517936186cd282bb893b5687b20ec5d28b55886d25968832b879ba"
Mar 18 18:19:59.678755 master-0 kubenswrapper[30278]: I0318 18:19:59.678678 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ccmz\" (UniqueName: \"kubernetes.io/projected/8794f0fc-2223-4bd7-aed5-a219b5f427e0-kube-api-access-5ccmz\") pod \"ironic-5cfb4bd768-f4ww4\" (UID: \"8794f0fc-2223-4bd7-aed5-a219b5f427e0\") " pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.680171 master-0 kubenswrapper[30278]: I0318 18:19:59.679868 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-5cfb4bd768-f4ww4"
Mar 18 18:19:59.890878 master-0 kubenswrapper[30278]: I0318 18:19:59.890805 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-4c72-account-create-update-hzqhn" Mar 18 18:19:59.974617 master-0 kubenswrapper[30278]: I0318 18:19:59.974543 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b9df6-volume-lvm-iscsi-0" Mar 18 18:20:00.059298 master-0 kubenswrapper[30278]: I0318 18:20:00.053247 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg4qb\" (UniqueName: \"kubernetes.io/projected/a8ecf6f3-3705-4948-bef5-95c5cb62c14a-kube-api-access-kg4qb\") pod \"a8ecf6f3-3705-4948-bef5-95c5cb62c14a\" (UID: \"a8ecf6f3-3705-4948-bef5-95c5cb62c14a\") " Mar 18 18:20:00.059298 master-0 kubenswrapper[30278]: I0318 18:20:00.053708 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ecf6f3-3705-4948-bef5-95c5cb62c14a-operator-scripts\") pod \"a8ecf6f3-3705-4948-bef5-95c5cb62c14a\" (UID: \"a8ecf6f3-3705-4948-bef5-95c5cb62c14a\") " Mar 18 18:20:00.059298 master-0 kubenswrapper[30278]: I0318 18:20:00.055387 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8ecf6f3-3705-4948-bef5-95c5cb62c14a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8ecf6f3-3705-4948-bef5-95c5cb62c14a" (UID: "a8ecf6f3-3705-4948-bef5-95c5cb62c14a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:20:00.063412 master-0 kubenswrapper[30278]: I0318 18:20:00.062539 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-8vlcj" Mar 18 18:20:00.089727 master-0 kubenswrapper[30278]: I0318 18:20:00.086996 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ecf6f3-3705-4948-bef5-95c5cb62c14a-kube-api-access-kg4qb" (OuterVolumeSpecName: "kube-api-access-kg4qb") pod "a8ecf6f3-3705-4948-bef5-95c5cb62c14a" (UID: "a8ecf6f3-3705-4948-bef5-95c5cb62c14a"). InnerVolumeSpecName "kube-api-access-kg4qb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:20:00.162931 master-0 kubenswrapper[30278]: I0318 18:20:00.162178 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b5223e8-7cb6-425b-a1d8-55c542110842-operator-scripts\") pod \"8b5223e8-7cb6-425b-a1d8-55c542110842\" (UID: \"8b5223e8-7cb6-425b-a1d8-55c542110842\") " Mar 18 18:20:00.162931 master-0 kubenswrapper[30278]: I0318 18:20:00.162560 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg7k5\" (UniqueName: \"kubernetes.io/projected/8b5223e8-7cb6-425b-a1d8-55c542110842-kube-api-access-mg7k5\") pod \"8b5223e8-7cb6-425b-a1d8-55c542110842\" (UID: \"8b5223e8-7cb6-425b-a1d8-55c542110842\") " Mar 18 18:20:00.165636 master-0 kubenswrapper[30278]: I0318 18:20:00.165565 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ecf6f3-3705-4948-bef5-95c5cb62c14a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:00.165636 master-0 kubenswrapper[30278]: I0318 18:20:00.165610 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg4qb\" (UniqueName: \"kubernetes.io/projected/a8ecf6f3-3705-4948-bef5-95c5cb62c14a-kube-api-access-kg4qb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:00.175031 master-0 kubenswrapper[30278]: I0318 18:20:00.174684 30278 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b5223e8-7cb6-425b-a1d8-55c542110842-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b5223e8-7cb6-425b-a1d8-55c542110842" (UID: "8b5223e8-7cb6-425b-a1d8-55c542110842"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:20:00.213595 master-0 kubenswrapper[30278]: I0318 18:20:00.212605 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5223e8-7cb6-425b-a1d8-55c542110842-kube-api-access-mg7k5" (OuterVolumeSpecName: "kube-api-access-mg7k5") pod "8b5223e8-7cb6-425b-a1d8-55c542110842" (UID: "8b5223e8-7cb6-425b-a1d8-55c542110842"). InnerVolumeSpecName "kube-api-access-mg7k5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:20:00.270306 master-0 kubenswrapper[30278]: I0318 18:20:00.270163 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b5223e8-7cb6-425b-a1d8-55c542110842-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:00.270306 master-0 kubenswrapper[30278]: I0318 18:20:00.270246 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg7k5\" (UniqueName: \"kubernetes.io/projected/8b5223e8-7cb6-425b-a1d8-55c542110842-kube-api-access-mg7k5\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:00.686826 master-0 kubenswrapper[30278]: I0318 18:20:00.671254 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Mar 18 18:20:00.697140 master-0 kubenswrapper[30278]: I0318 18:20:00.697037 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6f67d74887-q4vt6" Mar 18 18:20:00.712044 master-0 kubenswrapper[30278]: I0318 18:20:00.711524 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-8vlcj" Mar 18 18:20:00.712044 master-0 kubenswrapper[30278]: I0318 18:20:00.711691 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" event={"ID":"adb370b0-e5b4-4cc8-b1d2-c63363b70615","Type":"ContainerStarted","Data":"a7f6e808292b0db32aee4b55280c74982466050bfdd0c95d5de837f7848013ec"} Mar 18 18:20:00.722450 master-0 kubenswrapper[30278]: I0318 18:20:00.722399 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-4c72-account-create-update-hzqhn" Mar 18 18:20:00.722816 master-0 kubenswrapper[30278]: I0318 18:20:00.722783 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:20:00.742105 master-0 kubenswrapper[30278]: I0318 18:20:00.741952 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5cfb4bd768-f4ww4"] Mar 18 18:20:00.906307 master-0 kubenswrapper[30278]: I0318 18:20:00.899019 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" podStartSLOduration=3.982805765 podStartE2EDuration="7.898983566s" podCreationTimestamp="2026-03-18 18:19:53 +0000 UTC" firstStartedPulling="2026-03-18 18:19:55.855714936 +0000 UTC m=+1165.022899521" lastFinishedPulling="2026-03-18 18:19:59.771892727 +0000 UTC m=+1168.939077322" observedRunningTime="2026-03-18 18:20:00.762720855 +0000 UTC m=+1169.929905450" watchObservedRunningTime="2026-03-18 18:20:00.898983566 +0000 UTC m=+1170.066168161" Mar 18 18:20:01.119545 master-0 kubenswrapper[30278]: I0318 18:20:01.113674 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b9df6-backup-0" Mar 18 18:20:01.499907 master-0 kubenswrapper[30278]: I0318 18:20:01.498267 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-b9df6-scheduler-0" Mar 18 18:20:01.664897 master-0 kubenswrapper[30278]: I0318 18:20:01.631541 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-b9df6-api-0" Mar 18 18:20:02.297900 master-0 kubenswrapper[30278]: W0318 18:20:02.297824 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9af6002_27e3_414d_b61a_dc0f7d99768b.slice/crio-078deeae8ead461a1d089a70f9d5ccc4f5c1dd8b83a49d3b6bfa38e63dead2ff WatchSource:0}: Error finding container 078deeae8ead461a1d089a70f9d5ccc4f5c1dd8b83a49d3b6bfa38e63dead2ff: Status 404 returned error can't find the container with id 078deeae8ead461a1d089a70f9d5ccc4f5c1dd8b83a49d3b6bfa38e63dead2ff Mar 18 18:20:02.756233 master-0 kubenswrapper[30278]: I0318 18:20:02.756155 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5cfb4bd768-f4ww4" event={"ID":"8794f0fc-2223-4bd7-aed5-a219b5f427e0","Type":"ContainerStarted","Data":"a9a37d77f389b2d2b05a4a9c591c52a063220040bac150343d9649c5519b05be"} Mar 18 18:20:02.759095 master-0 kubenswrapper[30278]: I0318 18:20:02.759044 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e9af6002-27e3-414d-b61a-dc0f7d99768b","Type":"ContainerStarted","Data":"078deeae8ead461a1d089a70f9d5ccc4f5c1dd8b83a49d3b6bfa38e63dead2ff"} Mar 18 18:20:03.779373 master-0 kubenswrapper[30278]: I0318 18:20:03.779266 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e9af6002-27e3-414d-b61a-dc0f7d99768b","Type":"ContainerStarted","Data":"a1cbfa8b6fd590fceab790aa42bfad4a1e0e3300922faf1510c93f7e057a5922"} Mar 18 18:20:03.795311 master-0 kubenswrapper[30278]: I0318 18:20:03.794747 30278 generic.go:334] "Generic (PLEG): container finished" podID="f25d0677-228e-4b99-bc1f-abbbceebffc4" 
containerID="11129141c9f66a372b2710e8c6e0d88bba043d2711f11b26695f3d249e378775" exitCode=0 Mar 18 18:20:03.795311 master-0 kubenswrapper[30278]: I0318 18:20:03.794887 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f986975b-8wc5r" event={"ID":"f25d0677-228e-4b99-bc1f-abbbceebffc4","Type":"ContainerDied","Data":"11129141c9f66a372b2710e8c6e0d88bba043d2711f11b26695f3d249e378775"} Mar 18 18:20:03.801439 master-0 kubenswrapper[30278]: I0318 18:20:03.800711 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5cfb4bd768-f4ww4" event={"ID":"8794f0fc-2223-4bd7-aed5-a219b5f427e0","Type":"ContainerStarted","Data":"e93cd641fae7136e383f31e2ec5cfc131bef116d964cd905f62086e93e8e3ed6"} Mar 18 18:20:04.564938 master-0 kubenswrapper[30278]: I0318 18:20:04.564853 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:20:04.666029 master-0 kubenswrapper[30278]: I0318 18:20:04.664814 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c894db6df-849s7"] Mar 18 18:20:04.666029 master-0 kubenswrapper[30278]: I0318 18:20:04.665153 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c894db6df-849s7" podUID="4b1a145b-099e-49a1-b32c-31ce823b9ec9" containerName="dnsmasq-dns" containerID="cri-o://6f27e8c136fb6a3e7fa13efc01810453672b6d84613cd5ea67c9ec948f266cba" gracePeriod=10 Mar 18 18:20:04.836548 master-0 kubenswrapper[30278]: I0318 18:20:04.836253 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f986975b-8wc5r" event={"ID":"f25d0677-228e-4b99-bc1f-abbbceebffc4","Type":"ContainerStarted","Data":"e18706ba2089f86bdd3de65ed66a8da498827fa5e84969cbe47d4e70f60da7a2"} Mar 18 18:20:04.843068 master-0 kubenswrapper[30278]: I0318 18:20:04.842988 30278 generic.go:334] "Generic (PLEG): container finished" podID="4b1a145b-099e-49a1-b32c-31ce823b9ec9" 
containerID="6f27e8c136fb6a3e7fa13efc01810453672b6d84613cd5ea67c9ec948f266cba" exitCode=0 Mar 18 18:20:04.844828 master-0 kubenswrapper[30278]: I0318 18:20:04.844771 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c894db6df-849s7" event={"ID":"4b1a145b-099e-49a1-b32c-31ce823b9ec9","Type":"ContainerDied","Data":"6f27e8c136fb6a3e7fa13efc01810453672b6d84613cd5ea67c9ec948f266cba"} Mar 18 18:20:05.172536 master-0 kubenswrapper[30278]: I0318 18:20:05.171624 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:20:05.826673 master-0 kubenswrapper[30278]: I0318 18:20:05.826514 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:20:05.841326 master-0 kubenswrapper[30278]: I0318 18:20:05.840262 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 18 18:20:05.841326 master-0 kubenswrapper[30278]: E0318 18:20:05.841076 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5223e8-7cb6-425b-a1d8-55c542110842" containerName="mariadb-database-create" Mar 18 18:20:05.841326 master-0 kubenswrapper[30278]: I0318 18:20:05.841095 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5223e8-7cb6-425b-a1d8-55c542110842" containerName="mariadb-database-create" Mar 18 18:20:05.841326 master-0 kubenswrapper[30278]: E0318 18:20:05.841141 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1a145b-099e-49a1-b32c-31ce823b9ec9" containerName="dnsmasq-dns" Mar 18 18:20:05.841326 master-0 kubenswrapper[30278]: I0318 18:20:05.841148 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1a145b-099e-49a1-b32c-31ce823b9ec9" containerName="dnsmasq-dns" Mar 18 18:20:05.841326 master-0 kubenswrapper[30278]: E0318 18:20:05.841190 30278 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a8ecf6f3-3705-4948-bef5-95c5cb62c14a" containerName="mariadb-account-create-update" Mar 18 18:20:05.841326 master-0 kubenswrapper[30278]: I0318 18:20:05.841198 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ecf6f3-3705-4948-bef5-95c5cb62c14a" containerName="mariadb-account-create-update" Mar 18 18:20:05.841326 master-0 kubenswrapper[30278]: E0318 18:20:05.841209 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1a145b-099e-49a1-b32c-31ce823b9ec9" containerName="init" Mar 18 18:20:05.841326 master-0 kubenswrapper[30278]: I0318 18:20:05.841219 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1a145b-099e-49a1-b32c-31ce823b9ec9" containerName="init" Mar 18 18:20:05.865653 master-0 kubenswrapper[30278]: I0318 18:20:05.841602 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5223e8-7cb6-425b-a1d8-55c542110842" containerName="mariadb-database-create" Mar 18 18:20:05.865653 master-0 kubenswrapper[30278]: I0318 18:20:05.841617 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8ecf6f3-3705-4948-bef5-95c5cb62c14a" containerName="mariadb-account-create-update" Mar 18 18:20:05.865653 master-0 kubenswrapper[30278]: I0318 18:20:05.841628 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1a145b-099e-49a1-b32c-31ce823b9ec9" containerName="dnsmasq-dns" Mar 18 18:20:05.865653 master-0 kubenswrapper[30278]: I0318 18:20:05.842674 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 18 18:20:05.865653 master-0 kubenswrapper[30278]: I0318 18:20:05.844921 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 18 18:20:05.865653 master-0 kubenswrapper[30278]: I0318 18:20:05.847658 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 18 18:20:05.905483 master-0 kubenswrapper[30278]: I0318 18:20:05.904604 30278 generic.go:334] "Generic (PLEG): container finished" podID="adb370b0-e5b4-4cc8-b1d2-c63363b70615" containerID="a7f6e808292b0db32aee4b55280c74982466050bfdd0c95d5de837f7848013ec" exitCode=1 Mar 18 18:20:05.936995 master-0 kubenswrapper[30278]: E0318 18:20:05.933512 30278 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8794f0fc_2223_4bd7_aed5_a219b5f427e0.slice/crio-conmon-e93cd641fae7136e383f31e2ec5cfc131bef116d964cd905f62086e93e8e3ed6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadb370b0_e5b4_4cc8_b1d2_c63363b70615.slice/crio-a7f6e808292b0db32aee4b55280c74982466050bfdd0c95d5de837f7848013ec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadb370b0_e5b4_4cc8_b1d2_c63363b70615.slice/crio-conmon-a7f6e808292b0db32aee4b55280c74982466050bfdd0c95d5de837f7848013ec.scope\": RecentStats: unable to find data in memory cache]" Mar 18 18:20:05.960148 master-0 kubenswrapper[30278]: I0318 18:20:05.958875 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-dns-svc\") pod \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " Mar 18 18:20:05.960148 master-0 
kubenswrapper[30278]: I0318 18:20:05.959031 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-config\") pod \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " Mar 18 18:20:05.960148 master-0 kubenswrapper[30278]: I0318 18:20:05.959080 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-ovsdbserver-sb\") pod \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " Mar 18 18:20:05.960148 master-0 kubenswrapper[30278]: I0318 18:20:05.959118 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-ovsdbserver-nb\") pod \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " Mar 18 18:20:05.960148 master-0 kubenswrapper[30278]: I0318 18:20:05.959218 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v65r4\" (UniqueName: \"kubernetes.io/projected/4b1a145b-099e-49a1-b32c-31ce823b9ec9-kube-api-access-v65r4\") pod \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " Mar 18 18:20:05.960148 master-0 kubenswrapper[30278]: I0318 18:20:05.959440 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-dns-swift-storage-0\") pod \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\" (UID: \"4b1a145b-099e-49a1-b32c-31ce823b9ec9\") " Mar 18 18:20:05.960148 master-0 kubenswrapper[30278]: I0318 18:20:05.959895 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-s7bl9\" (UniqueName: \"kubernetes.io/projected/3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6-kube-api-access-s7bl9\") pod \"openstackclient\" (UID: \"3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6\") " pod="openstack/openstackclient" Mar 18 18:20:05.960148 master-0 kubenswrapper[30278]: I0318 18:20:05.959965 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6\") " pod="openstack/openstackclient" Mar 18 18:20:05.960148 master-0 kubenswrapper[30278]: I0318 18:20:05.960016 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6-openstack-config\") pod \"openstackclient\" (UID: \"3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6\") " pod="openstack/openstackclient" Mar 18 18:20:05.960148 master-0 kubenswrapper[30278]: I0318 18:20:05.960070 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6\") " pod="openstack/openstackclient" Mar 18 18:20:05.987592 master-0 kubenswrapper[30278]: I0318 18:20:05.964455 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c894db6df-849s7" Mar 18 18:20:05.987592 master-0 kubenswrapper[30278]: I0318 18:20:05.972958 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 18 18:20:05.987592 master-0 kubenswrapper[30278]: I0318 18:20:05.973025 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" event={"ID":"adb370b0-e5b4-4cc8-b1d2-c63363b70615","Type":"ContainerDied","Data":"a7f6e808292b0db32aee4b55280c74982466050bfdd0c95d5de837f7848013ec"} Mar 18 18:20:05.987592 master-0 kubenswrapper[30278]: I0318 18:20:05.973070 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f986975b-8wc5r" event={"ID":"f25d0677-228e-4b99-bc1f-abbbceebffc4","Type":"ContainerStarted","Data":"66e6bd280e11118b6991ace6f755111f4fc8517943a90a47400fb2c39832c8fd"} Mar 18 18:20:05.987592 master-0 kubenswrapper[30278]: I0318 18:20:05.973092 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:20:05.987592 master-0 kubenswrapper[30278]: I0318 18:20:05.973109 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c894db6df-849s7" event={"ID":"4b1a145b-099e-49a1-b32c-31ce823b9ec9","Type":"ContainerDied","Data":"bce8cd631508aa3523c8beff1c7dd1b2cc84219bc94d36f929afca72d950027c"} Mar 18 18:20:05.987592 master-0 kubenswrapper[30278]: I0318 18:20:05.973135 30278 scope.go:117] "RemoveContainer" containerID="6f27e8c136fb6a3e7fa13efc01810453672b6d84613cd5ea67c9ec948f266cba" Mar 18 18:20:05.987592 master-0 kubenswrapper[30278]: I0318 18:20:05.975355 30278 scope.go:117] "RemoveContainer" containerID="a7f6e808292b0db32aee4b55280c74982466050bfdd0c95d5de837f7848013ec" Mar 18 18:20:05.987592 master-0 kubenswrapper[30278]: I0318 18:20:05.986499 30278 generic.go:334] "Generic (PLEG): container finished" podID="8794f0fc-2223-4bd7-aed5-a219b5f427e0" 
containerID="e93cd641fae7136e383f31e2ec5cfc131bef116d964cd905f62086e93e8e3ed6" exitCode=0 Mar 18 18:20:05.987592 master-0 kubenswrapper[30278]: I0318 18:20:05.986559 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5cfb4bd768-f4ww4" event={"ID":"8794f0fc-2223-4bd7-aed5-a219b5f427e0","Type":"ContainerDied","Data":"e93cd641fae7136e383f31e2ec5cfc131bef116d964cd905f62086e93e8e3ed6"} Mar 18 18:20:06.028576 master-0 kubenswrapper[30278]: I0318 18:20:06.027941 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b1a145b-099e-49a1-b32c-31ce823b9ec9-kube-api-access-v65r4" (OuterVolumeSpecName: "kube-api-access-v65r4") pod "4b1a145b-099e-49a1-b32c-31ce823b9ec9" (UID: "4b1a145b-099e-49a1-b32c-31ce823b9ec9"). InnerVolumeSpecName "kube-api-access-v65r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:20:06.065406 master-0 kubenswrapper[30278]: I0318 18:20:06.062214 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7bl9\" (UniqueName: \"kubernetes.io/projected/3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6-kube-api-access-s7bl9\") pod \"openstackclient\" (UID: \"3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6\") " pod="openstack/openstackclient" Mar 18 18:20:06.072653 master-0 kubenswrapper[30278]: I0318 18:20:06.072518 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6\") " pod="openstack/openstackclient" Mar 18 18:20:06.072921 master-0 kubenswrapper[30278]: I0318 18:20:06.072866 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6-openstack-config\") pod \"openstackclient\" (UID: 
\"3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6\") " pod="openstack/openstackclient" Mar 18 18:20:06.073361 master-0 kubenswrapper[30278]: I0318 18:20:06.073317 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6\") " pod="openstack/openstackclient" Mar 18 18:20:06.074298 master-0 kubenswrapper[30278]: I0318 18:20:06.070735 30278 scope.go:117] "RemoveContainer" containerID="b17f23a5ee5550f2fec431706d4df8bc8ecaa39de923ea21e0a1506453a069c5" Mar 18 18:20:06.076295 master-0 kubenswrapper[30278]: I0318 18:20:06.076178 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v65r4\" (UniqueName: \"kubernetes.io/projected/4b1a145b-099e-49a1-b32c-31ce823b9ec9-kube-api-access-v65r4\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:06.080142 master-0 kubenswrapper[30278]: I0318 18:20:06.080100 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6-openstack-config\") pod \"openstackclient\" (UID: \"3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6\") " pod="openstack/openstackclient" Mar 18 18:20:06.090164 master-0 kubenswrapper[30278]: I0318 18:20:06.090124 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6\") " pod="openstack/openstackclient" Mar 18 18:20:06.092698 master-0 kubenswrapper[30278]: I0318 18:20:06.091572 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-f986975b-8wc5r" podStartSLOduration=7.025763154 podStartE2EDuration="13.091558815s" 
podCreationTimestamp="2026-03-18 18:19:53 +0000 UTC" firstStartedPulling="2026-03-18 18:19:56.393519422 +0000 UTC m=+1165.560704017" lastFinishedPulling="2026-03-18 18:20:02.459315073 +0000 UTC m=+1171.626499678" observedRunningTime="2026-03-18 18:20:05.972255962 +0000 UTC m=+1175.139440557" watchObservedRunningTime="2026-03-18 18:20:06.091558815 +0000 UTC m=+1175.258743410" Mar 18 18:20:06.142302 master-0 kubenswrapper[30278]: I0318 18:20:06.134264 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6\") " pod="openstack/openstackclient" Mar 18 18:20:06.162305 master-0 kubenswrapper[30278]: I0318 18:20:06.155962 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7bl9\" (UniqueName: \"kubernetes.io/projected/3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6-kube-api-access-s7bl9\") pod \"openstackclient\" (UID: \"3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6\") " pod="openstack/openstackclient" Mar 18 18:20:06.171311 master-0 kubenswrapper[30278]: I0318 18:20:06.167839 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-config" (OuterVolumeSpecName: "config") pod "4b1a145b-099e-49a1-b32c-31ce823b9ec9" (UID: "4b1a145b-099e-49a1-b32c-31ce823b9ec9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:20:06.184425 master-0 kubenswrapper[30278]: I0318 18:20:06.184312 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:06.210678 master-0 kubenswrapper[30278]: I0318 18:20:06.210582 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 18 18:20:06.225301 master-0 kubenswrapper[30278]: I0318 18:20:06.223565 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4b1a145b-099e-49a1-b32c-31ce823b9ec9" (UID: "4b1a145b-099e-49a1-b32c-31ce823b9ec9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:20:06.262299 master-0 kubenswrapper[30278]: I0318 18:20:06.252327 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4b1a145b-099e-49a1-b32c-31ce823b9ec9" (UID: "4b1a145b-099e-49a1-b32c-31ce823b9ec9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:20:06.262299 master-0 kubenswrapper[30278]: I0318 18:20:06.261687 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4b1a145b-099e-49a1-b32c-31ce823b9ec9" (UID: "4b1a145b-099e-49a1-b32c-31ce823b9ec9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:20:06.280203 master-0 kubenswrapper[30278]: I0318 18:20:06.280122 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4b1a145b-099e-49a1-b32c-31ce823b9ec9" (UID: "4b1a145b-099e-49a1-b32c-31ce823b9ec9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:20:06.289486 master-0 kubenswrapper[30278]: I0318 18:20:06.289428 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:06.289486 master-0 kubenswrapper[30278]: I0318 18:20:06.289474 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:06.289486 master-0 kubenswrapper[30278]: I0318 18:20:06.289486 30278 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:06.289728 master-0 kubenswrapper[30278]: I0318 18:20:06.289505 30278 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b1a145b-099e-49a1-b32c-31ce823b9ec9-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:06.808521 master-0 kubenswrapper[30278]: I0318 18:20:06.808455 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 18 18:20:06.926519 master-0 kubenswrapper[30278]: I0318 18:20:06.926449 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c894db6df-849s7"] Mar 18 18:20:06.950841 master-0 kubenswrapper[30278]: I0318 18:20:06.950655 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c894db6df-849s7"] Mar 18 18:20:07.011900 master-0 kubenswrapper[30278]: I0318 18:20:07.011815 30278 generic.go:334] "Generic (PLEG): container finished" podID="e9af6002-27e3-414d-b61a-dc0f7d99768b" containerID="a1cbfa8b6fd590fceab790aa42bfad4a1e0e3300922faf1510c93f7e057a5922" exitCode=0 Mar 18 
18:20:07.012183 master-0 kubenswrapper[30278]: I0318 18:20:07.011979 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e9af6002-27e3-414d-b61a-dc0f7d99768b","Type":"ContainerDied","Data":"a1cbfa8b6fd590fceab790aa42bfad4a1e0e3300922faf1510c93f7e057a5922"} Mar 18 18:20:07.028663 master-0 kubenswrapper[30278]: I0318 18:20:07.028593 30278 generic.go:334] "Generic (PLEG): container finished" podID="f25d0677-228e-4b99-bc1f-abbbceebffc4" containerID="66e6bd280e11118b6991ace6f755111f4fc8517943a90a47400fb2c39832c8fd" exitCode=1 Mar 18 18:20:07.028959 master-0 kubenswrapper[30278]: I0318 18:20:07.028896 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f986975b-8wc5r" event={"ID":"f25d0677-228e-4b99-bc1f-abbbceebffc4","Type":"ContainerDied","Data":"66e6bd280e11118b6991ace6f755111f4fc8517943a90a47400fb2c39832c8fd"} Mar 18 18:20:07.030112 master-0 kubenswrapper[30278]: I0318 18:20:07.030048 30278 scope.go:117] "RemoveContainer" containerID="66e6bd280e11118b6991ace6f755111f4fc8517943a90a47400fb2c39832c8fd" Mar 18 18:20:07.100581 master-0 kubenswrapper[30278]: I0318 18:20:07.100510 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b1a145b-099e-49a1-b32c-31ce823b9ec9" path="/var/lib/kubelet/pods/4b1a145b-099e-49a1-b32c-31ce823b9ec9/volumes" Mar 18 18:20:07.101498 master-0 kubenswrapper[30278]: I0318 18:20:07.101451 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5cfb4bd768-f4ww4" event={"ID":"8794f0fc-2223-4bd7-aed5-a219b5f427e0","Type":"ContainerStarted","Data":"dd105b1851357d464f89fde3bab465cdd6f7ddc8bcef91de65dc413ef0132a53"} Mar 18 18:20:07.101498 master-0 kubenswrapper[30278]: I0318 18:20:07.101496 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6","Type":"ContainerStarted","Data":"7fa1de908a47e4c8b3e759771d10eca51c7b02827443f32592f90c9444b46744"} Mar 
18 18:20:07.101600 master-0 kubenswrapper[30278]: I0318 18:20:07.101511 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" event={"ID":"adb370b0-e5b4-4cc8-b1d2-c63363b70615","Type":"ContainerStarted","Data":"dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd"} Mar 18 18:20:07.101772 master-0 kubenswrapper[30278]: I0318 18:20:07.101739 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:20:08.117515 master-0 kubenswrapper[30278]: I0318 18:20:08.117352 30278 generic.go:334] "Generic (PLEG): container finished" podID="f25d0677-228e-4b99-bc1f-abbbceebffc4" containerID="a45be92866e438be95c8ae2186257c894b340cac55442745042a2707e7c1df8b" exitCode=1 Mar 18 18:20:08.117515 master-0 kubenswrapper[30278]: I0318 18:20:08.117480 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f986975b-8wc5r" event={"ID":"f25d0677-228e-4b99-bc1f-abbbceebffc4","Type":"ContainerDied","Data":"a45be92866e438be95c8ae2186257c894b340cac55442745042a2707e7c1df8b"} Mar 18 18:20:08.118136 master-0 kubenswrapper[30278]: I0318 18:20:08.117541 30278 scope.go:117] "RemoveContainer" containerID="66e6bd280e11118b6991ace6f755111f4fc8517943a90a47400fb2c39832c8fd" Mar 18 18:20:08.118291 master-0 kubenswrapper[30278]: I0318 18:20:08.118197 30278 scope.go:117] "RemoveContainer" containerID="a45be92866e438be95c8ae2186257c894b340cac55442745042a2707e7c1df8b" Mar 18 18:20:08.118533 master-0 kubenswrapper[30278]: E0318 18:20:08.118476 30278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-f986975b-8wc5r_openstack(f25d0677-228e-4b99-bc1f-abbbceebffc4)\"" pod="openstack/ironic-f986975b-8wc5r" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" Mar 18 18:20:08.129370 master-0 kubenswrapper[30278]: I0318 
18:20:08.129304 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5cfb4bd768-f4ww4" event={"ID":"8794f0fc-2223-4bd7-aed5-a219b5f427e0","Type":"ContainerStarted","Data":"cda5637d87267fd32f79da6238544370320d08e6952144da4edfd6aa63c7595e"} Mar 18 18:20:08.129799 master-0 kubenswrapper[30278]: I0318 18:20:08.129749 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-5cfb4bd768-f4ww4" Mar 18 18:20:08.207042 master-0 kubenswrapper[30278]: I0318 18:20:08.206930 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-5cfb4bd768-f4ww4" podStartSLOduration=8.525031414 podStartE2EDuration="9.206904821s" podCreationTimestamp="2026-03-18 18:19:59 +0000 UTC" firstStartedPulling="2026-03-18 18:20:02.314644595 +0000 UTC m=+1171.481829180" lastFinishedPulling="2026-03-18 18:20:02.996517992 +0000 UTC m=+1172.163702587" observedRunningTime="2026-03-18 18:20:08.193756827 +0000 UTC m=+1177.360941422" watchObservedRunningTime="2026-03-18 18:20:08.206904821 +0000 UTC m=+1177.374089416" Mar 18 18:20:08.223227 master-0 kubenswrapper[30278]: I0318 18:20:08.223095 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:20:08.846180 master-0 kubenswrapper[30278]: I0318 18:20:08.837057 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-98qm9"] Mar 18 18:20:08.846180 master-0 kubenswrapper[30278]: I0318 18:20:08.839080 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:08.846180 master-0 kubenswrapper[30278]: I0318 18:20:08.844751 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Mar 18 18:20:08.846180 master-0 kubenswrapper[30278]: I0318 18:20:08.844788 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Mar 18 18:20:08.872232 master-0 kubenswrapper[30278]: I0318 18:20:08.872177 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-98qm9"] Mar 18 18:20:08.942207 master-0 kubenswrapper[30278]: I0318 18:20:08.942058 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-66857967b8-5fglj"] Mar 18 18:20:08.950292 master-0 kubenswrapper[30278]: I0318 18:20:08.948439 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:08.961710 master-0 kubenswrapper[30278]: I0318 18:20:08.961630 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 18 18:20:08.962009 master-0 kubenswrapper[30278]: I0318 18:20:08.961944 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 18 18:20:08.962187 master-0 kubenswrapper[30278]: I0318 18:20:08.962087 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 18 18:20:08.994309 master-0 kubenswrapper[30278]: I0318 18:20:08.988872 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-66857967b8-5fglj"] Mar 18 18:20:09.015306 master-0 kubenswrapper[30278]: I0318 18:20:09.004137 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-combined-ca-bundle\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.015306 master-0 kubenswrapper[30278]: I0318 18:20:09.004943 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdpjc\" (UniqueName: \"kubernetes.io/projected/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-kube-api-access-cdpjc\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.015306 master-0 kubenswrapper[30278]: I0318 18:20:09.005088 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-config\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.015306 master-0 kubenswrapper[30278]: I0318 18:20:09.005331 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-var-lib-ironic\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.015306 master-0 kubenswrapper[30278]: I0318 18:20:09.005365 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-scripts\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.015306 master-0 kubenswrapper[30278]: I0318 18:20:09.005467 30278 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-etc-podinfo\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.015306 master-0 kubenswrapper[30278]: I0318 18:20:09.005510 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.113264 master-0 kubenswrapper[30278]: I0318 18:20:09.113109 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdpjc\" (UniqueName: \"kubernetes.io/projected/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-kube-api-access-cdpjc\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.113264 master-0 kubenswrapper[30278]: I0318 18:20:09.113180 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab35501-e90f-48cb-b31d-1ea8086b7b1d-combined-ca-bundle\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.113264 master-0 kubenswrapper[30278]: I0318 18:20:09.113249 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab35501-e90f-48cb-b31d-1ea8086b7b1d-config-data\") pod \"swift-proxy-66857967b8-5fglj\" (UID: 
\"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.113637 master-0 kubenswrapper[30278]: I0318 18:20:09.113319 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-config\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.113637 master-0 kubenswrapper[30278]: I0318 18:20:09.113358 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dab35501-e90f-48cb-b31d-1ea8086b7b1d-etc-swift\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.113637 master-0 kubenswrapper[30278]: I0318 18:20:09.113535 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab35501-e90f-48cb-b31d-1ea8086b7b1d-log-httpd\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.113793 master-0 kubenswrapper[30278]: I0318 18:20:09.113678 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab35501-e90f-48cb-b31d-1ea8086b7b1d-run-httpd\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.113888 master-0 kubenswrapper[30278]: I0318 18:20:09.113856 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-var-lib-ironic\") pod 
\"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.113953 master-0 kubenswrapper[30278]: I0318 18:20:09.113910 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-scripts\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.114078 master-0 kubenswrapper[30278]: I0318 18:20:09.114035 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xjxt\" (UniqueName: \"kubernetes.io/projected/dab35501-e90f-48cb-b31d-1ea8086b7b1d-kube-api-access-4xjxt\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.114078 master-0 kubenswrapper[30278]: I0318 18:20:09.114072 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-etc-podinfo\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.114185 master-0 kubenswrapper[30278]: I0318 18:20:09.114120 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.114232 master-0 kubenswrapper[30278]: I0318 18:20:09.114206 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab35501-e90f-48cb-b31d-1ea8086b7b1d-public-tls-certs\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.114377 master-0 kubenswrapper[30278]: I0318 18:20:09.114325 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-var-lib-ironic\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.114442 master-0 kubenswrapper[30278]: I0318 18:20:09.114391 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-combined-ca-bundle\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.114492 master-0 kubenswrapper[30278]: I0318 18:20:09.114460 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab35501-e90f-48cb-b31d-1ea8086b7b1d-internal-tls-certs\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.115014 master-0 kubenswrapper[30278]: I0318 18:20:09.114976 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.132925 master-0 kubenswrapper[30278]: I0318 
18:20:09.132734 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-etc-podinfo\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.133678 master-0 kubenswrapper[30278]: I0318 18:20:09.133393 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-config\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.134928 master-0 kubenswrapper[30278]: I0318 18:20:09.134729 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-combined-ca-bundle\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.144186 master-0 kubenswrapper[30278]: I0318 18:20:09.144129 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-scripts\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.150057 master-0 kubenswrapper[30278]: I0318 18:20:09.149819 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdpjc\" (UniqueName: \"kubernetes.io/projected/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-kube-api-access-cdpjc\") pod \"ironic-inspector-db-sync-98qm9\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.155196 master-0 kubenswrapper[30278]: I0318 18:20:09.153785 30278 
scope.go:117] "RemoveContainer" containerID="a45be92866e438be95c8ae2186257c894b340cac55442745042a2707e7c1df8b" Mar 18 18:20:09.155196 master-0 kubenswrapper[30278]: E0318 18:20:09.154078 30278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-f986975b-8wc5r_openstack(f25d0677-228e-4b99-bc1f-abbbceebffc4)\"" pod="openstack/ironic-f986975b-8wc5r" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" Mar 18 18:20:09.193130 master-0 kubenswrapper[30278]: I0318 18:20:09.193035 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:09.220496 master-0 kubenswrapper[30278]: I0318 18:20:09.220402 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dab35501-e90f-48cb-b31d-1ea8086b7b1d-etc-swift\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.220782 master-0 kubenswrapper[30278]: I0318 18:20:09.220517 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab35501-e90f-48cb-b31d-1ea8086b7b1d-log-httpd\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.220782 master-0 kubenswrapper[30278]: I0318 18:20:09.220572 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab35501-e90f-48cb-b31d-1ea8086b7b1d-run-httpd\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.220782 master-0 kubenswrapper[30278]: I0318 18:20:09.220696 30278 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xjxt\" (UniqueName: \"kubernetes.io/projected/dab35501-e90f-48cb-b31d-1ea8086b7b1d-kube-api-access-4xjxt\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.220782 master-0 kubenswrapper[30278]: I0318 18:20:09.220759 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab35501-e90f-48cb-b31d-1ea8086b7b1d-public-tls-certs\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.220915 master-0 kubenswrapper[30278]: I0318 18:20:09.220865 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab35501-e90f-48cb-b31d-1ea8086b7b1d-internal-tls-certs\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.221004 master-0 kubenswrapper[30278]: I0318 18:20:09.220973 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab35501-e90f-48cb-b31d-1ea8086b7b1d-combined-ca-bundle\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.221051 master-0 kubenswrapper[30278]: I0318 18:20:09.221028 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab35501-e90f-48cb-b31d-1ea8086b7b1d-config-data\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.227971 master-0 
kubenswrapper[30278]: I0318 18:20:09.227911 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab35501-e90f-48cb-b31d-1ea8086b7b1d-log-httpd\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.228874 master-0 kubenswrapper[30278]: I0318 18:20:09.228818 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab35501-e90f-48cb-b31d-1ea8086b7b1d-run-httpd\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.229904 master-0 kubenswrapper[30278]: I0318 18:20:09.229876 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab35501-e90f-48cb-b31d-1ea8086b7b1d-internal-tls-certs\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.234288 master-0 kubenswrapper[30278]: I0318 18:20:09.230555 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab35501-e90f-48cb-b31d-1ea8086b7b1d-public-tls-certs\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.234288 master-0 kubenswrapper[30278]: I0318 18:20:09.230846 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab35501-e90f-48cb-b31d-1ea8086b7b1d-config-data\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.261296 master-0 kubenswrapper[30278]: I0318 18:20:09.257520 
30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab35501-e90f-48cb-b31d-1ea8086b7b1d-combined-ca-bundle\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.261296 master-0 kubenswrapper[30278]: I0318 18:20:09.258188 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dab35501-e90f-48cb-b31d-1ea8086b7b1d-etc-swift\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.269377 master-0 kubenswrapper[30278]: I0318 18:20:09.266383 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xjxt\" (UniqueName: \"kubernetes.io/projected/dab35501-e90f-48cb-b31d-1ea8086b7b1d-kube-api-access-4xjxt\") pod \"swift-proxy-66857967b8-5fglj\" (UID: \"dab35501-e90f-48cb-b31d-1ea8086b7b1d\") " pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.305110 master-0 kubenswrapper[30278]: I0318 18:20:09.305042 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:09.944769 master-0 kubenswrapper[30278]: I0318 18:20:09.938493 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-98qm9"] Mar 18 18:20:10.061497 master-0 kubenswrapper[30278]: I0318 18:20:10.061218 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:20:10.117385 master-0 kubenswrapper[30278]: W0318 18:20:10.117317 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddab35501_e90f_48cb_b31d_1ea8086b7b1d.slice/crio-cfaafe5cf7a7aef073ac065a13fcfd1e8618aaab28c9fe851506a8fca80bbc71 WatchSource:0}: Error finding container cfaafe5cf7a7aef073ac065a13fcfd1e8618aaab28c9fe851506a8fca80bbc71: Status 404 returned error can't find the container with id cfaafe5cf7a7aef073ac065a13fcfd1e8618aaab28c9fe851506a8fca80bbc71 Mar 18 18:20:10.133300 master-0 kubenswrapper[30278]: I0318 18:20:10.125899 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-66857967b8-5fglj"] Mar 18 18:20:10.182765 master-0 kubenswrapper[30278]: I0318 18:20:10.182643 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-66857967b8-5fglj" event={"ID":"dab35501-e90f-48cb-b31d-1ea8086b7b1d","Type":"ContainerStarted","Data":"cfaafe5cf7a7aef073ac065a13fcfd1e8618aaab28c9fe851506a8fca80bbc71"} Mar 18 18:20:10.193333 master-0 kubenswrapper[30278]: I0318 18:20:10.193227 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-98qm9" event={"ID":"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f","Type":"ContainerStarted","Data":"3be6f48c7968be8d8114f20377477f1687e8d6a2632942eb0b216aa4f576fd03"} Mar 18 18:20:10.531038 master-0 kubenswrapper[30278]: I0318 18:20:10.530470 30278 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:20:10.531038 master-0 kubenswrapper[30278]: I0318 18:20:10.530553 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:20:10.533447 master-0 kubenswrapper[30278]: I0318 18:20:10.533424 30278 scope.go:117] "RemoveContainer" containerID="a45be92866e438be95c8ae2186257c894b340cac55442745042a2707e7c1df8b" Mar 18 18:20:10.534103 master-0 kubenswrapper[30278]: E0318 18:20:10.534075 30278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-f986975b-8wc5r_openstack(f25d0677-228e-4b99-bc1f-abbbceebffc4)\"" pod="openstack/ironic-f986975b-8wc5r" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" Mar 18 18:20:11.012065 master-0 kubenswrapper[30278]: I0318 18:20:11.011985 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5776b66b45-w6n4j" Mar 18 18:20:11.182934 master-0 kubenswrapper[30278]: I0318 18:20:11.182812 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-594bd7cb-dvb64"] Mar 18 18:20:11.183623 master-0 kubenswrapper[30278]: I0318 18:20:11.183203 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-594bd7cb-dvb64" podUID="a8d16e57-7093-4361-bdda-ecd48ea1328f" containerName="neutron-api" containerID="cri-o://b5dcd73154a049e80ab13b2eb80bcf7481b7aceaf5de5b0d4df0bed066bb9647" gracePeriod=30 Mar 18 18:20:11.183623 master-0 kubenswrapper[30278]: I0318 18:20:11.183410 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-594bd7cb-dvb64" podUID="a8d16e57-7093-4361-bdda-ecd48ea1328f" containerName="neutron-httpd" containerID="cri-o://89f9a2f243d56eb15727bacfbebb53635e792bb42d34a4b447dd4b068abbaaaf" gracePeriod=30 Mar 18 18:20:11.313069 master-0 
kubenswrapper[30278]: I0318 18:20:11.312963 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-66857967b8-5fglj" event={"ID":"dab35501-e90f-48cb-b31d-1ea8086b7b1d","Type":"ContainerStarted","Data":"88e2eb29fff01e3b0bba63fb7c200764dff435ca887fdcaf52836fd71431166a"} Mar 18 18:20:11.313069 master-0 kubenswrapper[30278]: I0318 18:20:11.313050 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-66857967b8-5fglj" event={"ID":"dab35501-e90f-48cb-b31d-1ea8086b7b1d","Type":"ContainerStarted","Data":"999c3c3b588ae950b79cbf2a2603bd30e05092cfa9b1a2e68ed4e4f006b538d1"} Mar 18 18:20:11.313423 master-0 kubenswrapper[30278]: I0318 18:20:11.313203 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:11.379782 master-0 kubenswrapper[30278]: I0318 18:20:11.376904 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-66857967b8-5fglj" podStartSLOduration=3.376877383 podStartE2EDuration="3.376877383s" podCreationTimestamp="2026-03-18 18:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:20:11.375051384 +0000 UTC m=+1180.542235979" watchObservedRunningTime="2026-03-18 18:20:11.376877383 +0000 UTC m=+1180.544061978" Mar 18 18:20:12.341984 master-0 kubenswrapper[30278]: I0318 18:20:12.341918 30278 generic.go:334] "Generic (PLEG): container finished" podID="a8d16e57-7093-4361-bdda-ecd48ea1328f" containerID="89f9a2f243d56eb15727bacfbebb53635e792bb42d34a4b447dd4b068abbaaaf" exitCode=0 Mar 18 18:20:12.342606 master-0 kubenswrapper[30278]: I0318 18:20:12.342381 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-594bd7cb-dvb64" event={"ID":"a8d16e57-7093-4361-bdda-ecd48ea1328f","Type":"ContainerDied","Data":"89f9a2f243d56eb15727bacfbebb53635e792bb42d34a4b447dd4b068abbaaaf"} Mar 18 
18:20:12.342606 master-0 kubenswrapper[30278]: I0318 18:20:12.342427 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:15.005148 master-0 kubenswrapper[30278]: E0318 18:20:14.998137 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd is running failed: container process not found" containerID="dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd" cmd=["/bin/true"] Mar 18 18:20:15.005148 master-0 kubenswrapper[30278]: E0318 18:20:14.998246 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd is running failed: container process not found" containerID="dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd" cmd=["/bin/true"] Mar 18 18:20:15.005148 master-0 kubenswrapper[30278]: E0318 18:20:15.000804 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd is running failed: container process not found" containerID="dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd" cmd=["/bin/true"] Mar 18 18:20:15.005148 master-0 kubenswrapper[30278]: E0318 18:20:15.000855 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd is running failed: container process not found" containerID="dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd" cmd=["/bin/true"] Mar 18 18:20:15.005148 master-0 
kubenswrapper[30278]: E0318 18:20:15.001501 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd is running failed: container process not found" containerID="dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd" cmd=["/bin/true"] Mar 18 18:20:15.005148 master-0 kubenswrapper[30278]: E0318 18:20:15.001540 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd is running failed: container process not found" containerID="dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd" cmd=["/bin/true"] Mar 18 18:20:15.005148 master-0 kubenswrapper[30278]: E0318 18:20:15.001562 30278 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd is running failed: container process not found" probeType="Liveness" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" podUID="adb370b0-e5b4-4cc8-b1d2-c63363b70615" containerName="ironic-neutron-agent" Mar 18 18:20:15.005148 master-0 kubenswrapper[30278]: E0318 18:20:15.001585 30278 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd is running failed: container process not found" probeType="Readiness" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" podUID="adb370b0-e5b4-4cc8-b1d2-c63363b70615" containerName="ironic-neutron-agent" Mar 18 18:20:16.438040 master-0 kubenswrapper[30278]: I0318 18:20:16.437950 30278 generic.go:334] "Generic (PLEG): container finished" podID="adb370b0-e5b4-4cc8-b1d2-c63363b70615" 
containerID="dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd" exitCode=1 Mar 18 18:20:16.439059 master-0 kubenswrapper[30278]: I0318 18:20:16.438055 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" event={"ID":"adb370b0-e5b4-4cc8-b1d2-c63363b70615","Type":"ContainerDied","Data":"dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd"} Mar 18 18:20:16.439059 master-0 kubenswrapper[30278]: I0318 18:20:16.438104 30278 scope.go:117] "RemoveContainer" containerID="a7f6e808292b0db32aee4b55280c74982466050bfdd0c95d5de837f7848013ec" Mar 18 18:20:16.440376 master-0 kubenswrapper[30278]: I0318 18:20:16.439694 30278 scope.go:117] "RemoveContainer" containerID="dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd" Mar 18 18:20:16.440376 master-0 kubenswrapper[30278]: E0318 18:20:16.440005 30278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-c769655c7-ssdxq_openstack(adb370b0-e5b4-4cc8-b1d2-c63363b70615)\"" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" podUID="adb370b0-e5b4-4cc8-b1d2-c63363b70615" Mar 18 18:20:16.447718 master-0 kubenswrapper[30278]: I0318 18:20:16.447650 30278 generic.go:334] "Generic (PLEG): container finished" podID="a8d16e57-7093-4361-bdda-ecd48ea1328f" containerID="b5dcd73154a049e80ab13b2eb80bcf7481b7aceaf5de5b0d4df0bed066bb9647" exitCode=0 Mar 18 18:20:16.447865 master-0 kubenswrapper[30278]: I0318 18:20:16.447723 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-594bd7cb-dvb64" event={"ID":"a8d16e57-7093-4361-bdda-ecd48ea1328f","Type":"ContainerDied","Data":"b5dcd73154a049e80ab13b2eb80bcf7481b7aceaf5de5b0d4df0bed066bb9647"} Mar 18 18:20:17.094837 master-0 kubenswrapper[30278]: I0318 18:20:17.094761 30278 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/ironic-5cfb4bd768-f4ww4" Mar 18 18:20:17.684655 master-0 kubenswrapper[30278]: I0318 18:20:17.684180 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:20:17.798253 master-0 kubenswrapper[30278]: I0318 18:20:17.795835 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-f986975b-8wc5r"] Mar 18 18:20:17.798253 master-0 kubenswrapper[30278]: I0318 18:20:17.796147 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-f986975b-8wc5r" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" containerName="ironic-api-log" containerID="cri-o://e18706ba2089f86bdd3de65ed66a8da498827fa5e84969cbe47d4e70f60da7a2" gracePeriod=60 Mar 18 18:20:17.887301 master-0 kubenswrapper[30278]: I0318 18:20:17.884985 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-config\") pod \"a8d16e57-7093-4361-bdda-ecd48ea1328f\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " Mar 18 18:20:17.887301 master-0 kubenswrapper[30278]: I0318 18:20:17.885115 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-httpd-config\") pod \"a8d16e57-7093-4361-bdda-ecd48ea1328f\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " Mar 18 18:20:17.897613 master-0 kubenswrapper[30278]: I0318 18:20:17.896790 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-combined-ca-bundle\") pod \"a8d16e57-7093-4361-bdda-ecd48ea1328f\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " Mar 18 18:20:17.897613 master-0 kubenswrapper[30278]: I0318 18:20:17.896898 30278 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-ovndb-tls-certs\") pod \"a8d16e57-7093-4361-bdda-ecd48ea1328f\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " Mar 18 18:20:17.897613 master-0 kubenswrapper[30278]: I0318 18:20:17.897042 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp5f6\" (UniqueName: \"kubernetes.io/projected/a8d16e57-7093-4361-bdda-ecd48ea1328f-kube-api-access-xp5f6\") pod \"a8d16e57-7093-4361-bdda-ecd48ea1328f\" (UID: \"a8d16e57-7093-4361-bdda-ecd48ea1328f\") " Mar 18 18:20:17.900942 master-0 kubenswrapper[30278]: I0318 18:20:17.900671 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a8d16e57-7093-4361-bdda-ecd48ea1328f" (UID: "a8d16e57-7093-4361-bdda-ecd48ea1328f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:17.903012 master-0 kubenswrapper[30278]: I0318 18:20:17.902924 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8d16e57-7093-4361-bdda-ecd48ea1328f-kube-api-access-xp5f6" (OuterVolumeSpecName: "kube-api-access-xp5f6") pod "a8d16e57-7093-4361-bdda-ecd48ea1328f" (UID: "a8d16e57-7093-4361-bdda-ecd48ea1328f"). InnerVolumeSpecName "kube-api-access-xp5f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:20:17.997998 master-0 kubenswrapper[30278]: I0318 18:20:17.997914 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-config" (OuterVolumeSpecName: "config") pod "a8d16e57-7093-4361-bdda-ecd48ea1328f" (UID: "a8d16e57-7093-4361-bdda-ecd48ea1328f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:18.000776 master-0 kubenswrapper[30278]: I0318 18:20:18.000538 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp5f6\" (UniqueName: \"kubernetes.io/projected/a8d16e57-7093-4361-bdda-ecd48ea1328f-kube-api-access-xp5f6\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:18.000776 master-0 kubenswrapper[30278]: I0318 18:20:18.000597 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:18.000776 master-0 kubenswrapper[30278]: I0318 18:20:18.000608 30278 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-httpd-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:18.025760 master-0 kubenswrapper[30278]: I0318 18:20:18.025671 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a8d16e57-7093-4361-bdda-ecd48ea1328f" (UID: "a8d16e57-7093-4361-bdda-ecd48ea1328f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:18.037843 master-0 kubenswrapper[30278]: I0318 18:20:18.037776 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8d16e57-7093-4361-bdda-ecd48ea1328f" (UID: "a8d16e57-7093-4361-bdda-ecd48ea1328f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:18.112405 master-0 kubenswrapper[30278]: I0318 18:20:18.105626 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:18.112405 master-0 kubenswrapper[30278]: I0318 18:20:18.105702 30278 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a8d16e57-7093-4361-bdda-ecd48ea1328f-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:18.491814 master-0 kubenswrapper[30278]: I0318 18:20:18.491597 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-594bd7cb-dvb64" event={"ID":"a8d16e57-7093-4361-bdda-ecd48ea1328f","Type":"ContainerDied","Data":"f438a24dfc9cf86889066bea19b111988a9729c48a502ae0a568d1c6bb1211ad"} Mar 18 18:20:18.491814 master-0 kubenswrapper[30278]: I0318 18:20:18.491686 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-594bd7cb-dvb64" Mar 18 18:20:18.564465 master-0 kubenswrapper[30278]: I0318 18:20:18.564382 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-594bd7cb-dvb64"] Mar 18 18:20:18.580827 master-0 kubenswrapper[30278]: I0318 18:20:18.580750 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-594bd7cb-dvb64"] Mar 18 18:20:19.068815 master-0 kubenswrapper[30278]: I0318 18:20:19.068550 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8d16e57-7093-4361-bdda-ecd48ea1328f" path="/var/lib/kubelet/pods/a8d16e57-7093-4361-bdda-ecd48ea1328f/volumes" Mar 18 18:20:19.317303 master-0 kubenswrapper[30278]: I0318 18:20:19.314491 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:19.317303 master-0 kubenswrapper[30278]: I0318 18:20:19.315531 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-66857967b8-5fglj" Mar 18 18:20:19.996755 master-0 kubenswrapper[30278]: I0318 18:20:19.996504 30278 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:20:19.996755 master-0 kubenswrapper[30278]: I0318 18:20:19.996604 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:20:19.999564 master-0 kubenswrapper[30278]: I0318 18:20:19.998191 30278 scope.go:117] "RemoveContainer" containerID="dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd" Mar 18 18:20:19.999564 master-0 kubenswrapper[30278]: E0318 18:20:19.998634 30278 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent 
pod=ironic-neutron-agent-c769655c7-ssdxq_openstack(adb370b0-e5b4-4cc8-b1d2-c63363b70615)\"" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" podUID="adb370b0-e5b4-4cc8-b1d2-c63363b70615" Mar 18 18:20:21.463788 master-0 kubenswrapper[30278]: I0318 18:20:21.463704 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-84cf7b8984-2rsvd" Mar 18 18:20:21.607891 master-0 kubenswrapper[30278]: I0318 18:20:21.607788 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-84cf7b8984-2rsvd" Mar 18 18:20:22.467463 master-0 kubenswrapper[30278]: I0318 18:20:22.466848 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7db756448-vwstn"] Mar 18 18:20:22.468098 master-0 kubenswrapper[30278]: I0318 18:20:22.467653 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7db756448-vwstn" podUID="ca02800f-5799-45c1-8737-409cb6665117" containerName="placement-log" containerID="cri-o://bb4c4cb453389606886622e8b73636f3049a1f4c97339b0c1df7e6a0aa350f3a" gracePeriod=30 Mar 18 18:20:22.468098 master-0 kubenswrapper[30278]: I0318 18:20:22.467819 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-7db756448-vwstn" podUID="ca02800f-5799-45c1-8737-409cb6665117" containerName="placement-api" containerID="cri-o://07c0151d0e77c6b415e88f17fe047729fe52781df6ec02f05b17131801556584" gracePeriod=30 Mar 18 18:20:26.205752 master-0 kubenswrapper[30278]: I0318 18:20:26.205675 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-275vd"] Mar 18 18:20:26.213316 master-0 kubenswrapper[30278]: E0318 18:20:26.209325 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d16e57-7093-4361-bdda-ecd48ea1328f" containerName="neutron-api" Mar 18 18:20:26.213316 master-0 kubenswrapper[30278]: I0318 18:20:26.209373 30278 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a8d16e57-7093-4361-bdda-ecd48ea1328f" containerName="neutron-api" Mar 18 18:20:26.213316 master-0 kubenswrapper[30278]: E0318 18:20:26.209395 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8d16e57-7093-4361-bdda-ecd48ea1328f" containerName="neutron-httpd" Mar 18 18:20:26.213316 master-0 kubenswrapper[30278]: I0318 18:20:26.209402 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8d16e57-7093-4361-bdda-ecd48ea1328f" containerName="neutron-httpd" Mar 18 18:20:26.213316 master-0 kubenswrapper[30278]: I0318 18:20:26.209789 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8d16e57-7093-4361-bdda-ecd48ea1328f" containerName="neutron-httpd" Mar 18 18:20:26.213316 master-0 kubenswrapper[30278]: I0318 18:20:26.209825 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8d16e57-7093-4361-bdda-ecd48ea1328f" containerName="neutron-api" Mar 18 18:20:26.213316 master-0 kubenswrapper[30278]: I0318 18:20:26.210842 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-275vd" Mar 18 18:20:26.226406 master-0 kubenswrapper[30278]: I0318 18:20:26.225948 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21b3a964-ae1b-49d5-be02-c1b7397b406c-operator-scripts\") pod \"nova-api-db-create-275vd\" (UID: \"21b3a964-ae1b-49d5-be02-c1b7397b406c\") " pod="openstack/nova-api-db-create-275vd" Mar 18 18:20:26.226406 master-0 kubenswrapper[30278]: I0318 18:20:26.226122 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p85jq\" (UniqueName: \"kubernetes.io/projected/21b3a964-ae1b-49d5-be02-c1b7397b406c-kube-api-access-p85jq\") pod \"nova-api-db-create-275vd\" (UID: \"21b3a964-ae1b-49d5-be02-c1b7397b406c\") " pod="openstack/nova-api-db-create-275vd" Mar 18 18:20:26.247011 master-0 kubenswrapper[30278]: I0318 18:20:26.246935 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-275vd"] Mar 18 18:20:26.268785 master-0 kubenswrapper[30278]: I0318 18:20:26.260383 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-zf26j"] Mar 18 18:20:26.268785 master-0 kubenswrapper[30278]: I0318 18:20:26.267288 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zf26j" Mar 18 18:20:26.276770 master-0 kubenswrapper[30278]: I0318 18:20:26.275335 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zf26j"] Mar 18 18:20:26.344711 master-0 kubenswrapper[30278]: I0318 18:20:26.344625 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebb1d48c-efd7-4146-a06f-5eb19de9f51e-operator-scripts\") pod \"nova-cell0-db-create-zf26j\" (UID: \"ebb1d48c-efd7-4146-a06f-5eb19de9f51e\") " pod="openstack/nova-cell0-db-create-zf26j" Mar 18 18:20:26.345016 master-0 kubenswrapper[30278]: I0318 18:20:26.344940 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21b3a964-ae1b-49d5-be02-c1b7397b406c-operator-scripts\") pod \"nova-api-db-create-275vd\" (UID: \"21b3a964-ae1b-49d5-be02-c1b7397b406c\") " pod="openstack/nova-api-db-create-275vd" Mar 18 18:20:26.345016 master-0 kubenswrapper[30278]: I0318 18:20:26.345007 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrxkn\" (UniqueName: \"kubernetes.io/projected/ebb1d48c-efd7-4146-a06f-5eb19de9f51e-kube-api-access-vrxkn\") pod \"nova-cell0-db-create-zf26j\" (UID: \"ebb1d48c-efd7-4146-a06f-5eb19de9f51e\") " pod="openstack/nova-cell0-db-create-zf26j" Mar 18 18:20:26.345807 master-0 kubenswrapper[30278]: I0318 18:20:26.345769 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p85jq\" (UniqueName: \"kubernetes.io/projected/21b3a964-ae1b-49d5-be02-c1b7397b406c-kube-api-access-p85jq\") pod \"nova-api-db-create-275vd\" (UID: \"21b3a964-ae1b-49d5-be02-c1b7397b406c\") " pod="openstack/nova-api-db-create-275vd" Mar 18 18:20:26.348373 master-0 kubenswrapper[30278]: I0318 18:20:26.348127 30278 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21b3a964-ae1b-49d5-be02-c1b7397b406c-operator-scripts\") pod \"nova-api-db-create-275vd\" (UID: \"21b3a964-ae1b-49d5-be02-c1b7397b406c\") " pod="openstack/nova-api-db-create-275vd" Mar 18 18:20:26.354421 master-0 kubenswrapper[30278]: I0318 18:20:26.352696 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-16af-account-create-update-nz97w"] Mar 18 18:20:26.355304 master-0 kubenswrapper[30278]: I0318 18:20:26.355250 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-16af-account-create-update-nz97w" Mar 18 18:20:26.362256 master-0 kubenswrapper[30278]: I0318 18:20:26.358954 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 18 18:20:26.369472 master-0 kubenswrapper[30278]: I0318 18:20:26.369419 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p85jq\" (UniqueName: \"kubernetes.io/projected/21b3a964-ae1b-49d5-be02-c1b7397b406c-kube-api-access-p85jq\") pod \"nova-api-db-create-275vd\" (UID: \"21b3a964-ae1b-49d5-be02-c1b7397b406c\") " pod="openstack/nova-api-db-create-275vd" Mar 18 18:20:26.394398 master-0 kubenswrapper[30278]: I0318 18:20:26.394254 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-16af-account-create-update-nz97w"] Mar 18 18:20:26.449103 master-0 kubenswrapper[30278]: I0318 18:20:26.449007 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de752594-4e91-4400-bc57-3a77ddbc66f7-operator-scripts\") pod \"nova-api-16af-account-create-update-nz97w\" (UID: \"de752594-4e91-4400-bc57-3a77ddbc66f7\") " pod="openstack/nova-api-16af-account-create-update-nz97w" Mar 18 18:20:26.449103 master-0 kubenswrapper[30278]: I0318 
18:20:26.449088 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebb1d48c-efd7-4146-a06f-5eb19de9f51e-operator-scripts\") pod \"nova-cell0-db-create-zf26j\" (UID: \"ebb1d48c-efd7-4146-a06f-5eb19de9f51e\") " pod="openstack/nova-cell0-db-create-zf26j" Mar 18 18:20:26.449372 master-0 kubenswrapper[30278]: I0318 18:20:26.449319 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrxkn\" (UniqueName: \"kubernetes.io/projected/ebb1d48c-efd7-4146-a06f-5eb19de9f51e-kube-api-access-vrxkn\") pod \"nova-cell0-db-create-zf26j\" (UID: \"ebb1d48c-efd7-4146-a06f-5eb19de9f51e\") " pod="openstack/nova-cell0-db-create-zf26j" Mar 18 18:20:26.449596 master-0 kubenswrapper[30278]: I0318 18:20:26.449558 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm5l9\" (UniqueName: \"kubernetes.io/projected/de752594-4e91-4400-bc57-3a77ddbc66f7-kube-api-access-bm5l9\") pod \"nova-api-16af-account-create-update-nz97w\" (UID: \"de752594-4e91-4400-bc57-3a77ddbc66f7\") " pod="openstack/nova-api-16af-account-create-update-nz97w" Mar 18 18:20:26.450476 master-0 kubenswrapper[30278]: I0318 18:20:26.450399 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-jmrkj"] Mar 18 18:20:26.450632 master-0 kubenswrapper[30278]: I0318 18:20:26.450592 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebb1d48c-efd7-4146-a06f-5eb19de9f51e-operator-scripts\") pod \"nova-cell0-db-create-zf26j\" (UID: \"ebb1d48c-efd7-4146-a06f-5eb19de9f51e\") " pod="openstack/nova-cell0-db-create-zf26j" Mar 18 18:20:26.452240 master-0 kubenswrapper[30278]: I0318 18:20:26.452202 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jmrkj" Mar 18 18:20:26.472187 master-0 kubenswrapper[30278]: I0318 18:20:26.472081 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrxkn\" (UniqueName: \"kubernetes.io/projected/ebb1d48c-efd7-4146-a06f-5eb19de9f51e-kube-api-access-vrxkn\") pod \"nova-cell0-db-create-zf26j\" (UID: \"ebb1d48c-efd7-4146-a06f-5eb19de9f51e\") " pod="openstack/nova-cell0-db-create-zf26j" Mar 18 18:20:26.487522 master-0 kubenswrapper[30278]: I0318 18:20:26.486986 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jmrkj"] Mar 18 18:20:26.563336 master-0 kubenswrapper[30278]: I0318 18:20:26.563258 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d70da1e8-5ba9-440d-bd18-6add06bb23ef-operator-scripts\") pod \"nova-cell1-db-create-jmrkj\" (UID: \"d70da1e8-5ba9-440d-bd18-6add06bb23ef\") " pod="openstack/nova-cell1-db-create-jmrkj" Mar 18 18:20:26.564136 master-0 kubenswrapper[30278]: I0318 18:20:26.563840 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxcvt\" (UniqueName: \"kubernetes.io/projected/d70da1e8-5ba9-440d-bd18-6add06bb23ef-kube-api-access-dxcvt\") pod \"nova-cell1-db-create-jmrkj\" (UID: \"d70da1e8-5ba9-440d-bd18-6add06bb23ef\") " pod="openstack/nova-cell1-db-create-jmrkj" Mar 18 18:20:26.564316 master-0 kubenswrapper[30278]: I0318 18:20:26.564288 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de752594-4e91-4400-bc57-3a77ddbc66f7-operator-scripts\") pod \"nova-api-16af-account-create-update-nz97w\" (UID: \"de752594-4e91-4400-bc57-3a77ddbc66f7\") " pod="openstack/nova-api-16af-account-create-update-nz97w" Mar 18 18:20:26.565835 master-0 kubenswrapper[30278]: I0318 
18:20:26.565781 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de752594-4e91-4400-bc57-3a77ddbc66f7-operator-scripts\") pod \"nova-api-16af-account-create-update-nz97w\" (UID: \"de752594-4e91-4400-bc57-3a77ddbc66f7\") " pod="openstack/nova-api-16af-account-create-update-nz97w" Mar 18 18:20:26.568135 master-0 kubenswrapper[30278]: I0318 18:20:26.568111 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm5l9\" (UniqueName: \"kubernetes.io/projected/de752594-4e91-4400-bc57-3a77ddbc66f7-kube-api-access-bm5l9\") pod \"nova-api-16af-account-create-update-nz97w\" (UID: \"de752594-4e91-4400-bc57-3a77ddbc66f7\") " pod="openstack/nova-api-16af-account-create-update-nz97w" Mar 18 18:20:26.597865 master-0 kubenswrapper[30278]: I0318 18:20:26.596407 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-275vd" Mar 18 18:20:26.606135 master-0 kubenswrapper[30278]: I0318 18:20:26.604034 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm5l9\" (UniqueName: \"kubernetes.io/projected/de752594-4e91-4400-bc57-3a77ddbc66f7-kube-api-access-bm5l9\") pod \"nova-api-16af-account-create-update-nz97w\" (UID: \"de752594-4e91-4400-bc57-3a77ddbc66f7\") " pod="openstack/nova-api-16af-account-create-update-nz97w" Mar 18 18:20:26.625345 master-0 kubenswrapper[30278]: I0318 18:20:26.624975 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zf26j" Mar 18 18:20:26.640794 master-0 kubenswrapper[30278]: I0318 18:20:26.640714 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7471-account-create-update-fv6xj"] Mar 18 18:20:26.649920 master-0 kubenswrapper[30278]: I0318 18:20:26.649857 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7471-account-create-update-fv6xj" Mar 18 18:20:26.654542 master-0 kubenswrapper[30278]: I0318 18:20:26.653999 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 18 18:20:26.684170 master-0 kubenswrapper[30278]: I0318 18:20:26.681701 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7471-account-create-update-fv6xj"] Mar 18 18:20:26.713317 master-0 kubenswrapper[30278]: I0318 18:20:26.711421 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d70da1e8-5ba9-440d-bd18-6add06bb23ef-operator-scripts\") pod \"nova-cell1-db-create-jmrkj\" (UID: \"d70da1e8-5ba9-440d-bd18-6add06bb23ef\") " pod="openstack/nova-cell1-db-create-jmrkj" Mar 18 18:20:26.713317 master-0 kubenswrapper[30278]: I0318 18:20:26.711692 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxcvt\" (UniqueName: \"kubernetes.io/projected/d70da1e8-5ba9-440d-bd18-6add06bb23ef-kube-api-access-dxcvt\") pod \"nova-cell1-db-create-jmrkj\" (UID: \"d70da1e8-5ba9-440d-bd18-6add06bb23ef\") " pod="openstack/nova-cell1-db-create-jmrkj" Mar 18 18:20:26.723204 master-0 kubenswrapper[30278]: I0318 18:20:26.715395 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d70da1e8-5ba9-440d-bd18-6add06bb23ef-operator-scripts\") pod \"nova-cell1-db-create-jmrkj\" (UID: \"d70da1e8-5ba9-440d-bd18-6add06bb23ef\") " pod="openstack/nova-cell1-db-create-jmrkj" Mar 18 18:20:26.748881 master-0 kubenswrapper[30278]: I0318 18:20:26.748735 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-16af-account-create-update-nz97w" Mar 18 18:20:26.759796 master-0 kubenswrapper[30278]: I0318 18:20:26.759630 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxcvt\" (UniqueName: \"kubernetes.io/projected/d70da1e8-5ba9-440d-bd18-6add06bb23ef-kube-api-access-dxcvt\") pod \"nova-cell1-db-create-jmrkj\" (UID: \"d70da1e8-5ba9-440d-bd18-6add06bb23ef\") " pod="openstack/nova-cell1-db-create-jmrkj" Mar 18 18:20:26.795608 master-0 kubenswrapper[30278]: I0318 18:20:26.795365 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5998-account-create-update-w7qdg"] Mar 18 18:20:26.799105 master-0 kubenswrapper[30278]: I0318 18:20:26.799045 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5998-account-create-update-w7qdg" Mar 18 18:20:26.809189 master-0 kubenswrapper[30278]: I0318 18:20:26.809082 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5998-account-create-update-w7qdg"] Mar 18 18:20:26.813543 master-0 kubenswrapper[30278]: I0318 18:20:26.813485 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 18 18:20:26.820761 master-0 kubenswrapper[30278]: I0318 18:20:26.820675 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6mfd\" (UniqueName: \"kubernetes.io/projected/43f4a237-d80c-40a8-ac9f-ae9422afb881-kube-api-access-h6mfd\") pod \"nova-cell0-7471-account-create-update-fv6xj\" (UID: \"43f4a237-d80c-40a8-ac9f-ae9422afb881\") " pod="openstack/nova-cell0-7471-account-create-update-fv6xj" Mar 18 18:20:26.820915 master-0 kubenswrapper[30278]: I0318 18:20:26.820878 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/43f4a237-d80c-40a8-ac9f-ae9422afb881-operator-scripts\") pod \"nova-cell0-7471-account-create-update-fv6xj\" (UID: \"43f4a237-d80c-40a8-ac9f-ae9422afb881\") " pod="openstack/nova-cell0-7471-account-create-update-fv6xj" Mar 18 18:20:26.914585 master-0 kubenswrapper[30278]: I0318 18:20:26.914495 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jmrkj" Mar 18 18:20:26.924104 master-0 kubenswrapper[30278]: I0318 18:20:26.924041 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq5fd\" (UniqueName: \"kubernetes.io/projected/25ff64e9-3a30-4ee8-a9d2-3b1dec433087-kube-api-access-qq5fd\") pod \"nova-cell1-5998-account-create-update-w7qdg\" (UID: \"25ff64e9-3a30-4ee8-a9d2-3b1dec433087\") " pod="openstack/nova-cell1-5998-account-create-update-w7qdg" Mar 18 18:20:26.924298 master-0 kubenswrapper[30278]: I0318 18:20:26.924247 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6mfd\" (UniqueName: \"kubernetes.io/projected/43f4a237-d80c-40a8-ac9f-ae9422afb881-kube-api-access-h6mfd\") pod \"nova-cell0-7471-account-create-update-fv6xj\" (UID: \"43f4a237-d80c-40a8-ac9f-ae9422afb881\") " pod="openstack/nova-cell0-7471-account-create-update-fv6xj" Mar 18 18:20:26.924358 master-0 kubenswrapper[30278]: I0318 18:20:26.924344 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25ff64e9-3a30-4ee8-a9d2-3b1dec433087-operator-scripts\") pod \"nova-cell1-5998-account-create-update-w7qdg\" (UID: \"25ff64e9-3a30-4ee8-a9d2-3b1dec433087\") " pod="openstack/nova-cell1-5998-account-create-update-w7qdg" Mar 18 18:20:26.924425 master-0 kubenswrapper[30278]: I0318 18:20:26.924405 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/43f4a237-d80c-40a8-ac9f-ae9422afb881-operator-scripts\") pod \"nova-cell0-7471-account-create-update-fv6xj\" (UID: \"43f4a237-d80c-40a8-ac9f-ae9422afb881\") " pod="openstack/nova-cell0-7471-account-create-update-fv6xj" Mar 18 18:20:26.927334 master-0 kubenswrapper[30278]: I0318 18:20:26.927297 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43f4a237-d80c-40a8-ac9f-ae9422afb881-operator-scripts\") pod \"nova-cell0-7471-account-create-update-fv6xj\" (UID: \"43f4a237-d80c-40a8-ac9f-ae9422afb881\") " pod="openstack/nova-cell0-7471-account-create-update-fv6xj" Mar 18 18:20:26.953301 master-0 kubenswrapper[30278]: I0318 18:20:26.946420 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6mfd\" (UniqueName: \"kubernetes.io/projected/43f4a237-d80c-40a8-ac9f-ae9422afb881-kube-api-access-h6mfd\") pod \"nova-cell0-7471-account-create-update-fv6xj\" (UID: \"43f4a237-d80c-40a8-ac9f-ae9422afb881\") " pod="openstack/nova-cell0-7471-account-create-update-fv6xj" Mar 18 18:20:27.002423 master-0 kubenswrapper[30278]: I0318 18:20:27.000035 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7471-account-create-update-fv6xj" Mar 18 18:20:27.027577 master-0 kubenswrapper[30278]: I0318 18:20:27.027504 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq5fd\" (UniqueName: \"kubernetes.io/projected/25ff64e9-3a30-4ee8-a9d2-3b1dec433087-kube-api-access-qq5fd\") pod \"nova-cell1-5998-account-create-update-w7qdg\" (UID: \"25ff64e9-3a30-4ee8-a9d2-3b1dec433087\") " pod="openstack/nova-cell1-5998-account-create-update-w7qdg" Mar 18 18:20:27.028169 master-0 kubenswrapper[30278]: I0318 18:20:27.028138 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25ff64e9-3a30-4ee8-a9d2-3b1dec433087-operator-scripts\") pod \"nova-cell1-5998-account-create-update-w7qdg\" (UID: \"25ff64e9-3a30-4ee8-a9d2-3b1dec433087\") " pod="openstack/nova-cell1-5998-account-create-update-w7qdg" Mar 18 18:20:27.029265 master-0 kubenswrapper[30278]: I0318 18:20:27.029228 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25ff64e9-3a30-4ee8-a9d2-3b1dec433087-operator-scripts\") pod \"nova-cell1-5998-account-create-update-w7qdg\" (UID: \"25ff64e9-3a30-4ee8-a9d2-3b1dec433087\") " pod="openstack/nova-cell1-5998-account-create-update-w7qdg" Mar 18 18:20:27.046698 master-0 kubenswrapper[30278]: I0318 18:20:27.046331 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq5fd\" (UniqueName: \"kubernetes.io/projected/25ff64e9-3a30-4ee8-a9d2-3b1dec433087-kube-api-access-qq5fd\") pod \"nova-cell1-5998-account-create-update-w7qdg\" (UID: \"25ff64e9-3a30-4ee8-a9d2-3b1dec433087\") " pod="openstack/nova-cell1-5998-account-create-update-w7qdg" Mar 18 18:20:27.159833 master-0 kubenswrapper[30278]: I0318 18:20:27.158896 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5998-account-create-update-w7qdg" Mar 18 18:20:27.761238 master-0 kubenswrapper[30278]: I0318 18:20:27.761023 30278 generic.go:334] "Generic (PLEG): container finished" podID="f25d0677-228e-4b99-bc1f-abbbceebffc4" containerID="e18706ba2089f86bdd3de65ed66a8da498827fa5e84969cbe47d4e70f60da7a2" exitCode=143 Mar 18 18:20:27.761969 master-0 kubenswrapper[30278]: I0318 18:20:27.761200 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f986975b-8wc5r" event={"ID":"f25d0677-228e-4b99-bc1f-abbbceebffc4","Type":"ContainerDied","Data":"e18706ba2089f86bdd3de65ed66a8da498827fa5e84969cbe47d4e70f60da7a2"} Mar 18 18:20:27.768961 master-0 kubenswrapper[30278]: I0318 18:20:27.768903 30278 generic.go:334] "Generic (PLEG): container finished" podID="ca02800f-5799-45c1-8737-409cb6665117" containerID="07c0151d0e77c6b415e88f17fe047729fe52781df6ec02f05b17131801556584" exitCode=0 Mar 18 18:20:27.768961 master-0 kubenswrapper[30278]: I0318 18:20:27.768961 30278 generic.go:334] "Generic (PLEG): container finished" podID="ca02800f-5799-45c1-8737-409cb6665117" containerID="bb4c4cb453389606886622e8b73636f3049a1f4c97339b0c1df7e6a0aa350f3a" exitCode=143 Mar 18 18:20:27.769123 master-0 kubenswrapper[30278]: I0318 18:20:27.769003 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7db756448-vwstn" event={"ID":"ca02800f-5799-45c1-8737-409cb6665117","Type":"ContainerDied","Data":"07c0151d0e77c6b415e88f17fe047729fe52781df6ec02f05b17131801556584"} Mar 18 18:20:27.769123 master-0 kubenswrapper[30278]: I0318 18:20:27.769082 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7db756448-vwstn" event={"ID":"ca02800f-5799-45c1-8737-409cb6665117","Type":"ContainerDied","Data":"bb4c4cb453389606886622e8b73636f3049a1f4c97339b0c1df7e6a0aa350f3a"} Mar 18 18:20:33.016743 master-0 kubenswrapper[30278]: I0318 18:20:33.016640 30278 scope.go:117] "RemoveContainer" 
containerID="89f9a2f243d56eb15727bacfbebb53635e792bb42d34a4b447dd4b068abbaaaf" Mar 18 18:20:33.280744 master-0 kubenswrapper[30278]: I0318 18:20:33.280683 30278 scope.go:117] "RemoveContainer" containerID="b5dcd73154a049e80ab13b2eb80bcf7481b7aceaf5de5b0d4df0bed066bb9647" Mar 18 18:20:33.398868 master-0 kubenswrapper[30278]: I0318 18:20:33.391186 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:20:33.398868 master-0 kubenswrapper[30278]: I0318 18:20:33.391989 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7db756448-vwstn" Mar 18 18:20:33.498664 master-0 kubenswrapper[30278]: I0318 18:20:33.497787 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data-merged\") pod \"f25d0677-228e-4b99-bc1f-abbbceebffc4\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " Mar 18 18:20:33.498664 master-0 kubenswrapper[30278]: I0318 18:20:33.497860 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-public-tls-certs\") pod \"ca02800f-5799-45c1-8737-409cb6665117\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " Mar 18 18:20:33.498664 master-0 kubenswrapper[30278]: I0318 18:20:33.497942 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-internal-tls-certs\") pod \"ca02800f-5799-45c1-8737-409cb6665117\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " Mar 18 18:20:33.498664 master-0 kubenswrapper[30278]: I0318 18:20:33.498001 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-combined-ca-bundle\") pod \"ca02800f-5799-45c1-8737-409cb6665117\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " Mar 18 18:20:33.498664 master-0 kubenswrapper[30278]: I0318 18:20:33.498029 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xscw8\" (UniqueName: \"kubernetes.io/projected/f25d0677-228e-4b99-bc1f-abbbceebffc4-kube-api-access-xscw8\") pod \"f25d0677-228e-4b99-bc1f-abbbceebffc4\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " Mar 18 18:20:33.498664 master-0 kubenswrapper[30278]: I0318 18:20:33.498188 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-scripts\") pod \"f25d0677-228e-4b99-bc1f-abbbceebffc4\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " Mar 18 18:20:33.498664 master-0 kubenswrapper[30278]: I0318 18:20:33.498236 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f25d0677-228e-4b99-bc1f-abbbceebffc4-logs\") pod \"f25d0677-228e-4b99-bc1f-abbbceebffc4\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " Mar 18 18:20:33.498664 master-0 kubenswrapper[30278]: I0318 18:20:33.498357 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f25d0677-228e-4b99-bc1f-abbbceebffc4-etc-podinfo\") pod \"f25d0677-228e-4b99-bc1f-abbbceebffc4\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " Mar 18 18:20:33.498664 master-0 kubenswrapper[30278]: I0318 18:20:33.498387 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca02800f-5799-45c1-8737-409cb6665117-logs\") pod \"ca02800f-5799-45c1-8737-409cb6665117\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " Mar 18 18:20:33.498664 
master-0 kubenswrapper[30278]: I0318 18:20:33.498415 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-config-data\") pod \"ca02800f-5799-45c1-8737-409cb6665117\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " Mar 18 18:20:33.513768 master-0 kubenswrapper[30278]: I0318 18:20:33.513530 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f25d0677-228e-4b99-bc1f-abbbceebffc4-logs" (OuterVolumeSpecName: "logs") pod "f25d0677-228e-4b99-bc1f-abbbceebffc4" (UID: "f25d0677-228e-4b99-bc1f-abbbceebffc4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:20:33.518465 master-0 kubenswrapper[30278]: I0318 18:20:33.518382 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "f25d0677-228e-4b99-bc1f-abbbceebffc4" (UID: "f25d0677-228e-4b99-bc1f-abbbceebffc4"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:20:33.521578 master-0 kubenswrapper[30278]: I0318 18:20:33.521454 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-scripts\") pod \"ca02800f-5799-45c1-8737-409cb6665117\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " Mar 18 18:20:33.521578 master-0 kubenswrapper[30278]: I0318 18:20:33.521553 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-combined-ca-bundle\") pod \"f25d0677-228e-4b99-bc1f-abbbceebffc4\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " Mar 18 18:20:33.521712 master-0 kubenswrapper[30278]: I0318 18:20:33.521632 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpv4n\" (UniqueName: \"kubernetes.io/projected/ca02800f-5799-45c1-8737-409cb6665117-kube-api-access-rpv4n\") pod \"ca02800f-5799-45c1-8737-409cb6665117\" (UID: \"ca02800f-5799-45c1-8737-409cb6665117\") " Mar 18 18:20:33.521848 master-0 kubenswrapper[30278]: I0318 18:20:33.521819 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data-custom\") pod \"f25d0677-228e-4b99-bc1f-abbbceebffc4\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " Mar 18 18:20:33.521908 master-0 kubenswrapper[30278]: I0318 18:20:33.521862 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data\") pod \"f25d0677-228e-4b99-bc1f-abbbceebffc4\" (UID: \"f25d0677-228e-4b99-bc1f-abbbceebffc4\") " Mar 18 18:20:33.523202 master-0 kubenswrapper[30278]: I0318 18:20:33.523174 30278 
reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data-merged\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:33.523202 master-0 kubenswrapper[30278]: I0318 18:20:33.523201 30278 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f25d0677-228e-4b99-bc1f-abbbceebffc4-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:33.533926 master-0 kubenswrapper[30278]: I0318 18:20:33.533832 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca02800f-5799-45c1-8737-409cb6665117-logs" (OuterVolumeSpecName: "logs") pod "ca02800f-5799-45c1-8737-409cb6665117" (UID: "ca02800f-5799-45c1-8737-409cb6665117"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:20:33.538016 master-0 kubenswrapper[30278]: I0318 18:20:33.537904 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-scripts" (OuterVolumeSpecName: "scripts") pod "f25d0677-228e-4b99-bc1f-abbbceebffc4" (UID: "f25d0677-228e-4b99-bc1f-abbbceebffc4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:33.556596 master-0 kubenswrapper[30278]: I0318 18:20:33.556473 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f25d0677-228e-4b99-bc1f-abbbceebffc4-kube-api-access-xscw8" (OuterVolumeSpecName: "kube-api-access-xscw8") pod "f25d0677-228e-4b99-bc1f-abbbceebffc4" (UID: "f25d0677-228e-4b99-bc1f-abbbceebffc4"). InnerVolumeSpecName "kube-api-access-xscw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:20:33.556596 master-0 kubenswrapper[30278]: I0318 18:20:33.556603 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f25d0677-228e-4b99-bc1f-abbbceebffc4-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "f25d0677-228e-4b99-bc1f-abbbceebffc4" (UID: "f25d0677-228e-4b99-bc1f-abbbceebffc4"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 18 18:20:33.556880 master-0 kubenswrapper[30278]: I0318 18:20:33.556680 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f25d0677-228e-4b99-bc1f-abbbceebffc4" (UID: "f25d0677-228e-4b99-bc1f-abbbceebffc4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:33.558658 master-0 kubenswrapper[30278]: I0318 18:20:33.558609 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-scripts" (OuterVolumeSpecName: "scripts") pod "ca02800f-5799-45c1-8737-409cb6665117" (UID: "ca02800f-5799-45c1-8737-409cb6665117"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:33.607001 master-0 kubenswrapper[30278]: I0318 18:20:33.606901 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca02800f-5799-45c1-8737-409cb6665117-kube-api-access-rpv4n" (OuterVolumeSpecName: "kube-api-access-rpv4n") pod "ca02800f-5799-45c1-8737-409cb6665117" (UID: "ca02800f-5799-45c1-8737-409cb6665117"). InnerVolumeSpecName "kube-api-access-rpv4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:20:33.607489 master-0 kubenswrapper[30278]: I0318 18:20:33.607371 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data" (OuterVolumeSpecName: "config-data") pod "f25d0677-228e-4b99-bc1f-abbbceebffc4" (UID: "f25d0677-228e-4b99-bc1f-abbbceebffc4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:33.635306 master-0 kubenswrapper[30278]: I0318 18:20:33.635220 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xscw8\" (UniqueName: \"kubernetes.io/projected/f25d0677-228e-4b99-bc1f-abbbceebffc4-kube-api-access-xscw8\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:33.635306 master-0 kubenswrapper[30278]: I0318 18:20:33.635299 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:33.635306 master-0 kubenswrapper[30278]: I0318 18:20:33.635311 30278 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f25d0677-228e-4b99-bc1f-abbbceebffc4-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:33.637659 master-0 kubenswrapper[30278]: I0318 18:20:33.635322 30278 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca02800f-5799-45c1-8737-409cb6665117-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:33.637659 master-0 kubenswrapper[30278]: I0318 18:20:33.635333 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:33.637659 master-0 kubenswrapper[30278]: I0318 18:20:33.635342 30278 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-rpv4n\" (UniqueName: \"kubernetes.io/projected/ca02800f-5799-45c1-8737-409cb6665117-kube-api-access-rpv4n\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:33.637659 master-0 kubenswrapper[30278]: I0318 18:20:33.635350 30278 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data-custom\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:33.637659 master-0 kubenswrapper[30278]: I0318 18:20:33.635389 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:33.671892 master-0 kubenswrapper[30278]: I0318 18:20:33.671786 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-config-data" (OuterVolumeSpecName: "config-data") pod "ca02800f-5799-45c1-8737-409cb6665117" (UID: "ca02800f-5799-45c1-8737-409cb6665117"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:33.753145 master-0 kubenswrapper[30278]: I0318 18:20:33.752953 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:33.782479 master-0 kubenswrapper[30278]: I0318 18:20:33.782389 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f25d0677-228e-4b99-bc1f-abbbceebffc4" (UID: "f25d0677-228e-4b99-bc1f-abbbceebffc4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:33.802550 master-0 kubenswrapper[30278]: I0318 18:20:33.802480 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca02800f-5799-45c1-8737-409cb6665117" (UID: "ca02800f-5799-45c1-8737-409cb6665117"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:33.832589 master-0 kubenswrapper[30278]: I0318 18:20:33.830309 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ca02800f-5799-45c1-8737-409cb6665117" (UID: "ca02800f-5799-45c1-8737-409cb6665117"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:33.859129 master-0 kubenswrapper[30278]: I0318 18:20:33.859076 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f25d0677-228e-4b99-bc1f-abbbceebffc4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:33.859129 master-0 kubenswrapper[30278]: I0318 18:20:33.859125 30278 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:33.859473 master-0 kubenswrapper[30278]: I0318 18:20:33.859140 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:33.891441 master-0 kubenswrapper[30278]: I0318 18:20:33.891365 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ca02800f-5799-45c1-8737-409cb6665117" (UID: "ca02800f-5799-45c1-8737-409cb6665117"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:33.893890 master-0 kubenswrapper[30278]: I0318 18:20:33.893833 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-f986975b-8wc5r" event={"ID":"f25d0677-228e-4b99-bc1f-abbbceebffc4","Type":"ContainerDied","Data":"e5abe77015db00f9866381dd21c28369ef02e6348c06e3858d8e73c8e5276062"} Mar 18 18:20:33.893962 master-0 kubenswrapper[30278]: I0318 18:20:33.893915 30278 scope.go:117] "RemoveContainer" containerID="a45be92866e438be95c8ae2186257c894b340cac55442745042a2707e7c1df8b" Mar 18 18:20:33.894044 master-0 kubenswrapper[30278]: I0318 18:20:33.894016 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-f986975b-8wc5r" Mar 18 18:20:33.911946 master-0 kubenswrapper[30278]: I0318 18:20:33.911864 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7db756448-vwstn" Mar 18 18:20:33.912196 master-0 kubenswrapper[30278]: I0318 18:20:33.912066 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7db756448-vwstn" event={"ID":"ca02800f-5799-45c1-8737-409cb6665117","Type":"ContainerDied","Data":"b9a9c189983cd3d176a2296250543355871dbcb18b0c063a465eb65dd7550341"} Mar 18 18:20:33.971930 master-0 kubenswrapper[30278]: I0318 18:20:33.971833 30278 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca02800f-5799-45c1-8737-409cb6665117-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:33.992642 master-0 kubenswrapper[30278]: I0318 18:20:33.992442 30278 scope.go:117] "RemoveContainer" containerID="e18706ba2089f86bdd3de65ed66a8da498827fa5e84969cbe47d4e70f60da7a2" Mar 18 18:20:33.998376 master-0 kubenswrapper[30278]: I0318 18:20:33.998324 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-f986975b-8wc5r"] Mar 18 18:20:34.022141 master-0 kubenswrapper[30278]: I0318 18:20:34.022022 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-f986975b-8wc5r"] Mar 18 18:20:34.041602 master-0 kubenswrapper[30278]: I0318 18:20:34.041450 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7db756448-vwstn"] Mar 18 18:20:34.057201 master-0 kubenswrapper[30278]: I0318 18:20:34.057154 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7db756448-vwstn"] Mar 18 18:20:34.066594 master-0 kubenswrapper[30278]: I0318 18:20:34.066374 30278 scope.go:117] "RemoveContainer" containerID="11129141c9f66a372b2710e8c6e0d88bba043d2711f11b26695f3d249e378775" Mar 18 18:20:34.101574 master-0 kubenswrapper[30278]: I0318 18:20:34.101510 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5998-account-create-update-w7qdg"] Mar 18 18:20:34.132582 master-0 
kubenswrapper[30278]: I0318 18:20:34.132215 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 18 18:20:34.190195 master-0 kubenswrapper[30278]: I0318 18:20:34.179338 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zf26j"] Mar 18 18:20:34.191107 master-0 kubenswrapper[30278]: I0318 18:20:34.190429 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7471-account-create-update-fv6xj"] Mar 18 18:20:34.254145 master-0 kubenswrapper[30278]: I0318 18:20:34.253768 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 18 18:20:34.370427 master-0 kubenswrapper[30278]: I0318 18:20:34.365173 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jmrkj"] Mar 18 18:20:34.376246 master-0 kubenswrapper[30278]: W0318 18:20:34.376186 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21b3a964_ae1b_49d5_be02_c1b7397b406c.slice/crio-a846a836cce4c40cad2d188997cc9c1c17be4c2b5aa14a52069b67cb273bc128 WatchSource:0}: Error finding container a846a836cce4c40cad2d188997cc9c1c17be4c2b5aa14a52069b67cb273bc128: Status 404 returned error can't find the container with id a846a836cce4c40cad2d188997cc9c1c17be4c2b5aa14a52069b67cb273bc128 Mar 18 18:20:34.381803 master-0 kubenswrapper[30278]: W0318 18:20:34.381733 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde752594_4e91_4400_bc57_3a77ddbc66f7.slice/crio-fe2c721bf3ffa93125798f5a182863e7672326472cffbfa56e0cd305a76011fa WatchSource:0}: Error finding container fe2c721bf3ffa93125798f5a182863e7672326472cffbfa56e0cd305a76011fa: Status 404 returned error can't find the container with id fe2c721bf3ffa93125798f5a182863e7672326472cffbfa56e0cd305a76011fa Mar 18 
18:20:34.390318 master-0 kubenswrapper[30278]: I0318 18:20:34.390265 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 18 18:20:34.404066 master-0 kubenswrapper[30278]: I0318 18:20:34.404011 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-275vd"] Mar 18 18:20:34.415773 master-0 kubenswrapper[30278]: I0318 18:20:34.415702 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-16af-account-create-update-nz97w"] Mar 18 18:20:34.530313 master-0 kubenswrapper[30278]: I0318 18:20:34.529756 30278 scope.go:117] "RemoveContainer" containerID="07c0151d0e77c6b415e88f17fe047729fe52781df6ec02f05b17131801556584" Mar 18 18:20:34.577626 master-0 kubenswrapper[30278]: I0318 18:20:34.577505 30278 scope.go:117] "RemoveContainer" containerID="bb4c4cb453389606886622e8b73636f3049a1f4c97339b0c1df7e6a0aa350f3a" Mar 18 18:20:34.948948 master-0 kubenswrapper[30278]: I0318 18:20:34.948790 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6","Type":"ContainerStarted","Data":"f83190e654f83ecf76b77012b02d3ef805b842c733ff653f484a74fa9d2cc713"} Mar 18 18:20:34.960219 master-0 kubenswrapper[30278]: I0318 18:20:34.959012 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-275vd" event={"ID":"21b3a964-ae1b-49d5-be02-c1b7397b406c","Type":"ContainerStarted","Data":"329f264c399b78a1961b1411a59d47d420071704ac4ce5adec910f69ca13d7cb"} Mar 18 18:20:34.960219 master-0 kubenswrapper[30278]: I0318 18:20:34.959118 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-275vd" event={"ID":"21b3a964-ae1b-49d5-be02-c1b7397b406c","Type":"ContainerStarted","Data":"a846a836cce4c40cad2d188997cc9c1c17be4c2b5aa14a52069b67cb273bc128"} Mar 18 18:20:34.977499 master-0 kubenswrapper[30278]: I0318 18:20:34.976432 30278 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell0-db-create-zf26j" event={"ID":"ebb1d48c-efd7-4146-a06f-5eb19de9f51e","Type":"ContainerStarted","Data":"49c7d6328a6cde79ba5f2db6c897ad14e3685b9629c996b1626de76dca7e8d40"} Mar 18 18:20:34.977499 master-0 kubenswrapper[30278]: I0318 18:20:34.976516 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zf26j" event={"ID":"ebb1d48c-efd7-4146-a06f-5eb19de9f51e","Type":"ContainerStarted","Data":"67f5008c9b8ab88654f8d9f57ae74bad2649d0963f6dd718197e53bf87e1dc19"} Mar 18 18:20:34.987911 master-0 kubenswrapper[30278]: I0318 18:20:34.987798 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.6636392129999997 podStartE2EDuration="29.987771864s" podCreationTimestamp="2026-03-18 18:20:05 +0000 UTC" firstStartedPulling="2026-03-18 18:20:06.898292804 +0000 UTC m=+1176.065477399" lastFinishedPulling="2026-03-18 18:20:33.222425455 +0000 UTC m=+1202.389610050" observedRunningTime="2026-03-18 18:20:34.969891802 +0000 UTC m=+1204.137076397" watchObservedRunningTime="2026-03-18 18:20:34.987771864 +0000 UTC m=+1204.154956459" Mar 18 18:20:35.000255 master-0 kubenswrapper[30278]: I0318 18:20:35.000148 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-275vd" podStartSLOduration=9.000115967 podStartE2EDuration="9.000115967s" podCreationTimestamp="2026-03-18 18:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:20:34.997842885 +0000 UTC m=+1204.165027480" watchObservedRunningTime="2026-03-18 18:20:35.000115967 +0000 UTC m=+1204.167300572" Mar 18 18:20:35.002645 master-0 kubenswrapper[30278]: I0318 18:20:35.002593 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-98qm9" 
event={"ID":"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f","Type":"ContainerStarted","Data":"4d63ed137f09003ace10d22211b45abb64e889d135e149436ec3c53a232574c9"} Mar 18 18:20:35.050354 master-0 kubenswrapper[30278]: I0318 18:20:35.026514 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-16af-account-create-update-nz97w" event={"ID":"de752594-4e91-4400-bc57-3a77ddbc66f7","Type":"ContainerStarted","Data":"b2a5b60d0f984455e777ac7f95712106027e3e44f3b131613a99178948a45d50"} Mar 18 18:20:35.050354 master-0 kubenswrapper[30278]: I0318 18:20:35.026605 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-16af-account-create-update-nz97w" event={"ID":"de752594-4e91-4400-bc57-3a77ddbc66f7","Type":"ContainerStarted","Data":"fe2c721bf3ffa93125798f5a182863e7672326472cffbfa56e0cd305a76011fa"} Mar 18 18:20:35.050354 master-0 kubenswrapper[30278]: I0318 18:20:35.034775 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-zf26j" podStartSLOduration=9.034751059 podStartE2EDuration="9.034751059s" podCreationTimestamp="2026-03-18 18:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:20:35.034470442 +0000 UTC m=+1204.201655037" watchObservedRunningTime="2026-03-18 18:20:35.034751059 +0000 UTC m=+1204.201935654" Mar 18 18:20:35.050354 master-0 kubenswrapper[30278]: I0318 18:20:35.043343 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5998-account-create-update-w7qdg" event={"ID":"25ff64e9-3a30-4ee8-a9d2-3b1dec433087","Type":"ContainerStarted","Data":"6b16a01b103b67c7f00ed1833daf2996a80bd678951c879f56efe92c3d364468"} Mar 18 18:20:35.050354 master-0 kubenswrapper[30278]: I0318 18:20:35.043396 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5998-account-create-update-w7qdg" 
event={"ID":"25ff64e9-3a30-4ee8-a9d2-3b1dec433087","Type":"ContainerStarted","Data":"16caba2cdc6d480ce5bfd4adcf212ad76c31b9f526352130ef93a7d1e5931d22"} Mar 18 18:20:35.059690 master-0 kubenswrapper[30278]: I0318 18:20:35.059404 30278 scope.go:117] "RemoveContainer" containerID="dec213597b25252c56c11febf89b5194730cf41627300b4afa10d377b51c37cd" Mar 18 18:20:35.076884 master-0 kubenswrapper[30278]: I0318 18:20:35.076803 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca02800f-5799-45c1-8737-409cb6665117" path="/var/lib/kubelet/pods/ca02800f-5799-45c1-8737-409cb6665117/volumes" Mar 18 18:20:35.078595 master-0 kubenswrapper[30278]: I0318 18:20:35.078554 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" path="/var/lib/kubelet/pods/f25d0677-228e-4b99-bc1f-abbbceebffc4/volumes" Mar 18 18:20:35.079627 master-0 kubenswrapper[30278]: I0318 18:20:35.079584 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jmrkj" event={"ID":"d70da1e8-5ba9-440d-bd18-6add06bb23ef","Type":"ContainerStarted","Data":"c600dd48453aa09ca9ef8cb3148e5cbee600729e53fe16b470fb057e130583cb"} Mar 18 18:20:35.079686 master-0 kubenswrapper[30278]: I0318 18:20:35.079630 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jmrkj" event={"ID":"d70da1e8-5ba9-440d-bd18-6add06bb23ef","Type":"ContainerStarted","Data":"6e4ca97f60cc3d3f98763f268c18add6f232a1c779b83a06e74da73f127e110f"} Mar 18 18:20:35.079686 master-0 kubenswrapper[30278]: I0318 18:20:35.079645 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7471-account-create-update-fv6xj" event={"ID":"43f4a237-d80c-40a8-ac9f-ae9422afb881","Type":"ContainerStarted","Data":"8b7bb0ba952cfbf131ebc8162764331dc31f3648bcfb86853c89643a00e2f322"} Mar 18 18:20:35.079686 master-0 kubenswrapper[30278]: I0318 18:20:35.079656 30278 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-cell0-7471-account-create-update-fv6xj" event={"ID":"43f4a237-d80c-40a8-ac9f-ae9422afb881","Type":"ContainerStarted","Data":"96dd41cc505ff5fa38c000e339edd63a99e054b5bf3d74c9ffc6ccc96f4e077d"} Mar 18 18:20:35.111537 master-0 kubenswrapper[30278]: I0318 18:20:35.111451 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e9af6002-27e3-414d-b61a-dc0f7d99768b","Type":"ContainerStarted","Data":"f210c6f0774243134ee34cdffd0fc04b2f440584100361a3c2851d0ec70898bd"} Mar 18 18:20:35.172230 master-0 kubenswrapper[30278]: I0318 18:20:35.169341 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-98qm9" podStartSLOduration=4.079174899 podStartE2EDuration="27.169309823s" podCreationTimestamp="2026-03-18 18:20:08 +0000 UTC" firstStartedPulling="2026-03-18 18:20:09.963633298 +0000 UTC m=+1179.130817893" lastFinishedPulling="2026-03-18 18:20:33.053768212 +0000 UTC m=+1202.220952817" observedRunningTime="2026-03-18 18:20:35.108656329 +0000 UTC m=+1204.275840924" watchObservedRunningTime="2026-03-18 18:20:35.169309823 +0000 UTC m=+1204.336494418" Mar 18 18:20:35.269298 master-0 kubenswrapper[30278]: I0318 18:20:35.268313 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-16af-account-create-update-nz97w" podStartSLOduration=9.268262198 podStartE2EDuration="9.268262198s" podCreationTimestamp="2026-03-18 18:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:20:35.130755245 +0000 UTC m=+1204.297939840" watchObservedRunningTime="2026-03-18 18:20:35.268262198 +0000 UTC m=+1204.435446793" Mar 18 18:20:35.333863 master-0 kubenswrapper[30278]: I0318 18:20:35.331372 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-5998-account-create-update-w7qdg" 
podStartSLOduration=9.331342768 podStartE2EDuration="9.331342768s" podCreationTimestamp="2026-03-18 18:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:20:35.205664823 +0000 UTC m=+1204.372849418" watchObservedRunningTime="2026-03-18 18:20:35.331342768 +0000 UTC m=+1204.498527363" Mar 18 18:20:35.344663 master-0 kubenswrapper[30278]: I0318 18:20:35.344537 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-jmrkj" podStartSLOduration=9.344514283 podStartE2EDuration="9.344514283s" podCreationTimestamp="2026-03-18 18:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:20:35.227021208 +0000 UTC m=+1204.394205803" watchObservedRunningTime="2026-03-18 18:20:35.344514283 +0000 UTC m=+1204.511698878" Mar 18 18:20:35.378915 master-0 kubenswrapper[30278]: I0318 18:20:35.378815 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-7471-account-create-update-fv6xj" podStartSLOduration=9.378787616 podStartE2EDuration="9.378787616s" podCreationTimestamp="2026-03-18 18:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:20:35.268981208 +0000 UTC m=+1204.436165793" watchObservedRunningTime="2026-03-18 18:20:35.378787616 +0000 UTC m=+1204.545972201" Mar 18 18:20:36.139310 master-0 kubenswrapper[30278]: I0318 18:20:36.138392 30278 generic.go:334] "Generic (PLEG): container finished" podID="ebb1d48c-efd7-4146-a06f-5eb19de9f51e" containerID="49c7d6328a6cde79ba5f2db6c897ad14e3685b9629c996b1626de76dca7e8d40" exitCode=0 Mar 18 18:20:36.139310 master-0 kubenswrapper[30278]: I0318 18:20:36.138533 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-zf26j" event={"ID":"ebb1d48c-efd7-4146-a06f-5eb19de9f51e","Type":"ContainerDied","Data":"49c7d6328a6cde79ba5f2db6c897ad14e3685b9629c996b1626de76dca7e8d40"} Mar 18 18:20:36.147301 master-0 kubenswrapper[30278]: I0318 18:20:36.144049 30278 generic.go:334] "Generic (PLEG): container finished" podID="d70da1e8-5ba9-440d-bd18-6add06bb23ef" containerID="c600dd48453aa09ca9ef8cb3148e5cbee600729e53fe16b470fb057e130583cb" exitCode=0 Mar 18 18:20:36.147301 master-0 kubenswrapper[30278]: I0318 18:20:36.144181 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jmrkj" event={"ID":"d70da1e8-5ba9-440d-bd18-6add06bb23ef","Type":"ContainerDied","Data":"c600dd48453aa09ca9ef8cb3148e5cbee600729e53fe16b470fb057e130583cb"} Mar 18 18:20:36.161802 master-0 kubenswrapper[30278]: I0318 18:20:36.160338 30278 generic.go:334] "Generic (PLEG): container finished" podID="43f4a237-d80c-40a8-ac9f-ae9422afb881" containerID="8b7bb0ba952cfbf131ebc8162764331dc31f3648bcfb86853c89643a00e2f322" exitCode=0 Mar 18 18:20:36.161802 master-0 kubenswrapper[30278]: I0318 18:20:36.160439 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7471-account-create-update-fv6xj" event={"ID":"43f4a237-d80c-40a8-ac9f-ae9422afb881","Type":"ContainerDied","Data":"8b7bb0ba952cfbf131ebc8162764331dc31f3648bcfb86853c89643a00e2f322"} Mar 18 18:20:36.182301 master-0 kubenswrapper[30278]: I0318 18:20:36.181813 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" event={"ID":"adb370b0-e5b4-4cc8-b1d2-c63363b70615","Type":"ContainerStarted","Data":"1f49586813c5119c606f713cbab2eb445202a8619ed84681ac25d56ee8c83de9"} Mar 18 18:20:36.185955 master-0 kubenswrapper[30278]: I0318 18:20:36.183549 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:20:36.212301 master-0 kubenswrapper[30278]: I0318 
18:20:36.211795 30278 generic.go:334] "Generic (PLEG): container finished" podID="21b3a964-ae1b-49d5-be02-c1b7397b406c" containerID="329f264c399b78a1961b1411a59d47d420071704ac4ce5adec910f69ca13d7cb" exitCode=0 Mar 18 18:20:36.212301 master-0 kubenswrapper[30278]: I0318 18:20:36.212009 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-275vd" event={"ID":"21b3a964-ae1b-49d5-be02-c1b7397b406c","Type":"ContainerDied","Data":"329f264c399b78a1961b1411a59d47d420071704ac4ce5adec910f69ca13d7cb"} Mar 18 18:20:36.230300 master-0 kubenswrapper[30278]: I0318 18:20:36.227097 30278 generic.go:334] "Generic (PLEG): container finished" podID="de752594-4e91-4400-bc57-3a77ddbc66f7" containerID="b2a5b60d0f984455e777ac7f95712106027e3e44f3b131613a99178948a45d50" exitCode=0 Mar 18 18:20:36.230300 master-0 kubenswrapper[30278]: I0318 18:20:36.227244 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-16af-account-create-update-nz97w" event={"ID":"de752594-4e91-4400-bc57-3a77ddbc66f7","Type":"ContainerDied","Data":"b2a5b60d0f984455e777ac7f95712106027e3e44f3b131613a99178948a45d50"} Mar 18 18:20:36.248302 master-0 kubenswrapper[30278]: I0318 18:20:36.242877 30278 generic.go:334] "Generic (PLEG): container finished" podID="25ff64e9-3a30-4ee8-a9d2-3b1dec433087" containerID="6b16a01b103b67c7f00ed1833daf2996a80bd678951c879f56efe92c3d364468" exitCode=0 Mar 18 18:20:36.248302 master-0 kubenswrapper[30278]: I0318 18:20:36.244591 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5998-account-create-update-w7qdg" event={"ID":"25ff64e9-3a30-4ee8-a9d2-3b1dec433087","Type":"ContainerDied","Data":"6b16a01b103b67c7f00ed1833daf2996a80bd678951c879f56efe92c3d364468"} Mar 18 18:20:38.049675 master-0 kubenswrapper[30278]: I0318 18:20:38.049545 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zf26j" Mar 18 18:20:38.234149 master-0 kubenswrapper[30278]: I0318 18:20:38.233383 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrxkn\" (UniqueName: \"kubernetes.io/projected/ebb1d48c-efd7-4146-a06f-5eb19de9f51e-kube-api-access-vrxkn\") pod \"ebb1d48c-efd7-4146-a06f-5eb19de9f51e\" (UID: \"ebb1d48c-efd7-4146-a06f-5eb19de9f51e\") " Mar 18 18:20:38.234149 master-0 kubenswrapper[30278]: I0318 18:20:38.233506 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebb1d48c-efd7-4146-a06f-5eb19de9f51e-operator-scripts\") pod \"ebb1d48c-efd7-4146-a06f-5eb19de9f51e\" (UID: \"ebb1d48c-efd7-4146-a06f-5eb19de9f51e\") " Mar 18 18:20:38.236485 master-0 kubenswrapper[30278]: I0318 18:20:38.236414 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebb1d48c-efd7-4146-a06f-5eb19de9f51e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ebb1d48c-efd7-4146-a06f-5eb19de9f51e" (UID: "ebb1d48c-efd7-4146-a06f-5eb19de9f51e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:20:38.270970 master-0 kubenswrapper[30278]: I0318 18:20:38.270882 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb1d48c-efd7-4146-a06f-5eb19de9f51e-kube-api-access-vrxkn" (OuterVolumeSpecName: "kube-api-access-vrxkn") pod "ebb1d48c-efd7-4146-a06f-5eb19de9f51e" (UID: "ebb1d48c-efd7-4146-a06f-5eb19de9f51e"). InnerVolumeSpecName "kube-api-access-vrxkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:20:38.312370 master-0 kubenswrapper[30278]: I0318 18:20:38.310345 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-275vd" event={"ID":"21b3a964-ae1b-49d5-be02-c1b7397b406c","Type":"ContainerDied","Data":"a846a836cce4c40cad2d188997cc9c1c17be4c2b5aa14a52069b67cb273bc128"} Mar 18 18:20:38.312370 master-0 kubenswrapper[30278]: I0318 18:20:38.310466 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a846a836cce4c40cad2d188997cc9c1c17be4c2b5aa14a52069b67cb273bc128" Mar 18 18:20:38.312370 master-0 kubenswrapper[30278]: I0318 18:20:38.312053 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-16af-account-create-update-nz97w" event={"ID":"de752594-4e91-4400-bc57-3a77ddbc66f7","Type":"ContainerDied","Data":"fe2c721bf3ffa93125798f5a182863e7672326472cffbfa56e0cd305a76011fa"} Mar 18 18:20:38.312370 master-0 kubenswrapper[30278]: I0318 18:20:38.312083 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe2c721bf3ffa93125798f5a182863e7672326472cffbfa56e0cd305a76011fa" Mar 18 18:20:38.312998 master-0 kubenswrapper[30278]: I0318 18:20:38.312943 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-5998-account-create-update-w7qdg" Mar 18 18:20:38.315636 master-0 kubenswrapper[30278]: I0318 18:20:38.315579 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5998-account-create-update-w7qdg" event={"ID":"25ff64e9-3a30-4ee8-a9d2-3b1dec433087","Type":"ContainerDied","Data":"16caba2cdc6d480ce5bfd4adcf212ad76c31b9f526352130ef93a7d1e5931d22"} Mar 18 18:20:38.315721 master-0 kubenswrapper[30278]: I0318 18:20:38.315645 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16caba2cdc6d480ce5bfd4adcf212ad76c31b9f526352130ef93a7d1e5931d22" Mar 18 18:20:38.317180 master-0 kubenswrapper[30278]: I0318 18:20:38.317119 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zf26j" event={"ID":"ebb1d48c-efd7-4146-a06f-5eb19de9f51e","Type":"ContainerDied","Data":"67f5008c9b8ab88654f8d9f57ae74bad2649d0963f6dd718197e53bf87e1dc19"} Mar 18 18:20:38.317180 master-0 kubenswrapper[30278]: I0318 18:20:38.317145 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zf26j" Mar 18 18:20:38.317180 master-0 kubenswrapper[30278]: I0318 18:20:38.317161 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67f5008c9b8ab88654f8d9f57ae74bad2649d0963f6dd718197e53bf87e1dc19" Mar 18 18:20:38.318682 master-0 kubenswrapper[30278]: I0318 18:20:38.318529 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7471-account-create-update-fv6xj" event={"ID":"43f4a237-d80c-40a8-ac9f-ae9422afb881","Type":"ContainerDied","Data":"96dd41cc505ff5fa38c000e339edd63a99e054b5bf3d74c9ffc6ccc96f4e077d"} Mar 18 18:20:38.318682 master-0 kubenswrapper[30278]: I0318 18:20:38.318565 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96dd41cc505ff5fa38c000e339edd63a99e054b5bf3d74c9ffc6ccc96f4e077d" Mar 18 18:20:38.319720 master-0 kubenswrapper[30278]: I0318 18:20:38.319677 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jmrkj" event={"ID":"d70da1e8-5ba9-440d-bd18-6add06bb23ef","Type":"ContainerDied","Data":"6e4ca97f60cc3d3f98763f268c18add6f232a1c779b83a06e74da73f127e110f"} Mar 18 18:20:38.319720 master-0 kubenswrapper[30278]: I0318 18:20:38.319717 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e4ca97f60cc3d3f98763f268c18add6f232a1c779b83a06e74da73f127e110f" Mar 18 18:20:38.322330 master-0 kubenswrapper[30278]: I0318 18:20:38.322264 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-275vd" Mar 18 18:20:38.337955 master-0 kubenswrapper[30278]: I0318 18:20:38.337898 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebb1d48c-efd7-4146-a06f-5eb19de9f51e-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:38.337955 master-0 kubenswrapper[30278]: I0318 18:20:38.337958 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrxkn\" (UniqueName: \"kubernetes.io/projected/ebb1d48c-efd7-4146-a06f-5eb19de9f51e-kube-api-access-vrxkn\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:38.375638 master-0 kubenswrapper[30278]: I0318 18:20:38.373823 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jmrkj" Mar 18 18:20:38.376842 master-0 kubenswrapper[30278]: I0318 18:20:38.376808 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-16af-account-create-update-nz97w" Mar 18 18:20:38.388531 master-0 kubenswrapper[30278]: I0318 18:20:38.388459 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7471-account-create-update-fv6xj" Mar 18 18:20:38.440418 master-0 kubenswrapper[30278]: I0318 18:20:38.439338 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p85jq\" (UniqueName: \"kubernetes.io/projected/21b3a964-ae1b-49d5-be02-c1b7397b406c-kube-api-access-p85jq\") pod \"21b3a964-ae1b-49d5-be02-c1b7397b406c\" (UID: \"21b3a964-ae1b-49d5-be02-c1b7397b406c\") " Mar 18 18:20:38.440418 master-0 kubenswrapper[30278]: I0318 18:20:38.439620 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21b3a964-ae1b-49d5-be02-c1b7397b406c-operator-scripts\") pod \"21b3a964-ae1b-49d5-be02-c1b7397b406c\" (UID: \"21b3a964-ae1b-49d5-be02-c1b7397b406c\") " Mar 18 18:20:38.440418 master-0 kubenswrapper[30278]: I0318 18:20:38.439829 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25ff64e9-3a30-4ee8-a9d2-3b1dec433087-operator-scripts\") pod \"25ff64e9-3a30-4ee8-a9d2-3b1dec433087\" (UID: \"25ff64e9-3a30-4ee8-a9d2-3b1dec433087\") " Mar 18 18:20:38.440418 master-0 kubenswrapper[30278]: I0318 18:20:38.439890 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq5fd\" (UniqueName: \"kubernetes.io/projected/25ff64e9-3a30-4ee8-a9d2-3b1dec433087-kube-api-access-qq5fd\") pod \"25ff64e9-3a30-4ee8-a9d2-3b1dec433087\" (UID: \"25ff64e9-3a30-4ee8-a9d2-3b1dec433087\") " Mar 18 18:20:38.441720 master-0 kubenswrapper[30278]: I0318 18:20:38.441629 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ff64e9-3a30-4ee8-a9d2-3b1dec433087-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25ff64e9-3a30-4ee8-a9d2-3b1dec433087" (UID: "25ff64e9-3a30-4ee8-a9d2-3b1dec433087"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:20:38.442405 master-0 kubenswrapper[30278]: I0318 18:20:38.442290 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b3a964-ae1b-49d5-be02-c1b7397b406c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21b3a964-ae1b-49d5-be02-c1b7397b406c" (UID: "21b3a964-ae1b-49d5-be02-c1b7397b406c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:20:38.446445 master-0 kubenswrapper[30278]: I0318 18:20:38.446320 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ff64e9-3a30-4ee8-a9d2-3b1dec433087-kube-api-access-qq5fd" (OuterVolumeSpecName: "kube-api-access-qq5fd") pod "25ff64e9-3a30-4ee8-a9d2-3b1dec433087" (UID: "25ff64e9-3a30-4ee8-a9d2-3b1dec433087"). InnerVolumeSpecName "kube-api-access-qq5fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:20:38.456755 master-0 kubenswrapper[30278]: I0318 18:20:38.456651 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21b3a964-ae1b-49d5-be02-c1b7397b406c-kube-api-access-p85jq" (OuterVolumeSpecName: "kube-api-access-p85jq") pod "21b3a964-ae1b-49d5-be02-c1b7397b406c" (UID: "21b3a964-ae1b-49d5-be02-c1b7397b406c"). InnerVolumeSpecName "kube-api-access-p85jq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:20:38.543245 master-0 kubenswrapper[30278]: I0318 18:20:38.543140 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43f4a237-d80c-40a8-ac9f-ae9422afb881-operator-scripts\") pod \"43f4a237-d80c-40a8-ac9f-ae9422afb881\" (UID: \"43f4a237-d80c-40a8-ac9f-ae9422afb881\") " Mar 18 18:20:38.543571 master-0 kubenswrapper[30278]: I0318 18:20:38.543338 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6mfd\" (UniqueName: \"kubernetes.io/projected/43f4a237-d80c-40a8-ac9f-ae9422afb881-kube-api-access-h6mfd\") pod \"43f4a237-d80c-40a8-ac9f-ae9422afb881\" (UID: \"43f4a237-d80c-40a8-ac9f-ae9422afb881\") " Mar 18 18:20:38.543571 master-0 kubenswrapper[30278]: I0318 18:20:38.543484 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxcvt\" (UniqueName: \"kubernetes.io/projected/d70da1e8-5ba9-440d-bd18-6add06bb23ef-kube-api-access-dxcvt\") pod \"d70da1e8-5ba9-440d-bd18-6add06bb23ef\" (UID: \"d70da1e8-5ba9-440d-bd18-6add06bb23ef\") " Mar 18 18:20:38.543571 master-0 kubenswrapper[30278]: I0318 18:20:38.543548 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm5l9\" (UniqueName: \"kubernetes.io/projected/de752594-4e91-4400-bc57-3a77ddbc66f7-kube-api-access-bm5l9\") pod \"de752594-4e91-4400-bc57-3a77ddbc66f7\" (UID: \"de752594-4e91-4400-bc57-3a77ddbc66f7\") " Mar 18 18:20:38.543742 master-0 kubenswrapper[30278]: I0318 18:20:38.543610 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d70da1e8-5ba9-440d-bd18-6add06bb23ef-operator-scripts\") pod \"d70da1e8-5ba9-440d-bd18-6add06bb23ef\" (UID: \"d70da1e8-5ba9-440d-bd18-6add06bb23ef\") " Mar 18 18:20:38.543742 master-0 kubenswrapper[30278]: 
I0318 18:20:38.543637 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de752594-4e91-4400-bc57-3a77ddbc66f7-operator-scripts\") pod \"de752594-4e91-4400-bc57-3a77ddbc66f7\" (UID: \"de752594-4e91-4400-bc57-3a77ddbc66f7\") " Mar 18 18:20:38.544684 master-0 kubenswrapper[30278]: I0318 18:20:38.544637 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25ff64e9-3a30-4ee8-a9d2-3b1dec433087-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:38.544684 master-0 kubenswrapper[30278]: I0318 18:20:38.544672 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq5fd\" (UniqueName: \"kubernetes.io/projected/25ff64e9-3a30-4ee8-a9d2-3b1dec433087-kube-api-access-qq5fd\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:38.544819 master-0 kubenswrapper[30278]: I0318 18:20:38.544694 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p85jq\" (UniqueName: \"kubernetes.io/projected/21b3a964-ae1b-49d5-be02-c1b7397b406c-kube-api-access-p85jq\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:38.544819 master-0 kubenswrapper[30278]: I0318 18:20:38.544707 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21b3a964-ae1b-49d5-be02-c1b7397b406c-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:38.545324 master-0 kubenswrapper[30278]: I0318 18:20:38.545222 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d70da1e8-5ba9-440d-bd18-6add06bb23ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d70da1e8-5ba9-440d-bd18-6add06bb23ef" (UID: "d70da1e8-5ba9-440d-bd18-6add06bb23ef"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:20:38.545324 master-0 kubenswrapper[30278]: I0318 18:20:38.545302 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de752594-4e91-4400-bc57-3a77ddbc66f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de752594-4e91-4400-bc57-3a77ddbc66f7" (UID: "de752594-4e91-4400-bc57-3a77ddbc66f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:20:38.546080 master-0 kubenswrapper[30278]: I0318 18:20:38.546028 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43f4a237-d80c-40a8-ac9f-ae9422afb881-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "43f4a237-d80c-40a8-ac9f-ae9422afb881" (UID: "43f4a237-d80c-40a8-ac9f-ae9422afb881"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:20:38.547400 master-0 kubenswrapper[30278]: I0318 18:20:38.547370 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d70da1e8-5ba9-440d-bd18-6add06bb23ef-kube-api-access-dxcvt" (OuterVolumeSpecName: "kube-api-access-dxcvt") pod "d70da1e8-5ba9-440d-bd18-6add06bb23ef" (UID: "d70da1e8-5ba9-440d-bd18-6add06bb23ef"). InnerVolumeSpecName "kube-api-access-dxcvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:20:38.549676 master-0 kubenswrapper[30278]: I0318 18:20:38.549618 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f4a237-d80c-40a8-ac9f-ae9422afb881-kube-api-access-h6mfd" (OuterVolumeSpecName: "kube-api-access-h6mfd") pod "43f4a237-d80c-40a8-ac9f-ae9422afb881" (UID: "43f4a237-d80c-40a8-ac9f-ae9422afb881"). InnerVolumeSpecName "kube-api-access-h6mfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:20:38.550471 master-0 kubenswrapper[30278]: I0318 18:20:38.550389 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de752594-4e91-4400-bc57-3a77ddbc66f7-kube-api-access-bm5l9" (OuterVolumeSpecName: "kube-api-access-bm5l9") pod "de752594-4e91-4400-bc57-3a77ddbc66f7" (UID: "de752594-4e91-4400-bc57-3a77ddbc66f7"). InnerVolumeSpecName "kube-api-access-bm5l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:20:38.647877 master-0 kubenswrapper[30278]: I0318 18:20:38.647774 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43f4a237-d80c-40a8-ac9f-ae9422afb881-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:38.647877 master-0 kubenswrapper[30278]: I0318 18:20:38.647853 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6mfd\" (UniqueName: \"kubernetes.io/projected/43f4a237-d80c-40a8-ac9f-ae9422afb881-kube-api-access-h6mfd\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:38.647877 master-0 kubenswrapper[30278]: I0318 18:20:38.647869 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxcvt\" (UniqueName: \"kubernetes.io/projected/d70da1e8-5ba9-440d-bd18-6add06bb23ef-kube-api-access-dxcvt\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:38.647877 master-0 kubenswrapper[30278]: I0318 18:20:38.647882 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm5l9\" (UniqueName: \"kubernetes.io/projected/de752594-4e91-4400-bc57-3a77ddbc66f7-kube-api-access-bm5l9\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:38.647877 master-0 kubenswrapper[30278]: I0318 18:20:38.647892 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d70da1e8-5ba9-440d-bd18-6add06bb23ef-operator-scripts\") on node 
\"master-0\" DevicePath \"\"" Mar 18 18:20:38.647877 master-0 kubenswrapper[30278]: I0318 18:20:38.647902 30278 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de752594-4e91-4400-bc57-3a77ddbc66f7-operator-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:39.351035 master-0 kubenswrapper[30278]: I0318 18:20:39.350857 30278 generic.go:334] "Generic (PLEG): container finished" podID="e9af6002-27e3-414d-b61a-dc0f7d99768b" containerID="f210c6f0774243134ee34cdffd0fc04b2f440584100361a3c2851d0ec70898bd" exitCode=0 Mar 18 18:20:39.351035 master-0 kubenswrapper[30278]: I0318 18:20:39.350939 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e9af6002-27e3-414d-b61a-dc0f7d99768b","Type":"ContainerDied","Data":"f210c6f0774243134ee34cdffd0fc04b2f440584100361a3c2851d0ec70898bd"} Mar 18 18:20:39.351849 master-0 kubenswrapper[30278]: I0318 18:20:39.351083 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-16af-account-create-update-nz97w" Mar 18 18:20:39.351849 master-0 kubenswrapper[30278]: I0318 18:20:39.351123 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jmrkj" Mar 18 18:20:39.351849 master-0 kubenswrapper[30278]: I0318 18:20:39.351088 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7471-account-create-update-fv6xj" Mar 18 18:20:39.351849 master-0 kubenswrapper[30278]: I0318 18:20:39.351184 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5998-account-create-update-w7qdg" Mar 18 18:20:39.352255 master-0 kubenswrapper[30278]: I0318 18:20:39.352230 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-275vd" Mar 18 18:20:40.034532 master-0 kubenswrapper[30278]: I0318 18:20:40.034467 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-c769655c7-ssdxq" Mar 18 18:20:41.386113 master-0 kubenswrapper[30278]: I0318 18:20:41.386016 30278 generic.go:334] "Generic (PLEG): container finished" podID="681bd0b0-8192-4ac5-9e57-2a5e4f575b1f" containerID="4d63ed137f09003ace10d22211b45abb64e889d135e149436ec3c53a232574c9" exitCode=0 Mar 18 18:20:41.386113 master-0 kubenswrapper[30278]: I0318 18:20:41.386094 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-98qm9" event={"ID":"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f","Type":"ContainerDied","Data":"4d63ed137f09003ace10d22211b45abb64e889d135e149436ec3c53a232574c9"} Mar 18 18:20:42.091440 master-0 kubenswrapper[30278]: I0318 18:20:42.091358 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qn2jb"] Mar 18 18:20:42.092776 master-0 kubenswrapper[30278]: E0318 18:20:42.092752 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" containerName="ironic-api" Mar 18 18:20:42.092864 master-0 kubenswrapper[30278]: I0318 18:20:42.092852 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" containerName="ironic-api" Mar 18 18:20:42.092951 master-0 kubenswrapper[30278]: E0318 18:20:42.092939 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70da1e8-5ba9-440d-bd18-6add06bb23ef" containerName="mariadb-database-create" Mar 18 18:20:42.093014 master-0 kubenswrapper[30278]: I0318 18:20:42.093004 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70da1e8-5ba9-440d-bd18-6add06bb23ef" containerName="mariadb-database-create" Mar 18 18:20:42.093098 master-0 kubenswrapper[30278]: E0318 18:20:42.093087 30278 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ff64e9-3a30-4ee8-a9d2-3b1dec433087" containerName="mariadb-account-create-update" Mar 18 18:20:42.093165 master-0 kubenswrapper[30278]: I0318 18:20:42.093156 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ff64e9-3a30-4ee8-a9d2-3b1dec433087" containerName="mariadb-account-create-update" Mar 18 18:20:42.093235 master-0 kubenswrapper[30278]: E0318 18:20:42.093224 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b3a964-ae1b-49d5-be02-c1b7397b406c" containerName="mariadb-database-create" Mar 18 18:20:42.101752 master-0 kubenswrapper[30278]: I0318 18:20:42.101689 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b3a964-ae1b-49d5-be02-c1b7397b406c" containerName="mariadb-database-create" Mar 18 18:20:42.126951 master-0 kubenswrapper[30278]: E0318 18:20:42.102178 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca02800f-5799-45c1-8737-409cb6665117" containerName="placement-log" Mar 18 18:20:42.127261 master-0 kubenswrapper[30278]: I0318 18:20:42.127246 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca02800f-5799-45c1-8737-409cb6665117" containerName="placement-log" Mar 18 18:20:42.127426 master-0 kubenswrapper[30278]: E0318 18:20:42.127413 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca02800f-5799-45c1-8737-409cb6665117" containerName="placement-api" Mar 18 18:20:42.127499 master-0 kubenswrapper[30278]: I0318 18:20:42.127488 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca02800f-5799-45c1-8737-409cb6665117" containerName="placement-api" Mar 18 18:20:42.127622 master-0 kubenswrapper[30278]: E0318 18:20:42.127610 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de752594-4e91-4400-bc57-3a77ddbc66f7" containerName="mariadb-account-create-update" Mar 18 18:20:42.127681 master-0 kubenswrapper[30278]: I0318 18:20:42.127670 30278 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="de752594-4e91-4400-bc57-3a77ddbc66f7" containerName="mariadb-account-create-update" Mar 18 18:20:42.127766 master-0 kubenswrapper[30278]: E0318 18:20:42.127755 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f4a237-d80c-40a8-ac9f-ae9422afb881" containerName="mariadb-account-create-update" Mar 18 18:20:42.127830 master-0 kubenswrapper[30278]: I0318 18:20:42.127819 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f4a237-d80c-40a8-ac9f-ae9422afb881" containerName="mariadb-account-create-update" Mar 18 18:20:42.127905 master-0 kubenswrapper[30278]: E0318 18:20:42.127894 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" containerName="ironic-api-log" Mar 18 18:20:42.127967 master-0 kubenswrapper[30278]: I0318 18:20:42.127957 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" containerName="ironic-api-log" Mar 18 18:20:42.128057 master-0 kubenswrapper[30278]: E0318 18:20:42.128041 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb1d48c-efd7-4146-a06f-5eb19de9f51e" containerName="mariadb-database-create" Mar 18 18:20:42.128119 master-0 kubenswrapper[30278]: I0318 18:20:42.128109 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb1d48c-efd7-4146-a06f-5eb19de9f51e" containerName="mariadb-database-create" Mar 18 18:20:42.128186 master-0 kubenswrapper[30278]: E0318 18:20:42.128176 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" containerName="init" Mar 18 18:20:42.128243 master-0 kubenswrapper[30278]: I0318 18:20:42.128233 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" containerName="init" Mar 18 18:20:42.128891 master-0 kubenswrapper[30278]: I0318 18:20:42.128875 30278 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="43f4a237-d80c-40a8-ac9f-ae9422afb881" containerName="mariadb-account-create-update" Mar 18 18:20:42.128983 master-0 kubenswrapper[30278]: I0318 18:20:42.128972 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" containerName="ironic-api-log" Mar 18 18:20:42.129063 master-0 kubenswrapper[30278]: I0318 18:20:42.129052 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="de752594-4e91-4400-bc57-3a77ddbc66f7" containerName="mariadb-account-create-update" Mar 18 18:20:42.129162 master-0 kubenswrapper[30278]: I0318 18:20:42.129148 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="d70da1e8-5ba9-440d-bd18-6add06bb23ef" containerName="mariadb-database-create" Mar 18 18:20:42.129258 master-0 kubenswrapper[30278]: I0318 18:20:42.129246 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca02800f-5799-45c1-8737-409cb6665117" containerName="placement-log" Mar 18 18:20:42.129703 master-0 kubenswrapper[30278]: I0318 18:20:42.129685 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb1d48c-efd7-4146-a06f-5eb19de9f51e" containerName="mariadb-database-create" Mar 18 18:20:42.129782 master-0 kubenswrapper[30278]: I0318 18:20:42.129770 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b3a964-ae1b-49d5-be02-c1b7397b406c" containerName="mariadb-database-create" Mar 18 18:20:42.129851 master-0 kubenswrapper[30278]: I0318 18:20:42.129840 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" containerName="ironic-api" Mar 18 18:20:42.129933 master-0 kubenswrapper[30278]: I0318 18:20:42.129922 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="25ff64e9-3a30-4ee8-a9d2-3b1dec433087" containerName="mariadb-account-create-update" Mar 18 18:20:42.130033 master-0 kubenswrapper[30278]: I0318 18:20:42.130021 30278 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" containerName="ironic-api" Mar 18 18:20:42.130112 master-0 kubenswrapper[30278]: I0318 18:20:42.130101 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca02800f-5799-45c1-8737-409cb6665117" containerName="placement-api" Mar 18 18:20:42.131174 master-0 kubenswrapper[30278]: I0318 18:20:42.131155 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qn2jb" Mar 18 18:20:42.132196 master-0 kubenswrapper[30278]: I0318 18:20:42.132136 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qn2jb"] Mar 18 18:20:42.135184 master-0 kubenswrapper[30278]: I0318 18:20:42.135153 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 18 18:20:42.135927 master-0 kubenswrapper[30278]: I0318 18:20:42.135323 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 18 18:20:42.169599 master-0 kubenswrapper[30278]: I0318 18:20:42.169468 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gfl9\" (UniqueName: \"kubernetes.io/projected/75582986-df2a-4948-994c-643227b19932-kube-api-access-5gfl9\") pod \"nova-cell0-conductor-db-sync-qn2jb\" (UID: \"75582986-df2a-4948-994c-643227b19932\") " pod="openstack/nova-cell0-conductor-db-sync-qn2jb" Mar 18 18:20:42.169989 master-0 kubenswrapper[30278]: I0318 18:20:42.169943 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qn2jb\" (UID: \"75582986-df2a-4948-994c-643227b19932\") " pod="openstack/nova-cell0-conductor-db-sync-qn2jb" Mar 18 18:20:42.170144 master-0 
kubenswrapper[30278]: I0318 18:20:42.170124 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-config-data\") pod \"nova-cell0-conductor-db-sync-qn2jb\" (UID: \"75582986-df2a-4948-994c-643227b19932\") " pod="openstack/nova-cell0-conductor-db-sync-qn2jb" Mar 18 18:20:42.170375 master-0 kubenswrapper[30278]: I0318 18:20:42.170353 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-scripts\") pod \"nova-cell0-conductor-db-sync-qn2jb\" (UID: \"75582986-df2a-4948-994c-643227b19932\") " pod="openstack/nova-cell0-conductor-db-sync-qn2jb" Mar 18 18:20:42.274197 master-0 kubenswrapper[30278]: I0318 18:20:42.272496 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gfl9\" (UniqueName: \"kubernetes.io/projected/75582986-df2a-4948-994c-643227b19932-kube-api-access-5gfl9\") pod \"nova-cell0-conductor-db-sync-qn2jb\" (UID: \"75582986-df2a-4948-994c-643227b19932\") " pod="openstack/nova-cell0-conductor-db-sync-qn2jb" Mar 18 18:20:42.274197 master-0 kubenswrapper[30278]: I0318 18:20:42.272591 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qn2jb\" (UID: \"75582986-df2a-4948-994c-643227b19932\") " pod="openstack/nova-cell0-conductor-db-sync-qn2jb" Mar 18 18:20:42.274197 master-0 kubenswrapper[30278]: I0318 18:20:42.272636 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-config-data\") pod \"nova-cell0-conductor-db-sync-qn2jb\" (UID: 
\"75582986-df2a-4948-994c-643227b19932\") " pod="openstack/nova-cell0-conductor-db-sync-qn2jb" Mar 18 18:20:42.274197 master-0 kubenswrapper[30278]: I0318 18:20:42.272710 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-scripts\") pod \"nova-cell0-conductor-db-sync-qn2jb\" (UID: \"75582986-df2a-4948-994c-643227b19932\") " pod="openstack/nova-cell0-conductor-db-sync-qn2jb" Mar 18 18:20:42.290292 master-0 kubenswrapper[30278]: I0318 18:20:42.288808 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qn2jb\" (UID: \"75582986-df2a-4948-994c-643227b19932\") " pod="openstack/nova-cell0-conductor-db-sync-qn2jb" Mar 18 18:20:42.292699 master-0 kubenswrapper[30278]: I0318 18:20:42.292593 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gfl9\" (UniqueName: \"kubernetes.io/projected/75582986-df2a-4948-994c-643227b19932-kube-api-access-5gfl9\") pod \"nova-cell0-conductor-db-sync-qn2jb\" (UID: \"75582986-df2a-4948-994c-643227b19932\") " pod="openstack/nova-cell0-conductor-db-sync-qn2jb" Mar 18 18:20:42.293417 master-0 kubenswrapper[30278]: I0318 18:20:42.293339 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-scripts\") pod \"nova-cell0-conductor-db-sync-qn2jb\" (UID: \"75582986-df2a-4948-994c-643227b19932\") " pod="openstack/nova-cell0-conductor-db-sync-qn2jb" Mar 18 18:20:42.332647 master-0 kubenswrapper[30278]: I0318 18:20:42.332593 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-config-data\") pod 
\"nova-cell0-conductor-db-sync-qn2jb\" (UID: \"75582986-df2a-4948-994c-643227b19932\") " pod="openstack/nova-cell0-conductor-db-sync-qn2jb" Mar 18 18:20:42.460559 master-0 kubenswrapper[30278]: I0318 18:20:42.460433 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qn2jb" Mar 18 18:20:42.857377 master-0 kubenswrapper[30278]: I0318 18:20:42.857201 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-824c8-default-external-api-0"] Mar 18 18:20:42.858066 master-0 kubenswrapper[30278]: I0318 18:20:42.857990 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-824c8-default-external-api-0" podUID="977956f8-854b-4c87-9485-c67f2be25e4c" containerName="glance-log" containerID="cri-o://9bde390825853fa8bc84374619f4e1229ec06f662a37380bbbe5265e63a1c43f" gracePeriod=30 Mar 18 18:20:42.859130 master-0 kubenswrapper[30278]: I0318 18:20:42.858906 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-824c8-default-external-api-0" podUID="977956f8-854b-4c87-9485-c67f2be25e4c" containerName="glance-httpd" containerID="cri-o://601bbe39186558f953385a8cf55b5d14e96586d54733d7ee0664550e6f44b211" gracePeriod=30 Mar 18 18:20:43.858816 master-0 kubenswrapper[30278]: I0318 18:20:43.858735 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-824c8-default-internal-api-0"] Mar 18 18:20:43.860740 master-0 kubenswrapper[30278]: I0318 18:20:43.860696 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-824c8-default-internal-api-0" podUID="8b68cf46-84fc-418f-9c01-915501356564" containerName="glance-log" containerID="cri-o://12f888b489aa0e87b8b8d9e347d25c40f5ff39fc8456b52c776698003f1f51eb" gracePeriod=30 Mar 18 18:20:43.860905 master-0 kubenswrapper[30278]: I0318 18:20:43.860799 30278 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-824c8-default-internal-api-0" podUID="8b68cf46-84fc-418f-9c01-915501356564" containerName="glance-httpd" containerID="cri-o://c846084ab1d1864fded3953bbd85313f0b201b8dd632f29e8018ebc7fb0d0f4a" gracePeriod=30 Mar 18 18:20:45.395355 master-0 kubenswrapper[30278]: I0318 18:20:45.395265 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:45.523195 master-0 kubenswrapper[30278]: I0318 18:20:45.522960 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-etc-podinfo\") pod \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " Mar 18 18:20:45.523195 master-0 kubenswrapper[30278]: I0318 18:20:45.523121 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-combined-ca-bundle\") pod \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " Mar 18 18:20:45.523489 master-0 kubenswrapper[30278]: I0318 18:20:45.523290 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdpjc\" (UniqueName: \"kubernetes.io/projected/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-kube-api-access-cdpjc\") pod \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " Mar 18 18:20:45.523489 master-0 kubenswrapper[30278]: I0318 18:20:45.523371 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " Mar 18 
18:20:45.523640 master-0 kubenswrapper[30278]: I0318 18:20:45.523615 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-scripts\") pod \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " Mar 18 18:20:45.523699 master-0 kubenswrapper[30278]: I0318 18:20:45.523678 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-var-lib-ironic\") pod \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " Mar 18 18:20:45.523974 master-0 kubenswrapper[30278]: I0318 18:20:45.523831 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-config\") pod \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\" (UID: \"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f\") " Mar 18 18:20:45.525750 master-0 kubenswrapper[30278]: I0318 18:20:45.525308 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "681bd0b0-8192-4ac5-9e57-2a5e4f575b1f" (UID: "681bd0b0-8192-4ac5-9e57-2a5e4f575b1f"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:20:45.526627 master-0 kubenswrapper[30278]: I0318 18:20:45.525881 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "681bd0b0-8192-4ac5-9e57-2a5e4f575b1f" (UID: "681bd0b0-8192-4ac5-9e57-2a5e4f575b1f"). InnerVolumeSpecName "var-lib-ironic". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:20:45.533008 master-0 kubenswrapper[30278]: I0318 18:20:45.530575 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-scripts" (OuterVolumeSpecName: "scripts") pod "681bd0b0-8192-4ac5-9e57-2a5e4f575b1f" (UID: "681bd0b0-8192-4ac5-9e57-2a5e4f575b1f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:45.533854 master-0 kubenswrapper[30278]: I0318 18:20:45.533782 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-kube-api-access-cdpjc" (OuterVolumeSpecName: "kube-api-access-cdpjc") pod "681bd0b0-8192-4ac5-9e57-2a5e4f575b1f" (UID: "681bd0b0-8192-4ac5-9e57-2a5e4f575b1f"). InnerVolumeSpecName "kube-api-access-cdpjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:20:45.540531 master-0 kubenswrapper[30278]: I0318 18:20:45.540445 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "681bd0b0-8192-4ac5-9e57-2a5e4f575b1f" (UID: "681bd0b0-8192-4ac5-9e57-2a5e4f575b1f"). InnerVolumeSpecName "etc-podinfo". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 18 18:20:45.546307 master-0 kubenswrapper[30278]: I0318 18:20:45.546128 30278 generic.go:334] "Generic (PLEG): container finished" podID="977956f8-854b-4c87-9485-c67f2be25e4c" containerID="9bde390825853fa8bc84374619f4e1229ec06f662a37380bbbe5265e63a1c43f" exitCode=143 Mar 18 18:20:45.546650 master-0 kubenswrapper[30278]: I0318 18:20:45.546566 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-external-api-0" event={"ID":"977956f8-854b-4c87-9485-c67f2be25e4c","Type":"ContainerDied","Data":"9bde390825853fa8bc84374619f4e1229ec06f662a37380bbbe5265e63a1c43f"} Mar 18 18:20:45.552263 master-0 kubenswrapper[30278]: I0318 18:20:45.552105 30278 generic.go:334] "Generic (PLEG): container finished" podID="8b68cf46-84fc-418f-9c01-915501356564" containerID="12f888b489aa0e87b8b8d9e347d25c40f5ff39fc8456b52c776698003f1f51eb" exitCode=143 Mar 18 18:20:45.552263 master-0 kubenswrapper[30278]: I0318 18:20:45.552194 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-internal-api-0" event={"ID":"8b68cf46-84fc-418f-9c01-915501356564","Type":"ContainerDied","Data":"12f888b489aa0e87b8b8d9e347d25c40f5ff39fc8456b52c776698003f1f51eb"} Mar 18 18:20:45.555301 master-0 kubenswrapper[30278]: I0318 18:20:45.554476 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-98qm9" event={"ID":"681bd0b0-8192-4ac5-9e57-2a5e4f575b1f","Type":"ContainerDied","Data":"3be6f48c7968be8d8114f20377477f1687e8d6a2632942eb0b216aa4f576fd03"} Mar 18 18:20:45.555301 master-0 kubenswrapper[30278]: I0318 18:20:45.554510 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3be6f48c7968be8d8114f20377477f1687e8d6a2632942eb0b216aa4f576fd03" Mar 18 18:20:45.555301 master-0 kubenswrapper[30278]: I0318 18:20:45.554570 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-98qm9" Mar 18 18:20:45.555301 master-0 kubenswrapper[30278]: I0318 18:20:45.554587 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "681bd0b0-8192-4ac5-9e57-2a5e4f575b1f" (UID: "681bd0b0-8192-4ac5-9e57-2a5e4f575b1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:45.583890 master-0 kubenswrapper[30278]: I0318 18:20:45.583185 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-config" (OuterVolumeSpecName: "config") pod "681bd0b0-8192-4ac5-9e57-2a5e4f575b1f" (UID: "681bd0b0-8192-4ac5-9e57-2a5e4f575b1f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:20:45.627933 master-0 kubenswrapper[30278]: I0318 18:20:45.627879 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdpjc\" (UniqueName: \"kubernetes.io/projected/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-kube-api-access-cdpjc\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:45.627933 master-0 kubenswrapper[30278]: I0318 18:20:45.627925 30278 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:45.627933 master-0 kubenswrapper[30278]: I0318 18:20:45.627940 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:45.627933 master-0 kubenswrapper[30278]: I0318 18:20:45.627952 30278 reconciler_common.go:293] "Volume detached for 
volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:45.627933 master-0 kubenswrapper[30278]: I0318 18:20:45.627964 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:45.627933 master-0 kubenswrapper[30278]: I0318 18:20:45.627975 30278 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:45.628535 master-0 kubenswrapper[30278]: I0318 18:20:45.627985 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/681bd0b0-8192-4ac5-9e57-2a5e4f575b1f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:20:45.827150 master-0 kubenswrapper[30278]: I0318 18:20:45.827011 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qn2jb"] Mar 18 18:20:46.582493 master-0 kubenswrapper[30278]: I0318 18:20:46.582399 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qn2jb" event={"ID":"75582986-df2a-4948-994c-643227b19932","Type":"ContainerStarted","Data":"59c666cd348d0a9828fb0fda63c6eebaedd448ff1eeb7bd944b2ec0305eecb5c"} Mar 18 18:20:46.587081 master-0 kubenswrapper[30278]: I0318 18:20:46.587046 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:20:46.590186 master-0 kubenswrapper[30278]: I0318 18:20:46.590152 30278 generic.go:334] "Generic (PLEG): container finished" podID="977956f8-854b-4c87-9485-c67f2be25e4c" containerID="601bbe39186558f953385a8cf55b5d14e96586d54733d7ee0664550e6f44b211" exitCode=0 Mar 18 18:20:46.590309 master-0 kubenswrapper[30278]: I0318 18:20:46.590209 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-external-api-0" event={"ID":"977956f8-854b-4c87-9485-c67f2be25e4c","Type":"ContainerDied","Data":"601bbe39186558f953385a8cf55b5d14e96586d54733d7ee0664550e6f44b211"} Mar 18 18:20:46.590309 master-0 kubenswrapper[30278]: I0318 18:20:46.590235 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-external-api-0" event={"ID":"977956f8-854b-4c87-9485-c67f2be25e4c","Type":"ContainerDied","Data":"a4239717f1db3dd24ab3ccb320d6862448099f1946549c7c4f7e654578269862"} Mar 18 18:20:46.590309 master-0 kubenswrapper[30278]: I0318 18:20:46.590254 30278 scope.go:117] "RemoveContainer" containerID="601bbe39186558f953385a8cf55b5d14e96586d54733d7ee0664550e6f44b211" Mar 18 18:20:46.594973 master-0 kubenswrapper[30278]: I0318 18:20:46.594947 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e9af6002-27e3-414d-b61a-dc0f7d99768b","Type":"ContainerStarted","Data":"596f228623c740089d6bfafb648af0d527734cc329e5be6165f5bf9c165646d3"} Mar 18 18:20:46.665113 master-0 kubenswrapper[30278]: I0318 18:20:46.662749 30278 scope.go:117] "RemoveContainer" containerID="9bde390825853fa8bc84374619f4e1229ec06f662a37380bbbe5265e63a1c43f" Mar 18 18:20:46.687625 master-0 kubenswrapper[30278]: I0318 18:20:46.687497 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-config-data\") pod 
\"977956f8-854b-4c87-9485-c67f2be25e4c\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") "
Mar 18 18:20:46.687731 master-0 kubenswrapper[30278]: I0318 18:20:46.687629 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/977956f8-854b-4c87-9485-c67f2be25e4c-httpd-run\") pod \"977956f8-854b-4c87-9485-c67f2be25e4c\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") "
Mar 18 18:20:46.687731 master-0 kubenswrapper[30278]: I0318 18:20:46.687702 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-combined-ca-bundle\") pod \"977956f8-854b-4c87-9485-c67f2be25e4c\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") "
Mar 18 18:20:46.687847 master-0 kubenswrapper[30278]: I0318 18:20:46.687784 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-scripts\") pod \"977956f8-854b-4c87-9485-c67f2be25e4c\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") "
Mar 18 18:20:46.687847 master-0 kubenswrapper[30278]: I0318 18:20:46.687840 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977956f8-854b-4c87-9485-c67f2be25e4c-logs\") pod \"977956f8-854b-4c87-9485-c67f2be25e4c\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") "
Mar 18 18:20:46.687919 master-0 kubenswrapper[30278]: I0318 18:20:46.687891 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-public-tls-certs\") pod \"977956f8-854b-4c87-9485-c67f2be25e4c\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") "
Mar 18 18:20:46.690832 master-0 kubenswrapper[30278]: I0318 18:20:46.687972 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx65f\" (UniqueName: \"kubernetes.io/projected/977956f8-854b-4c87-9485-c67f2be25e4c-kube-api-access-kx65f\") pod \"977956f8-854b-4c87-9485-c67f2be25e4c\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") "
Mar 18 18:20:46.690832 master-0 kubenswrapper[30278]: I0318 18:20:46.688302 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/977956f8-854b-4c87-9485-c67f2be25e4c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "977956f8-854b-4c87-9485-c67f2be25e4c" (UID: "977956f8-854b-4c87-9485-c67f2be25e4c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 18:20:46.690832 master-0 kubenswrapper[30278]: I0318 18:20:46.688873 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") pod \"977956f8-854b-4c87-9485-c67f2be25e4c\" (UID: \"977956f8-854b-4c87-9485-c67f2be25e4c\") "
Mar 18 18:20:46.690832 master-0 kubenswrapper[30278]: I0318 18:20:46.689912 30278 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/977956f8-854b-4c87-9485-c67f2be25e4c-httpd-run\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:46.691132 master-0 kubenswrapper[30278]: I0318 18:20:46.691080 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/977956f8-854b-4c87-9485-c67f2be25e4c-logs" (OuterVolumeSpecName: "logs") pod "977956f8-854b-4c87-9485-c67f2be25e4c" (UID: "977956f8-854b-4c87-9485-c67f2be25e4c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 18:20:46.712488 master-0 kubenswrapper[30278]: I0318 18:20:46.712404 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-scripts" (OuterVolumeSpecName: "scripts") pod "977956f8-854b-4c87-9485-c67f2be25e4c" (UID: "977956f8-854b-4c87-9485-c67f2be25e4c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:20:46.719424 master-0 kubenswrapper[30278]: I0318 18:20:46.719138 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/977956f8-854b-4c87-9485-c67f2be25e4c-kube-api-access-kx65f" (OuterVolumeSpecName: "kube-api-access-kx65f") pod "977956f8-854b-4c87-9485-c67f2be25e4c" (UID: "977956f8-854b-4c87-9485-c67f2be25e4c"). InnerVolumeSpecName "kube-api-access-kx65f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:20:46.741246 master-0 kubenswrapper[30278]: I0318 18:20:46.741139 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "977956f8-854b-4c87-9485-c67f2be25e4c" (UID: "977956f8-854b-4c87-9485-c67f2be25e4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:20:46.750685 master-0 kubenswrapper[30278]: I0318 18:20:46.750618 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560" (OuterVolumeSpecName: "glance") pod "977956f8-854b-4c87-9485-c67f2be25e4c" (UID: "977956f8-854b-4c87-9485-c67f2be25e4c"). InnerVolumeSpecName "pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 18 18:20:46.794386 master-0 kubenswrapper[30278]: I0318 18:20:46.793870 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:46.794386 master-0 kubenswrapper[30278]: I0318 18:20:46.793924 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:46.794386 master-0 kubenswrapper[30278]: I0318 18:20:46.793934 30278 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/977956f8-854b-4c87-9485-c67f2be25e4c-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:46.794386 master-0 kubenswrapper[30278]: I0318 18:20:46.793945 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx65f\" (UniqueName: \"kubernetes.io/projected/977956f8-854b-4c87-9485-c67f2be25e4c-kube-api-access-kx65f\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:46.796836 master-0 kubenswrapper[30278]: I0318 18:20:46.796223 30278 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") on node \"master-0\" "
Mar 18 18:20:46.826776 master-0 kubenswrapper[30278]: I0318 18:20:46.826700 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "977956f8-854b-4c87-9485-c67f2be25e4c" (UID: "977956f8-854b-4c87-9485-c67f2be25e4c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:20:46.836992 master-0 kubenswrapper[30278]: I0318 18:20:46.836932 30278 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 18 18:20:46.837227 master-0 kubenswrapper[30278]: I0318 18:20:46.837199 30278 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123" (UniqueName: "kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560") on node "master-0"
Mar 18 18:20:46.843524 master-0 kubenswrapper[30278]: I0318 18:20:46.843406 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-config-data" (OuterVolumeSpecName: "config-data") pod "977956f8-854b-4c87-9485-c67f2be25e4c" (UID: "977956f8-854b-4c87-9485-c67f2be25e4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:20:46.863112 master-0 kubenswrapper[30278]: I0318 18:20:46.863052 30278 scope.go:117] "RemoveContainer" containerID="601bbe39186558f953385a8cf55b5d14e96586d54733d7ee0664550e6f44b211"
Mar 18 18:20:46.865260 master-0 kubenswrapper[30278]: E0318 18:20:46.865052 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"601bbe39186558f953385a8cf55b5d14e96586d54733d7ee0664550e6f44b211\": container with ID starting with 601bbe39186558f953385a8cf55b5d14e96586d54733d7ee0664550e6f44b211 not found: ID does not exist" containerID="601bbe39186558f953385a8cf55b5d14e96586d54733d7ee0664550e6f44b211"
Mar 18 18:20:46.865260 master-0 kubenswrapper[30278]: I0318 18:20:46.865111 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"601bbe39186558f953385a8cf55b5d14e96586d54733d7ee0664550e6f44b211"} err="failed to get container status \"601bbe39186558f953385a8cf55b5d14e96586d54733d7ee0664550e6f44b211\": rpc error: code = NotFound desc = could not find container \"601bbe39186558f953385a8cf55b5d14e96586d54733d7ee0664550e6f44b211\": container with ID starting with 601bbe39186558f953385a8cf55b5d14e96586d54733d7ee0664550e6f44b211 not found: ID does not exist"
Mar 18 18:20:46.865260 master-0 kubenswrapper[30278]: I0318 18:20:46.865152 30278 scope.go:117] "RemoveContainer" containerID="9bde390825853fa8bc84374619f4e1229ec06f662a37380bbbe5265e63a1c43f"
Mar 18 18:20:46.865741 master-0 kubenswrapper[30278]: E0318 18:20:46.865708 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bde390825853fa8bc84374619f4e1229ec06f662a37380bbbe5265e63a1c43f\": container with ID starting with 9bde390825853fa8bc84374619f4e1229ec06f662a37380bbbe5265e63a1c43f not found: ID does not exist" containerID="9bde390825853fa8bc84374619f4e1229ec06f662a37380bbbe5265e63a1c43f"
Mar 18 18:20:46.865806 master-0 kubenswrapper[30278]: I0318 18:20:46.865751 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bde390825853fa8bc84374619f4e1229ec06f662a37380bbbe5265e63a1c43f"} err="failed to get container status \"9bde390825853fa8bc84374619f4e1229ec06f662a37380bbbe5265e63a1c43f\": rpc error: code = NotFound desc = could not find container \"9bde390825853fa8bc84374619f4e1229ec06f662a37380bbbe5265e63a1c43f\": container with ID starting with 9bde390825853fa8bc84374619f4e1229ec06f662a37380bbbe5265e63a1c43f not found: ID does not exist"
Mar 18 18:20:46.900685 master-0 kubenswrapper[30278]: I0318 18:20:46.900636 30278 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:46.901008 master-0 kubenswrapper[30278]: I0318 18:20:46.900995 30278 reconciler_common.go:293] "Volume detached for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:46.901079 master-0 kubenswrapper[30278]: I0318 18:20:46.901068 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/977956f8-854b-4c87-9485-c67f2be25e4c-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:47.614435 master-0 kubenswrapper[30278]: I0318 18:20:47.614058 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:47.619754 master-0 kubenswrapper[30278]: I0318 18:20:47.619686 30278 generic.go:334] "Generic (PLEG): container finished" podID="8b68cf46-84fc-418f-9c01-915501356564" containerID="c846084ab1d1864fded3953bbd85313f0b201b8dd632f29e8018ebc7fb0d0f4a" exitCode=0
Mar 18 18:20:47.619754 master-0 kubenswrapper[30278]: I0318 18:20:47.619751 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-internal-api-0" event={"ID":"8b68cf46-84fc-418f-9c01-915501356564","Type":"ContainerDied","Data":"c846084ab1d1864fded3953bbd85313f0b201b8dd632f29e8018ebc7fb0d0f4a"}
Mar 18 18:20:47.702957 master-0 kubenswrapper[30278]: I0318 18:20:47.702906 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-internal-api-0"
Mar 18 18:20:47.747154 master-0 kubenswrapper[30278]: I0318 18:20:47.743531 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-internal-tls-certs\") pod \"8b68cf46-84fc-418f-9c01-915501356564\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") "
Mar 18 18:20:47.747154 master-0 kubenswrapper[30278]: I0318 18:20:47.743667 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b68cf46-84fc-418f-9c01-915501356564-httpd-run\") pod \"8b68cf46-84fc-418f-9c01-915501356564\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") "
Mar 18 18:20:47.747154 master-0 kubenswrapper[30278]: I0318 18:20:47.743748 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb4rf\" (UniqueName: \"kubernetes.io/projected/8b68cf46-84fc-418f-9c01-915501356564-kube-api-access-mb4rf\") pod \"8b68cf46-84fc-418f-9c01-915501356564\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") "
Mar 18 18:20:47.747154 master-0 kubenswrapper[30278]: I0318 18:20:47.743897 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-config-data\") pod \"8b68cf46-84fc-418f-9c01-915501356564\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") "
Mar 18 18:20:47.747154 master-0 kubenswrapper[30278]: I0318 18:20:47.743966 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-combined-ca-bundle\") pod \"8b68cf46-84fc-418f-9c01-915501356564\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") "
Mar 18 18:20:47.747154 master-0 kubenswrapper[30278]: I0318 18:20:47.744076 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b68cf46-84fc-418f-9c01-915501356564-logs\") pod \"8b68cf46-84fc-418f-9c01-915501356564\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") "
Mar 18 18:20:47.747154 master-0 kubenswrapper[30278]: I0318 18:20:47.745999 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b68cf46-84fc-418f-9c01-915501356564-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8b68cf46-84fc-418f-9c01-915501356564" (UID: "8b68cf46-84fc-418f-9c01-915501356564"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 18:20:47.747154 master-0 kubenswrapper[30278]: I0318 18:20:47.746659 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b68cf46-84fc-418f-9c01-915501356564-logs" (OuterVolumeSpecName: "logs") pod "8b68cf46-84fc-418f-9c01-915501356564" (UID: "8b68cf46-84fc-418f-9c01-915501356564"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 18:20:47.760977 master-0 kubenswrapper[30278]: I0318 18:20:47.760882 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b68cf46-84fc-418f-9c01-915501356564-kube-api-access-mb4rf" (OuterVolumeSpecName: "kube-api-access-mb4rf") pod "8b68cf46-84fc-418f-9c01-915501356564" (UID: "8b68cf46-84fc-418f-9c01-915501356564"). InnerVolumeSpecName "kube-api-access-mb4rf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:20:47.782139 master-0 kubenswrapper[30278]: I0318 18:20:47.781653 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") pod \"8b68cf46-84fc-418f-9c01-915501356564\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") "
Mar 18 18:20:47.782139 master-0 kubenswrapper[30278]: I0318 18:20:47.781787 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-scripts\") pod \"8b68cf46-84fc-418f-9c01-915501356564\" (UID: \"8b68cf46-84fc-418f-9c01-915501356564\") "
Mar 18 18:20:47.783359 master-0 kubenswrapper[30278]: I0318 18:20:47.783263 30278 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8b68cf46-84fc-418f-9c01-915501356564-httpd-run\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:47.783359 master-0 kubenswrapper[30278]: I0318 18:20:47.783326 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb4rf\" (UniqueName: \"kubernetes.io/projected/8b68cf46-84fc-418f-9c01-915501356564-kube-api-access-mb4rf\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:47.783359 master-0 kubenswrapper[30278]: I0318 18:20:47.783342 30278 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b68cf46-84fc-418f-9c01-915501356564-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:47.785914 master-0 kubenswrapper[30278]: I0318 18:20:47.785867 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b68cf46-84fc-418f-9c01-915501356564" (UID: "8b68cf46-84fc-418f-9c01-915501356564"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:20:47.815305 master-0 kubenswrapper[30278]: I0318 18:20:47.812814 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-824c8-default-external-api-0"]
Mar 18 18:20:47.840615 master-0 kubenswrapper[30278]: I0318 18:20:47.836681 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-scripts" (OuterVolumeSpecName: "scripts") pod "8b68cf46-84fc-418f-9c01-915501356564" (UID: "8b68cf46-84fc-418f-9c01-915501356564"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:20:47.848296 master-0 kubenswrapper[30278]: I0318 18:20:47.848196 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150" (OuterVolumeSpecName: "glance") pod "8b68cf46-84fc-418f-9c01-915501356564" (UID: "8b68cf46-84fc-418f-9c01-915501356564"). InnerVolumeSpecName "pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 18 18:20:47.873648 master-0 kubenswrapper[30278]: I0318 18:20:47.873478 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8b68cf46-84fc-418f-9c01-915501356564" (UID: "8b68cf46-84fc-418f-9c01-915501356564"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:20:47.882849 master-0 kubenswrapper[30278]: I0318 18:20:47.877906 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-config-data" (OuterVolumeSpecName: "config-data") pod "8b68cf46-84fc-418f-9c01-915501356564" (UID: "8b68cf46-84fc-418f-9c01-915501356564"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:20:47.886159 master-0 kubenswrapper[30278]: I0318 18:20:47.886090 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:47.886159 master-0 kubenswrapper[30278]: I0318 18:20:47.886156 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:47.886312 master-0 kubenswrapper[30278]: I0318 18:20:47.886197 30278 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") on node \"master-0\" "
Mar 18 18:20:47.886312 master-0 kubenswrapper[30278]: I0318 18:20:47.886211 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:47.886312 master-0 kubenswrapper[30278]: I0318 18:20:47.886223 30278 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b68cf46-84fc-418f-9c01-915501356564-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:47.896463 master-0 kubenswrapper[30278]: I0318 18:20:47.896356 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-824c8-default-external-api-0"]
Mar 18 18:20:47.918783 master-0 kubenswrapper[30278]: I0318 18:20:47.918694 30278 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 18 18:20:47.919258 master-0 kubenswrapper[30278]: I0318 18:20:47.919163 30278 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49" (UniqueName: "kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150") on node "master-0"
Mar 18 18:20:47.930956 master-0 kubenswrapper[30278]: I0318 18:20:47.930893 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-824c8-default-external-api-0"]
Mar 18 18:20:47.931890 master-0 kubenswrapper[30278]: E0318 18:20:47.931860 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977956f8-854b-4c87-9485-c67f2be25e4c" containerName="glance-httpd"
Mar 18 18:20:47.932002 master-0 kubenswrapper[30278]: I0318 18:20:47.931988 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="977956f8-854b-4c87-9485-c67f2be25e4c" containerName="glance-httpd"
Mar 18 18:20:47.932076 master-0 kubenswrapper[30278]: E0318 18:20:47.932065 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b68cf46-84fc-418f-9c01-915501356564" containerName="glance-httpd"
Mar 18 18:20:47.932139 master-0 kubenswrapper[30278]: I0318 18:20:47.932129 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b68cf46-84fc-418f-9c01-915501356564" containerName="glance-httpd"
Mar 18 18:20:47.932204 master-0 kubenswrapper[30278]: E0318 18:20:47.932195 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" containerName="ironic-api"
Mar 18 18:20:47.932268 master-0 kubenswrapper[30278]: I0318 18:20:47.932257 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25d0677-228e-4b99-bc1f-abbbceebffc4" containerName="ironic-api"
Mar 18 18:20:47.932374 master-0 kubenswrapper[30278]: E0318 18:20:47.932363 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="681bd0b0-8192-4ac5-9e57-2a5e4f575b1f" containerName="ironic-inspector-db-sync"
Mar 18 18:20:47.932460 master-0 kubenswrapper[30278]: I0318 18:20:47.932449 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="681bd0b0-8192-4ac5-9e57-2a5e4f575b1f" containerName="ironic-inspector-db-sync"
Mar 18 18:20:47.932560 master-0 kubenswrapper[30278]: E0318 18:20:47.932549 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="977956f8-854b-4c87-9485-c67f2be25e4c" containerName="glance-log"
Mar 18 18:20:47.932630 master-0 kubenswrapper[30278]: I0318 18:20:47.932619 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="977956f8-854b-4c87-9485-c67f2be25e4c" containerName="glance-log"
Mar 18 18:20:47.932709 master-0 kubenswrapper[30278]: E0318 18:20:47.932698 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b68cf46-84fc-418f-9c01-915501356564" containerName="glance-log"
Mar 18 18:20:47.932765 master-0 kubenswrapper[30278]: I0318 18:20:47.932755 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b68cf46-84fc-418f-9c01-915501356564" containerName="glance-log"
Mar 18 18:20:47.933186 master-0 kubenswrapper[30278]: I0318 18:20:47.933170 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="977956f8-854b-4c87-9485-c67f2be25e4c" containerName="glance-log"
Mar 18 18:20:47.933598 master-0 kubenswrapper[30278]: I0318 18:20:47.933311 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="977956f8-854b-4c87-9485-c67f2be25e4c" containerName="glance-httpd"
Mar 18 18:20:47.933727 master-0 kubenswrapper[30278]: I0318 18:20:47.933710 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b68cf46-84fc-418f-9c01-915501356564" containerName="glance-log"
Mar 18 18:20:47.933800 master-0 kubenswrapper[30278]: I0318 18:20:47.933789 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b68cf46-84fc-418f-9c01-915501356564" containerName="glance-httpd"
Mar 18 18:20:47.933873 master-0 kubenswrapper[30278]: I0318 18:20:47.933863 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="681bd0b0-8192-4ac5-9e57-2a5e4f575b1f" containerName="ironic-inspector-db-sync"
Mar 18 18:20:47.937616 master-0 kubenswrapper[30278]: I0318 18:20:47.937533 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:47.943414 master-0 kubenswrapper[30278]: I0318 18:20:47.943379 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Mar 18 18:20:47.944937 master-0 kubenswrapper[30278]: I0318 18:20:47.944919 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-824c8-default-external-config-data"
Mar 18 18:20:47.954307 master-0 kubenswrapper[30278]: I0318 18:20:47.951457 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-824c8-default-external-api-0"]
Mar 18 18:20:48.005623 master-0 kubenswrapper[30278]: I0318 18:20:47.988302 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e47bafb-66fb-4935-8d11-d134fed10f87-config-data\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.005623 master-0 kubenswrapper[30278]: I0318 18:20:47.988374 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.005623 master-0 kubenswrapper[30278]: I0318 18:20:47.988402 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e47bafb-66fb-4935-8d11-d134fed10f87-public-tls-certs\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.005623 master-0 kubenswrapper[30278]: I0318 18:20:47.988420 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e47bafb-66fb-4935-8d11-d134fed10f87-logs\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.005623 master-0 kubenswrapper[30278]: I0318 18:20:47.988548 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e47bafb-66fb-4935-8d11-d134fed10f87-combined-ca-bundle\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.005623 master-0 kubenswrapper[30278]: I0318 18:20:47.988592 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e47bafb-66fb-4935-8d11-d134fed10f87-scripts\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.005623 master-0 kubenswrapper[30278]: I0318 18:20:47.988624 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4kjm\" (UniqueName: \"kubernetes.io/projected/8e47bafb-66fb-4935-8d11-d134fed10f87-kube-api-access-s4kjm\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.005623 master-0 kubenswrapper[30278]: I0318 18:20:47.988684 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8e47bafb-66fb-4935-8d11-d134fed10f87-httpd-run\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.005623 master-0 kubenswrapper[30278]: I0318 18:20:47.988803 30278 reconciler_common.go:293] "Volume detached for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") on node \"master-0\" DevicePath \"\""
Mar 18 18:20:48.093046 master-0 kubenswrapper[30278]: I0318 18:20:48.092961 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e47bafb-66fb-4935-8d11-d134fed10f87-scripts\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.093046 master-0 kubenswrapper[30278]: I0318 18:20:48.093054 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4kjm\" (UniqueName: \"kubernetes.io/projected/8e47bafb-66fb-4935-8d11-d134fed10f87-kube-api-access-s4kjm\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.093367 master-0 kubenswrapper[30278]: I0318 18:20:48.093131 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8e47bafb-66fb-4935-8d11-d134fed10f87-httpd-run\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.093367 master-0 kubenswrapper[30278]: I0318 18:20:48.093233 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e47bafb-66fb-4935-8d11-d134fed10f87-config-data\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.093367 master-0 kubenswrapper[30278]: I0318 18:20:48.093264 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.093367 master-0 kubenswrapper[30278]: I0318 18:20:48.093304 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e47bafb-66fb-4935-8d11-d134fed10f87-public-tls-certs\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.093367 master-0 kubenswrapper[30278]: I0318 18:20:48.093323 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e47bafb-66fb-4935-8d11-d134fed10f87-logs\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.093518 master-0 kubenswrapper[30278]: I0318 18:20:48.093436 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e47bafb-66fb-4935-8d11-d134fed10f87-combined-ca-bundle\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.100302 master-0 kubenswrapper[30278]: I0318 18:20:48.098906 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8e47bafb-66fb-4935-8d11-d134fed10f87-httpd-run\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.100302 master-0 kubenswrapper[30278]: I0318 18:20:48.099309 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e47bafb-66fb-4935-8d11-d134fed10f87-logs\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.109309 master-0 kubenswrapper[30278]: I0318 18:20:48.105297 30278 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 18 18:20:48.109309 master-0 kubenswrapper[30278]: I0318 18:20:48.105347 30278 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/94c3d9a5864b2a0676e8a45c98800fb7c7e5f534272efb0ca320119ec8f41cb2/globalmount\"" pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.109309 master-0 kubenswrapper[30278]: I0318 18:20:48.105381 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e47bafb-66fb-4935-8d11-d134fed10f87-scripts\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.109309 master-0 kubenswrapper[30278]: I0318 18:20:48.107155 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e47bafb-66fb-4935-8d11-d134fed10f87-public-tls-certs\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.109309 master-0 kubenswrapper[30278]: I0318 18:20:48.108083 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e47bafb-66fb-4935-8d11-d134fed10f87-combined-ca-bundle\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.110882 master-0 kubenswrapper[30278]: I0318 18:20:48.110860 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e47bafb-66fb-4935-8d11-d134fed10f87-config-data\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.127171 master-0 kubenswrapper[30278]: I0318 18:20:48.127049 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4kjm\" (UniqueName: \"kubernetes.io/projected/8e47bafb-66fb-4935-8d11-d134fed10f87-kube-api-access-s4kjm\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0"
Mar 18 18:20:48.441151 master-0 kubenswrapper[30278]: I0318 18:20:48.432764 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c5fb6894c-9vqrx"]
Mar 18 18:20:48.441151 master-0 kubenswrapper[30278]: I0318 18:20:48.435302 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx"
Mar 18 18:20:48.501196 master-0 kubenswrapper[30278]: I0318 18:20:48.500333 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c5fb6894c-9vqrx"]
Mar 18 18:20:48.511853 master-0 kubenswrapper[30278]: I0318 18:20:48.507156 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-dns-svc\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx"
Mar 18 18:20:48.511853 master-0 kubenswrapper[30278]: I0318 18:20:48.507231 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-dns-swift-storage-0\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx"
Mar 18 18:20:48.511853 master-0 kubenswrapper[30278]: I0318 18:20:48.507290 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-ovsdbserver-sb\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx"
Mar 18 18:20:48.511853 master-0 kubenswrapper[30278]: I0318 18:20:48.507452 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdm2b\" (UniqueName: \"kubernetes.io/projected/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-kube-api-access-rdm2b\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx"
Mar 18 18:20:48.511853 master-0 kubenswrapper[30278]: I0318 18:20:48.507498 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-ovsdbserver-nb\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx"
Mar 18 18:20:48.511853 master-0 kubenswrapper[30278]: I0318 18:20:48.507546 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-config\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx"
Mar 18 18:20:48.616333 master-0 kubenswrapper[30278]: I0318 18:20:48.614034 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName:
\"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-dns-swift-storage-0\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:20:48.616333 master-0 kubenswrapper[30278]: I0318 18:20:48.614122 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-ovsdbserver-sb\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:20:48.616333 master-0 kubenswrapper[30278]: I0318 18:20:48.614322 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdm2b\" (UniqueName: \"kubernetes.io/projected/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-kube-api-access-rdm2b\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:20:48.616333 master-0 kubenswrapper[30278]: I0318 18:20:48.614377 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-ovsdbserver-nb\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:20:48.616333 master-0 kubenswrapper[30278]: I0318 18:20:48.614440 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-config\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:20:48.616333 master-0 kubenswrapper[30278]: I0318 18:20:48.614507 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-dns-svc\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:20:48.616333 master-0 kubenswrapper[30278]: I0318 18:20:48.615560 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-dns-svc\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:20:48.633536 master-0 kubenswrapper[30278]: I0318 18:20:48.631528 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-dns-swift-storage-0\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:20:48.633536 master-0 kubenswrapper[30278]: I0318 18:20:48.631937 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-ovsdbserver-sb\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:20:48.633536 master-0 kubenswrapper[30278]: I0318 18:20:48.632602 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-config\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:20:48.633536 master-0 kubenswrapper[30278]: I0318 18:20:48.632743 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-ovsdbserver-nb\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:20:48.653451 master-0 kubenswrapper[30278]: I0318 18:20:48.651019 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Mar 18 18:20:48.658108 master-0 kubenswrapper[30278]: I0318 18:20:48.658052 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 18 18:20:48.661978 master-0 kubenswrapper[30278]: I0318 18:20:48.661924 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Mar 18 18:20:48.662459 master-0 kubenswrapper[30278]: I0318 18:20:48.662117 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Mar 18 18:20:48.662459 master-0 kubenswrapper[30278]: I0318 18:20:48.662318 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Mar 18 18:20:48.663433 master-0 kubenswrapper[30278]: I0318 18:20:48.663396 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdm2b\" (UniqueName: \"kubernetes.io/projected/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-kube-api-access-rdm2b\") pod \"dnsmasq-dns-6c5fb6894c-9vqrx\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:20:48.698509 master-0 kubenswrapper[30278]: I0318 18:20:48.696063 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-internal-api-0" event={"ID":"8b68cf46-84fc-418f-9c01-915501356564","Type":"ContainerDied","Data":"b210bd87d2938ab2e1e8490aaafed8058301eef43cdb4f631906bab135491d8a"} Mar 18 18:20:48.698509 master-0 kubenswrapper[30278]: I0318 18:20:48.696129 30278 scope.go:117] "RemoveContainer" 
containerID="c846084ab1d1864fded3953bbd85313f0b201b8dd632f29e8018ebc7fb0d0f4a" Mar 18 18:20:48.698509 master-0 kubenswrapper[30278]: I0318 18:20:48.696294 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:48.747714 master-0 kubenswrapper[30278]: I0318 18:20:48.745914 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 18 18:20:48.795755 master-0 kubenswrapper[30278]: I0318 18:20:48.795693 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:20:48.810990 master-0 kubenswrapper[30278]: I0318 18:20:48.808492 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-824c8-default-internal-api-0"] Mar 18 18:20:48.817157 master-0 kubenswrapper[30278]: I0318 18:20:48.817096 30278 scope.go:117] "RemoveContainer" containerID="12f888b489aa0e87b8b8d9e347d25c40f5ff39fc8456b52c776698003f1f51eb" Mar 18 18:20:48.827891 master-0 kubenswrapper[30278]: I0318 18:20:48.827852 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-config\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.828092 master-0 kubenswrapper[30278]: I0318 18:20:48.828054 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.828192 master-0 kubenswrapper[30278]: I0318 18:20:48.828175 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-7579k\" (UniqueName: \"kubernetes.io/projected/7078ef0d-3907-46f8-8b84-3bc49fef827b-kube-api-access-7579k\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.829439 master-0 kubenswrapper[30278]: I0318 18:20:48.829420 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7078ef0d-3907-46f8-8b84-3bc49fef827b-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.829582 master-0 kubenswrapper[30278]: I0318 18:20:48.829565 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-scripts\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.829671 master-0 kubenswrapper[30278]: I0318 18:20:48.829658 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7078ef0d-3907-46f8-8b84-3bc49fef827b-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.830150 master-0 kubenswrapper[30278]: I0318 18:20:48.830131 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7078ef0d-3907-46f8-8b84-3bc49fef827b-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.830384 master-0 kubenswrapper[30278]: I0318 18:20:48.830366 30278 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/glance-824c8-default-internal-api-0"] Mar 18 18:20:48.889312 master-0 kubenswrapper[30278]: I0318 18:20:48.888333 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-824c8-default-internal-api-0"] Mar 18 18:20:48.894734 master-0 kubenswrapper[30278]: I0318 18:20:48.894673 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:48.900694 master-0 kubenswrapper[30278]: I0318 18:20:48.900645 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-824c8-default-internal-api-0"] Mar 18 18:20:48.902528 master-0 kubenswrapper[30278]: I0318 18:20:48.902334 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-824c8-default-internal-config-data" Mar 18 18:20:48.911239 master-0 kubenswrapper[30278]: I0318 18:20:48.908762 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 18 18:20:48.939565 master-0 kubenswrapper[30278]: I0318 18:20:48.938137 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7078ef0d-3907-46f8-8b84-3bc49fef827b-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.939565 master-0 kubenswrapper[30278]: I0318 18:20:48.938215 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-config\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.939565 master-0 kubenswrapper[30278]: I0318 18:20:48.938250 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.939565 master-0 kubenswrapper[30278]: I0318 18:20:48.938274 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7579k\" (UniqueName: \"kubernetes.io/projected/7078ef0d-3907-46f8-8b84-3bc49fef827b-kube-api-access-7579k\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.939565 master-0 kubenswrapper[30278]: I0318 18:20:48.938388 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7078ef0d-3907-46f8-8b84-3bc49fef827b-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.939565 master-0 kubenswrapper[30278]: I0318 18:20:48.938432 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-scripts\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.939565 master-0 kubenswrapper[30278]: I0318 18:20:48.938451 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7078ef0d-3907-46f8-8b84-3bc49fef827b-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.939565 master-0 kubenswrapper[30278]: I0318 18:20:48.938943 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: 
\"kubernetes.io/empty-dir/7078ef0d-3907-46f8-8b84-3bc49fef827b-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.943161 master-0 kubenswrapper[30278]: I0318 18:20:48.940246 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7078ef0d-3907-46f8-8b84-3bc49fef827b-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.948429 master-0 kubenswrapper[30278]: I0318 18:20:48.948065 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.968220 master-0 kubenswrapper[30278]: I0318 18:20:48.968075 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7579k\" (UniqueName: \"kubernetes.io/projected/7078ef0d-3907-46f8-8b84-3bc49fef827b-kube-api-access-7579k\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.971383 master-0 kubenswrapper[30278]: I0318 18:20:48.970144 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-config\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.971383 master-0 kubenswrapper[30278]: I0318 18:20:48.970777 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-scripts\") pod 
\"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:48.976068 master-0 kubenswrapper[30278]: I0318 18:20:48.975983 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7078ef0d-3907-46f8-8b84-3bc49fef827b-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") " pod="openstack/ironic-inspector-0" Mar 18 18:20:49.043614 master-0 kubenswrapper[30278]: I0318 18:20:49.041438 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.043614 master-0 kubenswrapper[30278]: I0318 18:20:49.041514 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c895c8-e64f-47dc-a6a6-61e0929add02-internal-tls-certs\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.043614 master-0 kubenswrapper[30278]: I0318 18:20:49.041559 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c895c8-e64f-47dc-a6a6-61e0929add02-scripts\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.043614 master-0 kubenswrapper[30278]: I0318 18:20:49.041638 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d4c895c8-e64f-47dc-a6a6-61e0929add02-httpd-run\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.043614 master-0 kubenswrapper[30278]: I0318 18:20:49.042860 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c895c8-e64f-47dc-a6a6-61e0929add02-config-data\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.043614 master-0 kubenswrapper[30278]: I0318 18:20:49.042996 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c895c8-e64f-47dc-a6a6-61e0929add02-combined-ca-bundle\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.043614 master-0 kubenswrapper[30278]: I0318 18:20:49.043068 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9mk5\" (UniqueName: \"kubernetes.io/projected/d4c895c8-e64f-47dc-a6a6-61e0929add02-kube-api-access-h9mk5\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.043614 master-0 kubenswrapper[30278]: I0318 18:20:49.043247 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c895c8-e64f-47dc-a6a6-61e0929add02-logs\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.054621 master-0 
kubenswrapper[30278]: I0318 18:20:49.054266 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Mar 18 18:20:49.094354 master-0 kubenswrapper[30278]: I0318 18:20:49.094294 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b68cf46-84fc-418f-9c01-915501356564" path="/var/lib/kubelet/pods/8b68cf46-84fc-418f-9c01-915501356564/volumes" Mar 18 18:20:49.097726 master-0 kubenswrapper[30278]: I0318 18:20:49.097690 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="977956f8-854b-4c87-9485-c67f2be25e4c" path="/var/lib/kubelet/pods/977956f8-854b-4c87-9485-c67f2be25e4c/volumes" Mar 18 18:20:49.134405 master-0 kubenswrapper[30278]: I0318 18:20:49.133379 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3e454bbb-ecf6-4956-8a52-dc3d9c4be123\" (UniqueName: \"kubernetes.io/csi/topolvm.io^f0964722-5a61-42c7-8300-def06defe560\") pod \"glance-824c8-default-external-api-0\" (UID: \"8e47bafb-66fb-4935-8d11-d134fed10f87\") " pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:20:49.146930 master-0 kubenswrapper[30278]: I0318 18:20:49.146877 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4c895c8-e64f-47dc-a6a6-61e0929add02-httpd-run\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.148330 master-0 kubenswrapper[30278]: I0318 18:20:49.148293 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d4c895c8-e64f-47dc-a6a6-61e0929add02-httpd-run\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.148602 master-0 kubenswrapper[30278]: I0318 18:20:49.148572 30278 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c895c8-e64f-47dc-a6a6-61e0929add02-config-data\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.148661 master-0 kubenswrapper[30278]: I0318 18:20:49.148639 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c895c8-e64f-47dc-a6a6-61e0929add02-combined-ca-bundle\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.148708 master-0 kubenswrapper[30278]: I0318 18:20:49.148687 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9mk5\" (UniqueName: \"kubernetes.io/projected/d4c895c8-e64f-47dc-a6a6-61e0929add02-kube-api-access-h9mk5\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.148838 master-0 kubenswrapper[30278]: I0318 18:20:49.148809 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c895c8-e64f-47dc-a6a6-61e0929add02-logs\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.148942 master-0 kubenswrapper[30278]: I0318 18:20:49.148921 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " 
pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.148986 master-0 kubenswrapper[30278]: I0318 18:20:49.148953 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c895c8-e64f-47dc-a6a6-61e0929add02-internal-tls-certs\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.149024 master-0 kubenswrapper[30278]: I0318 18:20:49.149009 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c895c8-e64f-47dc-a6a6-61e0929add02-scripts\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.151329 master-0 kubenswrapper[30278]: I0318 18:20:49.151296 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4c895c8-e64f-47dc-a6a6-61e0929add02-logs\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.155418 master-0 kubenswrapper[30278]: I0318 18:20:49.155387 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c895c8-e64f-47dc-a6a6-61e0929add02-combined-ca-bundle\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.156787 master-0 kubenswrapper[30278]: I0318 18:20:49.156745 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4c895c8-e64f-47dc-a6a6-61e0929add02-scripts\") pod \"glance-824c8-default-internal-api-0\" (UID: 
\"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.157512 master-0 kubenswrapper[30278]: I0318 18:20:49.157491 30278 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 18 18:20:49.157601 master-0 kubenswrapper[30278]: I0318 18:20:49.157524 30278 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/c03db859bc87c72425359af32b7c24b69cb9246d9bdaabebd809ecb82cb00bf5/globalmount\"" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.157850 master-0 kubenswrapper[30278]: I0318 18:20:49.157807 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4c895c8-e64f-47dc-a6a6-61e0929add02-internal-tls-certs\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.162464 master-0 kubenswrapper[30278]: I0318 18:20:49.159934 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4c895c8-e64f-47dc-a6a6-61e0929add02-config-data\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.180848 master-0 kubenswrapper[30278]: I0318 18:20:49.180794 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9mk5\" (UniqueName: \"kubernetes.io/projected/d4c895c8-e64f-47dc-a6a6-61e0929add02-kube-api-access-h9mk5\") pod 
\"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:49.184992 master-0 kubenswrapper[30278]: I0318 18:20:49.184601 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:20:49.700395 master-0 kubenswrapper[30278]: I0318 18:20:49.665006 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c5fb6894c-9vqrx"] Mar 18 18:20:49.738365 master-0 kubenswrapper[30278]: I0318 18:20:49.738254 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" event={"ID":"1b45f237-457c-45a6-9ea2-2f2ca11ec44e","Type":"ContainerStarted","Data":"3cbe694de05a7b545dda9518f4cb2273792c8ac0e750898f96fbcc0fbb5cecd1"} Mar 18 18:20:49.965628 master-0 kubenswrapper[30278]: W0318 18:20:49.965360 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7078ef0d_3907_46f8_8b84_3bc49fef827b.slice/crio-e6943adab2264c18d7bf621a7c9a46b407755cb260156eb5817d89119d84c918 WatchSource:0}: Error finding container e6943adab2264c18d7bf621a7c9a46b407755cb260156eb5817d89119d84c918: Status 404 returned error can't find the container with id e6943adab2264c18d7bf621a7c9a46b407755cb260156eb5817d89119d84c918 Mar 18 18:20:50.028675 master-0 kubenswrapper[30278]: I0318 18:20:50.028616 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 18 18:20:50.081527 master-0 kubenswrapper[30278]: I0318 18:20:50.075652 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-73c4547d-a129-4633-b041-3e0c5a9c7e49\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8ca167e8-4fa3-444f-a5bf-31afa92c7150\") pod \"glance-824c8-default-internal-api-0\" (UID: \"d4c895c8-e64f-47dc-a6a6-61e0929add02\") " 
pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:50.252688 master-0 kubenswrapper[30278]: I0318 18:20:50.252509 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-824c8-default-external-api-0"] Mar 18 18:20:50.271168 master-0 kubenswrapper[30278]: I0318 18:20:50.270611 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:20:50.760690 master-0 kubenswrapper[30278]: I0318 18:20:50.760526 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-external-api-0" event={"ID":"8e47bafb-66fb-4935-8d11-d134fed10f87","Type":"ContainerStarted","Data":"4c1cddfa382efb1ddbd1153a334572fbdebca57a7c3da0b16628cc482cf6a137"} Mar 18 18:20:50.765229 master-0 kubenswrapper[30278]: I0318 18:20:50.765072 30278 generic.go:334] "Generic (PLEG): container finished" podID="1b45f237-457c-45a6-9ea2-2f2ca11ec44e" containerID="d26ffee6ca37258543224f8ac4fa15a9549d278b13c41116355c45da1cf7cc87" exitCode=0 Mar 18 18:20:50.765229 master-0 kubenswrapper[30278]: I0318 18:20:50.765170 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" event={"ID":"1b45f237-457c-45a6-9ea2-2f2ca11ec44e","Type":"ContainerDied","Data":"d26ffee6ca37258543224f8ac4fa15a9549d278b13c41116355c45da1cf7cc87"} Mar 18 18:20:50.772025 master-0 kubenswrapper[30278]: I0318 18:20:50.771974 30278 generic.go:334] "Generic (PLEG): container finished" podID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerID="7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606" exitCode=0 Mar 18 18:20:50.772148 master-0 kubenswrapper[30278]: I0318 18:20:50.772036 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7078ef0d-3907-46f8-8b84-3bc49fef827b","Type":"ContainerDied","Data":"7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606"} Mar 18 18:20:50.772148 master-0 
kubenswrapper[30278]: I0318 18:20:50.772068 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7078ef0d-3907-46f8-8b84-3bc49fef827b","Type":"ContainerStarted","Data":"e6943adab2264c18d7bf621a7c9a46b407755cb260156eb5817d89119d84c918"} Mar 18 18:20:51.090609 master-0 kubenswrapper[30278]: I0318 18:20:51.090563 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-824c8-default-internal-api-0"] Mar 18 18:20:51.099657 master-0 kubenswrapper[30278]: W0318 18:20:51.099554 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4c895c8_e64f_47dc_a6a6_61e0929add02.slice/crio-0677b8608664252b4d469e165a578c5950236c8e3f08a1ee65fe1dba689c22a6 WatchSource:0}: Error finding container 0677b8608664252b4d469e165a578c5950236c8e3f08a1ee65fe1dba689c22a6: Status 404 returned error can't find the container with id 0677b8608664252b4d469e165a578c5950236c8e3f08a1ee65fe1dba689c22a6 Mar 18 18:20:51.795429 master-0 kubenswrapper[30278]: I0318 18:20:51.795256 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" event={"ID":"1b45f237-457c-45a6-9ea2-2f2ca11ec44e","Type":"ContainerStarted","Data":"86608609da1e1c313d7f15f38986a33ddcbc855ab390c824172a198e5b6752ba"} Mar 18 18:20:51.795971 master-0 kubenswrapper[30278]: I0318 18:20:51.795449 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:20:51.822689 master-0 kubenswrapper[30278]: I0318 18:20:51.822562 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-internal-api-0" event={"ID":"d4c895c8-e64f-47dc-a6a6-61e0929add02","Type":"ContainerStarted","Data":"0677b8608664252b4d469e165a578c5950236c8e3f08a1ee65fe1dba689c22a6"} Mar 18 18:20:51.904305 master-0 kubenswrapper[30278]: I0318 18:20:51.903173 30278 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" podStartSLOduration=3.9031562539999998 podStartE2EDuration="3.903156254s" podCreationTimestamp="2026-03-18 18:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:20:51.90229398 +0000 UTC m=+1221.069478575" watchObservedRunningTime="2026-03-18 18:20:51.903156254 +0000 UTC m=+1221.070340849" Mar 18 18:20:52.751923 master-0 kubenswrapper[30278]: I0318 18:20:52.751511 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Mar 18 18:20:52.861659 master-0 kubenswrapper[30278]: I0318 18:20:52.861578 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-internal-api-0" event={"ID":"d4c895c8-e64f-47dc-a6a6-61e0929add02","Type":"ContainerStarted","Data":"1feb52be98d0ed5e0c9e155de3a6c212baf636b8725cc420cdf9620b2141354c"} Mar 18 18:20:52.871026 master-0 kubenswrapper[30278]: I0318 18:20:52.870962 30278 generic.go:334] "Generic (PLEG): container finished" podID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerID="8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f" exitCode=0 Mar 18 18:20:52.871268 master-0 kubenswrapper[30278]: I0318 18:20:52.871055 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7078ef0d-3907-46f8-8b84-3bc49fef827b","Type":"ContainerDied","Data":"8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f"} Mar 18 18:20:52.880772 master-0 kubenswrapper[30278]: I0318 18:20:52.880709 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-external-api-0" event={"ID":"8e47bafb-66fb-4935-8d11-d134fed10f87","Type":"ContainerStarted","Data":"6322851bf07a2e398890b9bca9036160d9e7a98155e01d7f78856f75861d343c"} Mar 18 18:20:53.902134 master-0 kubenswrapper[30278]: I0318 
18:20:53.901776 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-internal-api-0" event={"ID":"d4c895c8-e64f-47dc-a6a6-61e0929add02","Type":"ContainerStarted","Data":"856b37b57281dac92385e6df8d7ae58e40335737e601e7fa8e46befbd363a437"} Mar 18 18:20:53.909214 master-0 kubenswrapper[30278]: I0318 18:20:53.907470 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-824c8-default-external-api-0" event={"ID":"8e47bafb-66fb-4935-8d11-d134fed10f87","Type":"ContainerStarted","Data":"aa286fef570f2695610b60d54e27672f34cadd3c6301cc92e2d22293e54bf9c4"} Mar 18 18:20:54.020310 master-0 kubenswrapper[30278]: I0318 18:20:54.019015 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-824c8-default-internal-api-0" podStartSLOduration=6.018992443 podStartE2EDuration="6.018992443s" podCreationTimestamp="2026-03-18 18:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:20:54.013447994 +0000 UTC m=+1223.180632589" watchObservedRunningTime="2026-03-18 18:20:54.018992443 +0000 UTC m=+1223.186177038" Mar 18 18:20:54.104551 master-0 kubenswrapper[30278]: I0318 18:20:54.104424 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-824c8-default-external-api-0" podStartSLOduration=7.104400343 podStartE2EDuration="7.104400343s" podCreationTimestamp="2026-03-18 18:20:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:20:54.071658112 +0000 UTC m=+1223.238842707" watchObservedRunningTime="2026-03-18 18:20:54.104400343 +0000 UTC m=+1223.271584938" Mar 18 18:20:58.798599 master-0 kubenswrapper[30278]: I0318 18:20:58.798513 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 
18:20:59.185712 master-0 kubenswrapper[30278]: I0318 18:20:59.185460 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:20:59.185712 master-0 kubenswrapper[30278]: I0318 18:20:59.185555 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:20:59.242480 master-0 kubenswrapper[30278]: I0318 18:20:59.241659 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:20:59.242661 master-0 kubenswrapper[30278]: I0318 18:20:59.242634 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:20:59.366860 master-0 kubenswrapper[30278]: I0318 18:20:59.364520 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c4bc7d979-gstcd"] Mar 18 18:20:59.366860 master-0 kubenswrapper[30278]: I0318 18:20:59.365072 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" podUID="200c8f5b-bd48-4587-9a90-f2cba299bc43" containerName="dnsmasq-dns" containerID="cri-o://0b1a11717d99f933e5d47e9dbfcf741d5937fc13059549c8ed78a37fa0015b7f" gracePeriod=10 Mar 18 18:20:59.574406 master-0 kubenswrapper[30278]: I0318 18:20:59.560236 30278 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" podUID="200c8f5b-bd48-4587-9a90-f2cba299bc43" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.238:5353: connect: connection refused" Mar 18 18:21:00.093303 master-0 kubenswrapper[30278]: I0318 18:21:00.092460 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:21:00.093303 master-0 kubenswrapper[30278]: I0318 18:21:00.092537 30278 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:21:00.272011 master-0 kubenswrapper[30278]: I0318 18:21:00.271871 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:21:00.273587 master-0 kubenswrapper[30278]: I0318 18:21:00.273517 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:21:00.337989 master-0 kubenswrapper[30278]: I0318 18:21:00.337916 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:21:00.364921 master-0 kubenswrapper[30278]: I0318 18:21:00.364765 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:21:01.112410 master-0 kubenswrapper[30278]: I0318 18:21:01.112306 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:21:01.113208 master-0 kubenswrapper[30278]: I0318 18:21:01.112643 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:21:02.120308 master-0 kubenswrapper[30278]: I0318 18:21:02.119881 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:21:02.134307 master-0 kubenswrapper[30278]: I0318 18:21:02.132740 30278 generic.go:334] "Generic (PLEG): container finished" podID="200c8f5b-bd48-4587-9a90-f2cba299bc43" containerID="0b1a11717d99f933e5d47e9dbfcf741d5937fc13059549c8ed78a37fa0015b7f" exitCode=0 Mar 18 18:21:02.134307 master-0 kubenswrapper[30278]: I0318 18:21:02.132802 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" Mar 18 18:21:02.134307 master-0 kubenswrapper[30278]: I0318 18:21:02.132861 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" event={"ID":"200c8f5b-bd48-4587-9a90-f2cba299bc43","Type":"ContainerDied","Data":"0b1a11717d99f933e5d47e9dbfcf741d5937fc13059549c8ed78a37fa0015b7f"} Mar 18 18:21:02.134307 master-0 kubenswrapper[30278]: I0318 18:21:02.132892 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c4bc7d979-gstcd" event={"ID":"200c8f5b-bd48-4587-9a90-f2cba299bc43","Type":"ContainerDied","Data":"3eb89e297f5fe07b5fb7fe70b4a39e1c66f4591ab967899363eca137e1fd0631"} Mar 18 18:21:02.134307 master-0 kubenswrapper[30278]: I0318 18:21:02.132930 30278 scope.go:117] "RemoveContainer" containerID="0b1a11717d99f933e5d47e9dbfcf741d5937fc13059549c8ed78a37fa0015b7f" Mar 18 18:21:02.140294 master-0 kubenswrapper[30278]: I0318 18:21:02.138653 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qn2jb" event={"ID":"75582986-df2a-4948-994c-643227b19932","Type":"ContainerStarted","Data":"50e8a6b6e1bfb66d3e242e20802d5ad89cf5c634881da24abc966aa9ddb5812a"} Mar 18 18:21:02.170547 master-0 kubenswrapper[30278]: I0318 18:21:02.166908 30278 scope.go:117] "RemoveContainer" containerID="4010cf41c57c95db0afb3d37055ba24fc11b7ca3457f8cd04f80bd1c9958414f" Mar 18 18:21:02.232393 master-0 kubenswrapper[30278]: I0318 18:21:02.232072 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfg8d\" (UniqueName: \"kubernetes.io/projected/200c8f5b-bd48-4587-9a90-f2cba299bc43-kube-api-access-zfg8d\") pod \"200c8f5b-bd48-4587-9a90-f2cba299bc43\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " Mar 18 18:21:02.232393 master-0 kubenswrapper[30278]: I0318 18:21:02.232183 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-dns-svc\") pod \"200c8f5b-bd48-4587-9a90-f2cba299bc43\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " Mar 18 18:21:02.232393 master-0 kubenswrapper[30278]: I0318 18:21:02.232307 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-config\") pod \"200c8f5b-bd48-4587-9a90-f2cba299bc43\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " Mar 18 18:21:02.232393 master-0 kubenswrapper[30278]: I0318 18:21:02.232345 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-ovsdbserver-nb\") pod \"200c8f5b-bd48-4587-9a90-f2cba299bc43\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " Mar 18 18:21:02.232909 master-0 kubenswrapper[30278]: I0318 18:21:02.232440 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-ovsdbserver-sb\") pod \"200c8f5b-bd48-4587-9a90-f2cba299bc43\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " Mar 18 18:21:02.232909 master-0 kubenswrapper[30278]: I0318 18:21:02.232530 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-dns-swift-storage-0\") pod \"200c8f5b-bd48-4587-9a90-f2cba299bc43\" (UID: \"200c8f5b-bd48-4587-9a90-f2cba299bc43\") " Mar 18 18:21:02.264425 master-0 kubenswrapper[30278]: I0318 18:21:02.258040 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qn2jb" podStartSLOduration=4.574930431 podStartE2EDuration="20.258011828s" podCreationTimestamp="2026-03-18 18:20:42 +0000 UTC" 
firstStartedPulling="2026-03-18 18:20:45.835536435 +0000 UTC m=+1215.002721030" lastFinishedPulling="2026-03-18 18:21:01.518617832 +0000 UTC m=+1230.685802427" observedRunningTime="2026-03-18 18:21:02.192945425 +0000 UTC m=+1231.360130020" watchObservedRunningTime="2026-03-18 18:21:02.258011828 +0000 UTC m=+1231.425196423" Mar 18 18:21:02.264425 master-0 kubenswrapper[30278]: I0318 18:21:02.259135 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200c8f5b-bd48-4587-9a90-f2cba299bc43-kube-api-access-zfg8d" (OuterVolumeSpecName: "kube-api-access-zfg8d") pod "200c8f5b-bd48-4587-9a90-f2cba299bc43" (UID: "200c8f5b-bd48-4587-9a90-f2cba299bc43"). InnerVolumeSpecName "kube-api-access-zfg8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:21:02.264425 master-0 kubenswrapper[30278]: I0318 18:21:02.259575 30278 scope.go:117] "RemoveContainer" containerID="0b1a11717d99f933e5d47e9dbfcf741d5937fc13059549c8ed78a37fa0015b7f" Mar 18 18:21:02.271293 master-0 kubenswrapper[30278]: E0318 18:21:02.268008 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b1a11717d99f933e5d47e9dbfcf741d5937fc13059549c8ed78a37fa0015b7f\": container with ID starting with 0b1a11717d99f933e5d47e9dbfcf741d5937fc13059549c8ed78a37fa0015b7f not found: ID does not exist" containerID="0b1a11717d99f933e5d47e9dbfcf741d5937fc13059549c8ed78a37fa0015b7f" Mar 18 18:21:02.271293 master-0 kubenswrapper[30278]: I0318 18:21:02.268128 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1a11717d99f933e5d47e9dbfcf741d5937fc13059549c8ed78a37fa0015b7f"} err="failed to get container status \"0b1a11717d99f933e5d47e9dbfcf741d5937fc13059549c8ed78a37fa0015b7f\": rpc error: code = NotFound desc = could not find container \"0b1a11717d99f933e5d47e9dbfcf741d5937fc13059549c8ed78a37fa0015b7f\": container with ID starting with 
0b1a11717d99f933e5d47e9dbfcf741d5937fc13059549c8ed78a37fa0015b7f not found: ID does not exist" Mar 18 18:21:02.271293 master-0 kubenswrapper[30278]: I0318 18:21:02.268189 30278 scope.go:117] "RemoveContainer" containerID="4010cf41c57c95db0afb3d37055ba24fc11b7ca3457f8cd04f80bd1c9958414f" Mar 18 18:21:02.286176 master-0 kubenswrapper[30278]: E0318 18:21:02.286108 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4010cf41c57c95db0afb3d37055ba24fc11b7ca3457f8cd04f80bd1c9958414f\": container with ID starting with 4010cf41c57c95db0afb3d37055ba24fc11b7ca3457f8cd04f80bd1c9958414f not found: ID does not exist" containerID="4010cf41c57c95db0afb3d37055ba24fc11b7ca3457f8cd04f80bd1c9958414f" Mar 18 18:21:02.286176 master-0 kubenswrapper[30278]: I0318 18:21:02.286174 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4010cf41c57c95db0afb3d37055ba24fc11b7ca3457f8cd04f80bd1c9958414f"} err="failed to get container status \"4010cf41c57c95db0afb3d37055ba24fc11b7ca3457f8cd04f80bd1c9958414f\": rpc error: code = NotFound desc = could not find container \"4010cf41c57c95db0afb3d37055ba24fc11b7ca3457f8cd04f80bd1c9958414f\": container with ID starting with 4010cf41c57c95db0afb3d37055ba24fc11b7ca3457f8cd04f80bd1c9958414f not found: ID does not exist" Mar 18 18:21:02.304221 master-0 kubenswrapper[30278]: I0318 18:21:02.304146 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "200c8f5b-bd48-4587-9a90-f2cba299bc43" (UID: "200c8f5b-bd48-4587-9a90-f2cba299bc43"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:21:02.320088 master-0 kubenswrapper[30278]: I0318 18:21:02.320006 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-config" (OuterVolumeSpecName: "config") pod "200c8f5b-bd48-4587-9a90-f2cba299bc43" (UID: "200c8f5b-bd48-4587-9a90-f2cba299bc43"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:21:02.323205 master-0 kubenswrapper[30278]: I0318 18:21:02.323164 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "200c8f5b-bd48-4587-9a90-f2cba299bc43" (UID: "200c8f5b-bd48-4587-9a90-f2cba299bc43"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:21:02.336578 master-0 kubenswrapper[30278]: I0318 18:21:02.336513 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:02.336578 master-0 kubenswrapper[30278]: I0318 18:21:02.336555 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfg8d\" (UniqueName: \"kubernetes.io/projected/200c8f5b-bd48-4587-9a90-f2cba299bc43-kube-api-access-zfg8d\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:02.336578 master-0 kubenswrapper[30278]: I0318 18:21:02.336568 30278 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:02.336578 master-0 kubenswrapper[30278]: I0318 18:21:02.336578 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:02.347774 master-0 kubenswrapper[30278]: I0318 18:21:02.347705 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "200c8f5b-bd48-4587-9a90-f2cba299bc43" (UID: "200c8f5b-bd48-4587-9a90-f2cba299bc43"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:21:02.377066 master-0 kubenswrapper[30278]: I0318 18:21:02.376971 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "200c8f5b-bd48-4587-9a90-f2cba299bc43" (UID: "200c8f5b-bd48-4587-9a90-f2cba299bc43"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:21:02.439516 master-0 kubenswrapper[30278]: I0318 18:21:02.439449 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:02.439622 master-0 kubenswrapper[30278]: I0318 18:21:02.439519 30278 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200c8f5b-bd48-4587-9a90-f2cba299bc43-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:02.569961 master-0 kubenswrapper[30278]: I0318 18:21:02.569888 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c4bc7d979-gstcd"] Mar 18 18:21:02.590895 master-0 kubenswrapper[30278]: I0318 18:21:02.590815 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c4bc7d979-gstcd"] Mar 18 18:21:03.072629 
master-0 kubenswrapper[30278]: I0318 18:21:03.071809 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="200c8f5b-bd48-4587-9a90-f2cba299bc43" path="/var/lib/kubelet/pods/200c8f5b-bd48-4587-9a90-f2cba299bc43/volumes" Mar 18 18:21:03.167994 master-0 kubenswrapper[30278]: I0318 18:21:03.167675 30278 generic.go:334] "Generic (PLEG): container finished" podID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerID="f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161" exitCode=1 Mar 18 18:21:03.167994 master-0 kubenswrapper[30278]: I0318 18:21:03.167846 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7078ef0d-3907-46f8-8b84-3bc49fef827b","Type":"ContainerStarted","Data":"604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f"} Mar 18 18:21:03.167994 master-0 kubenswrapper[30278]: I0318 18:21:03.167887 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7078ef0d-3907-46f8-8b84-3bc49fef827b","Type":"ContainerDied","Data":"f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161"} Mar 18 18:21:04.212085 master-0 kubenswrapper[30278]: I0318 18:21:04.212022 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7078ef0d-3907-46f8-8b84-3bc49fef827b","Type":"ContainerStarted","Data":"7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5"} Mar 18 18:21:04.947496 master-0 kubenswrapper[30278]: I0318 18:21:04.947223 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:21:04.947496 master-0 kubenswrapper[30278]: I0318 18:21:04.947480 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:21:04.997818 master-0 kubenswrapper[30278]: I0318 18:21:04.996178 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-824c8-default-external-api-0" Mar 18 18:21:05.009247 master-0 kubenswrapper[30278]: I0318 18:21:05.009179 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:21:05.009609 master-0 kubenswrapper[30278]: I0318 18:21:05.009377 30278 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 18:21:05.022406 master-0 kubenswrapper[30278]: I0318 18:21:05.022172 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-824c8-default-internal-api-0" Mar 18 18:21:05.255981 master-0 kubenswrapper[30278]: I0318 18:21:05.255919 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="ironic-inspector" containerID="cri-o://604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f" gracePeriod=60 Mar 18 18:21:05.256690 master-0 kubenswrapper[30278]: I0318 18:21:05.256253 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7078ef0d-3907-46f8-8b84-3bc49fef827b","Type":"ContainerStarted","Data":"bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c"} Mar 18 18:21:05.256690 master-0 kubenswrapper[30278]: I0318 18:21:05.256306 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7078ef0d-3907-46f8-8b84-3bc49fef827b","Type":"ContainerStarted","Data":"211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b"} Mar 18 18:21:05.256690 master-0 kubenswrapper[30278]: I0318 18:21:05.256543 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 18 18:21:05.256690 master-0 kubenswrapper[30278]: I0318 18:21:05.256568 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 18 
18:21:05.257302 master-0 kubenswrapper[30278]: I0318 18:21:05.257174 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="inspector-dnsmasq" containerID="cri-o://bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c" gracePeriod=60 Mar 18 18:21:05.257891 master-0 kubenswrapper[30278]: I0318 18:21:05.257774 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="inspector-httpboot" containerID="cri-o://7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5" gracePeriod=60 Mar 18 18:21:05.319426 master-0 kubenswrapper[30278]: I0318 18:21:05.319358 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-inspector-0" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="ramdisk-logs" containerID="cri-o://211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b" gracePeriod=60 Mar 18 18:21:06.272992 master-0 kubenswrapper[30278]: I0318 18:21:06.272894 30278 generic.go:334] "Generic (PLEG): container finished" podID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerID="211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b" exitCode=0 Mar 18 18:21:06.272992 master-0 kubenswrapper[30278]: I0318 18:21:06.272950 30278 generic.go:334] "Generic (PLEG): container finished" podID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerID="7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5" exitCode=0 Mar 18 18:21:06.272992 master-0 kubenswrapper[30278]: I0318 18:21:06.272981 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7078ef0d-3907-46f8-8b84-3bc49fef827b","Type":"ContainerDied","Data":"211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b"} Mar 18 18:21:06.273993 master-0 
kubenswrapper[30278]: I0318 18:21:06.273021 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7078ef0d-3907-46f8-8b84-3bc49fef827b","Type":"ContainerDied","Data":"7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5"}
Mar 18 18:21:09.078305 master-0 kubenswrapper[30278]: I0318 18:21:09.078207 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Mar 18 18:21:09.078305 master-0 kubenswrapper[30278]: I0318 18:21:09.078328 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Mar 18 18:21:19.101933 master-0 kubenswrapper[30278]: E0318 18:21:19.101772 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c" cmd=["sh","-c","ss -lun | grep :69"]
Mar 18 18:21:19.107257 master-0 kubenswrapper[30278]: I0318 18:21:19.107046 30278 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ironic-inspector-0" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="inspector-httpboot" probeResult="failure" output="dial tcp 10.128.0.255:8088: connect: connection refused"
Mar 18 18:21:19.107257 master-0 kubenswrapper[30278]: E0318 18:21:19.107060 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c" cmd=["sh","-c","ss -lun | grep :69"]
Mar 18 18:21:19.112291 master-0 kubenswrapper[30278]: E0318 18:21:19.111281 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c" cmd=["sh","-c","ss -lun | grep :69"]
Mar 18 18:21:19.112385 master-0 kubenswrapper[30278]: E0318 18:21:19.112314 30278 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ironic-inspector-0" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="inspector-dnsmasq"
Mar 18 18:21:21.492579 master-0 kubenswrapper[30278]: I0318 18:21:21.492456 30278 generic.go:334] "Generic (PLEG): container finished" podID="75582986-df2a-4948-994c-643227b19932" containerID="50e8a6b6e1bfb66d3e242e20802d5ad89cf5c634881da24abc966aa9ddb5812a" exitCode=0
Mar 18 18:21:21.493665 master-0 kubenswrapper[30278]: I0318 18:21:21.492582 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qn2jb" event={"ID":"75582986-df2a-4948-994c-643227b19932","Type":"ContainerDied","Data":"50e8a6b6e1bfb66d3e242e20802d5ad89cf5c634881da24abc966aa9ddb5812a"}
Mar 18 18:21:22.996402 master-0 kubenswrapper[30278]: I0318 18:21:22.996336 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qn2jb"
Mar 18 18:21:23.134223 master-0 kubenswrapper[30278]: I0318 18:21:23.134138 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-config-data\") pod \"75582986-df2a-4948-994c-643227b19932\" (UID: \"75582986-df2a-4948-994c-643227b19932\") "
Mar 18 18:21:23.134636 master-0 kubenswrapper[30278]: I0318 18:21:23.134315 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-scripts\") pod \"75582986-df2a-4948-994c-643227b19932\" (UID: \"75582986-df2a-4948-994c-643227b19932\") "
Mar 18 18:21:23.134636 master-0 kubenswrapper[30278]: I0318 18:21:23.134371 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gfl9\" (UniqueName: \"kubernetes.io/projected/75582986-df2a-4948-994c-643227b19932-kube-api-access-5gfl9\") pod \"75582986-df2a-4948-994c-643227b19932\" (UID: \"75582986-df2a-4948-994c-643227b19932\") "
Mar 18 18:21:23.134636 master-0 kubenswrapper[30278]: I0318 18:21:23.134496 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-combined-ca-bundle\") pod \"75582986-df2a-4948-994c-643227b19932\" (UID: \"75582986-df2a-4948-994c-643227b19932\") "
Mar 18 18:21:23.139668 master-0 kubenswrapper[30278]: I0318 18:21:23.139598 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-scripts" (OuterVolumeSpecName: "scripts") pod "75582986-df2a-4948-994c-643227b19932" (UID: "75582986-df2a-4948-994c-643227b19932"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:21:23.155039 master-0 kubenswrapper[30278]: I0318 18:21:23.154773 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75582986-df2a-4948-994c-643227b19932-kube-api-access-5gfl9" (OuterVolumeSpecName: "kube-api-access-5gfl9") pod "75582986-df2a-4948-994c-643227b19932" (UID: "75582986-df2a-4948-994c-643227b19932"). InnerVolumeSpecName "kube-api-access-5gfl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:21:23.169085 master-0 kubenswrapper[30278]: I0318 18:21:23.169029 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75582986-df2a-4948-994c-643227b19932" (UID: "75582986-df2a-4948-994c-643227b19932"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:21:23.169868 master-0 kubenswrapper[30278]: I0318 18:21:23.169803 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-config-data" (OuterVolumeSpecName: "config-data") pod "75582986-df2a-4948-994c-643227b19932" (UID: "75582986-df2a-4948-994c-643227b19932"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:21:23.238067 master-0 kubenswrapper[30278]: I0318 18:21:23.237977 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 18:21:23.238067 master-0 kubenswrapper[30278]: I0318 18:21:23.238042 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 18:21:23.238067 master-0 kubenswrapper[30278]: I0318 18:21:23.238058 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gfl9\" (UniqueName: \"kubernetes.io/projected/75582986-df2a-4948-994c-643227b19932-kube-api-access-5gfl9\") on node \"master-0\" DevicePath \"\""
Mar 18 18:21:23.238067 master-0 kubenswrapper[30278]: I0318 18:21:23.238078 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75582986-df2a-4948-994c-643227b19932-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:21:23.525758 master-0 kubenswrapper[30278]: I0318 18:21:23.525578 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qn2jb" event={"ID":"75582986-df2a-4948-994c-643227b19932","Type":"ContainerDied","Data":"59c666cd348d0a9828fb0fda63c6eebaedd448ff1eeb7bd944b2ec0305eecb5c"}
Mar 18 18:21:23.525758 master-0 kubenswrapper[30278]: I0318 18:21:23.525662 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59c666cd348d0a9828fb0fda63c6eebaedd448ff1eeb7bd944b2ec0305eecb5c"
Mar 18 18:21:23.527265 master-0 kubenswrapper[30278]: I0318 18:21:23.527227 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qn2jb"
Mar 18 18:21:23.708514 master-0 kubenswrapper[30278]: I0318 18:21:23.708425 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 18 18:21:23.709240 master-0 kubenswrapper[30278]: E0318 18:21:23.709204 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75582986-df2a-4948-994c-643227b19932" containerName="nova-cell0-conductor-db-sync"
Mar 18 18:21:23.709240 master-0 kubenswrapper[30278]: I0318 18:21:23.709233 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="75582986-df2a-4948-994c-643227b19932" containerName="nova-cell0-conductor-db-sync"
Mar 18 18:21:23.709465 master-0 kubenswrapper[30278]: E0318 18:21:23.709269 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200c8f5b-bd48-4587-9a90-f2cba299bc43" containerName="dnsmasq-dns"
Mar 18 18:21:23.709465 master-0 kubenswrapper[30278]: I0318 18:21:23.709300 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="200c8f5b-bd48-4587-9a90-f2cba299bc43" containerName="dnsmasq-dns"
Mar 18 18:21:23.709465 master-0 kubenswrapper[30278]: E0318 18:21:23.709373 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200c8f5b-bd48-4587-9a90-f2cba299bc43" containerName="init"
Mar 18 18:21:23.709465 master-0 kubenswrapper[30278]: I0318 18:21:23.709384 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="200c8f5b-bd48-4587-9a90-f2cba299bc43" containerName="init"
Mar 18 18:21:23.709764 master-0 kubenswrapper[30278]: I0318 18:21:23.709678 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="200c8f5b-bd48-4587-9a90-f2cba299bc43" containerName="dnsmasq-dns"
Mar 18 18:21:23.709764 master-0 kubenswrapper[30278]: I0318 18:21:23.709725 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="75582986-df2a-4948-994c-643227b19932" containerName="nova-cell0-conductor-db-sync"
Mar 18 18:21:23.710841 master-0 kubenswrapper[30278]: I0318 18:21:23.710809 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 18 18:21:23.714025 master-0 kubenswrapper[30278]: I0318 18:21:23.713957 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 18 18:21:23.742252 master-0 kubenswrapper[30278]: I0318 18:21:23.742161 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 18 18:21:23.854308 master-0 kubenswrapper[30278]: I0318 18:21:23.853848 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lf4f\" (UniqueName: \"kubernetes.io/projected/651a0333-e27d-4274-8909-36174be8189f-kube-api-access-9lf4f\") pod \"nova-cell0-conductor-0\" (UID: \"651a0333-e27d-4274-8909-36174be8189f\") " pod="openstack/nova-cell0-conductor-0"
Mar 18 18:21:23.854308 master-0 kubenswrapper[30278]: I0318 18:21:23.853985 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/651a0333-e27d-4274-8909-36174be8189f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"651a0333-e27d-4274-8909-36174be8189f\") " pod="openstack/nova-cell0-conductor-0"
Mar 18 18:21:23.854308 master-0 kubenswrapper[30278]: I0318 18:21:23.854063 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/651a0333-e27d-4274-8909-36174be8189f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"651a0333-e27d-4274-8909-36174be8189f\") " pod="openstack/nova-cell0-conductor-0"
Mar 18 18:21:23.959982 master-0 kubenswrapper[30278]: I0318 18:21:23.958802 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/651a0333-e27d-4274-8909-36174be8189f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"651a0333-e27d-4274-8909-36174be8189f\") " pod="openstack/nova-cell0-conductor-0"
Mar 18 18:21:23.959982 master-0 kubenswrapper[30278]: I0318 18:21:23.958979 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lf4f\" (UniqueName: \"kubernetes.io/projected/651a0333-e27d-4274-8909-36174be8189f-kube-api-access-9lf4f\") pod \"nova-cell0-conductor-0\" (UID: \"651a0333-e27d-4274-8909-36174be8189f\") " pod="openstack/nova-cell0-conductor-0"
Mar 18 18:21:23.959982 master-0 kubenswrapper[30278]: I0318 18:21:23.959068 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/651a0333-e27d-4274-8909-36174be8189f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"651a0333-e27d-4274-8909-36174be8189f\") " pod="openstack/nova-cell0-conductor-0"
Mar 18 18:21:23.963444 master-0 kubenswrapper[30278]: I0318 18:21:23.963400 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/651a0333-e27d-4274-8909-36174be8189f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"651a0333-e27d-4274-8909-36174be8189f\") " pod="openstack/nova-cell0-conductor-0"
Mar 18 18:21:23.965753 master-0 kubenswrapper[30278]: I0318 18:21:23.965700 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/651a0333-e27d-4274-8909-36174be8189f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"651a0333-e27d-4274-8909-36174be8189f\") " pod="openstack/nova-cell0-conductor-0"
Mar 18 18:21:23.981385 master-0 kubenswrapper[30278]: I0318 18:21:23.981327 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lf4f\" (UniqueName: \"kubernetes.io/projected/651a0333-e27d-4274-8909-36174be8189f-kube-api-access-9lf4f\") pod \"nova-cell0-conductor-0\" (UID: \"651a0333-e27d-4274-8909-36174be8189f\") " pod="openstack/nova-cell0-conductor-0"
Mar 18 18:21:24.040030 master-0 kubenswrapper[30278]: I0318 18:21:24.039956 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 18 18:21:24.614154 master-0 kubenswrapper[30278]: W0318 18:21:24.614088 30278 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod651a0333_e27d_4274_8909_36174be8189f.slice/crio-fa0e045dccd8df9293c3921ee958df1e8b81e514b091eee7227474395f3d91ae WatchSource:0}: Error finding container fa0e045dccd8df9293c3921ee958df1e8b81e514b091eee7227474395f3d91ae: Status 404 returned error can't find the container with id fa0e045dccd8df9293c3921ee958df1e8b81e514b091eee7227474395f3d91ae
Mar 18 18:21:24.616719 master-0 kubenswrapper[30278]: I0318 18:21:24.616669 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 18 18:21:25.569182 master-0 kubenswrapper[30278]: I0318 18:21:25.569096 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"651a0333-e27d-4274-8909-36174be8189f","Type":"ContainerStarted","Data":"5f604dce4c0acfde9b7c16e70603852fef98bc66b03bbaee1e42ce3d1dc04ae3"}
Mar 18 18:21:25.569182 master-0 kubenswrapper[30278]: I0318 18:21:25.569183 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"651a0333-e27d-4274-8909-36174be8189f","Type":"ContainerStarted","Data":"fa0e045dccd8df9293c3921ee958df1e8b81e514b091eee7227474395f3d91ae"}
Mar 18 18:21:25.569982 master-0 kubenswrapper[30278]: I0318 18:21:25.569324 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Mar 18 18:21:25.605463 master-0 kubenswrapper[30278]: I0318 18:21:25.605326 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.605288337 podStartE2EDuration="2.605288337s" podCreationTimestamp="2026-03-18 18:21:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:21:25.588594228 +0000 UTC m=+1254.755778863" watchObservedRunningTime="2026-03-18 18:21:25.605288337 +0000 UTC m=+1254.772472942"
Mar 18 18:21:29.104523 master-0 kubenswrapper[30278]: I0318 18:21:29.104421 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Mar 18 18:21:29.813772 master-0 kubenswrapper[30278]: I0318 18:21:29.813688 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8vmhz"]
Mar 18 18:21:29.815433 master-0 kubenswrapper[30278]: I0318 18:21:29.815381 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8vmhz"
Mar 18 18:21:29.820405 master-0 kubenswrapper[30278]: I0318 18:21:29.820298 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Mar 18 18:21:29.820405 master-0 kubenswrapper[30278]: I0318 18:21:29.820365 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Mar 18 18:21:29.842308 master-0 kubenswrapper[30278]: I0318 18:21:29.842226 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8vmhz"]
Mar 18 18:21:29.987507 master-0 kubenswrapper[30278]: I0318 18:21:29.986194 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8vmhz\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " pod="openstack/nova-cell0-cell-mapping-8vmhz"
Mar 18 18:21:29.987507 master-0 kubenswrapper[30278]: I0318 18:21:29.986657 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7pg6\" (UniqueName: \"kubernetes.io/projected/a6c011e4-5cf2-4451-974d-e1032bc333a9-kube-api-access-t7pg6\") pod \"nova-cell0-cell-mapping-8vmhz\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " pod="openstack/nova-cell0-cell-mapping-8vmhz"
Mar 18 18:21:29.987507 master-0 kubenswrapper[30278]: I0318 18:21:29.986952 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-scripts\") pod \"nova-cell0-cell-mapping-8vmhz\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " pod="openstack/nova-cell0-cell-mapping-8vmhz"
Mar 18 18:21:29.987507 master-0 kubenswrapper[30278]: I0318 18:21:29.987022 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-config-data\") pod \"nova-cell0-cell-mapping-8vmhz\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " pod="openstack/nova-cell0-cell-mapping-8vmhz"
Mar 18 18:21:29.996087 master-0 kubenswrapper[30278]: I0318 18:21:29.995971 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Mar 18 18:21:30.032311 master-0 kubenswrapper[30278]: I0318 18:21:30.016404 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 18 18:21:30.032311 master-0 kubenswrapper[30278]: I0318 18:21:30.021453 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data"
Mar 18 18:21:30.084073 master-0 kubenswrapper[30278]: I0318 18:21:30.079924 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Mar 18 18:21:30.111794 master-0 kubenswrapper[30278]: I0318 18:21:30.105591 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7pg6\" (UniqueName: \"kubernetes.io/projected/a6c011e4-5cf2-4451-974d-e1032bc333a9-kube-api-access-t7pg6\") pod \"nova-cell0-cell-mapping-8vmhz\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " pod="openstack/nova-cell0-cell-mapping-8vmhz"
Mar 18 18:21:30.111794 master-0 kubenswrapper[30278]: I0318 18:21:30.105789 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-scripts\") pod \"nova-cell0-cell-mapping-8vmhz\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " pod="openstack/nova-cell0-cell-mapping-8vmhz"
Mar 18 18:21:30.111794 master-0 kubenswrapper[30278]: I0318 18:21:30.105827 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-config-data\") pod \"nova-cell0-cell-mapping-8vmhz\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " pod="openstack/nova-cell0-cell-mapping-8vmhz"
Mar 18 18:21:30.111794 master-0 kubenswrapper[30278]: I0318 18:21:30.105914 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8vmhz\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " pod="openstack/nova-cell0-cell-mapping-8vmhz"
Mar 18 18:21:30.132640 master-0 kubenswrapper[30278]: I0318 18:21:30.115296 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-scripts\") pod \"nova-cell0-cell-mapping-8vmhz\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " pod="openstack/nova-cell0-cell-mapping-8vmhz"
Mar 18 18:21:30.147300 master-0 kubenswrapper[30278]: I0318 18:21:30.137264 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8vmhz\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " pod="openstack/nova-cell0-cell-mapping-8vmhz"
Mar 18 18:21:30.199690 master-0 kubenswrapper[30278]: I0318 18:21:30.199637 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7pg6\" (UniqueName: \"kubernetes.io/projected/a6c011e4-5cf2-4451-974d-e1032bc333a9-kube-api-access-t7pg6\") pod \"nova-cell0-cell-mapping-8vmhz\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " pod="openstack/nova-cell0-cell-mapping-8vmhz"
Mar 18 18:21:30.200524 master-0 kubenswrapper[30278]: I0318 18:21:30.200476 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-config-data\") pod \"nova-cell0-cell-mapping-8vmhz\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " pod="openstack/nova-cell0-cell-mapping-8vmhz"
Mar 18 18:21:30.210486 master-0 kubenswrapper[30278]: I0318 18:21:30.209159 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5308f3c6-9e64-4187-b4f9-b8b0dc8c2874-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"5308f3c6-9e64-4187-b4f9-b8b0dc8c2874\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 18 18:21:30.210486 master-0 kubenswrapper[30278]: I0318 18:21:30.209250 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb54m\" (UniqueName: \"kubernetes.io/projected/5308f3c6-9e64-4187-b4f9-b8b0dc8c2874-kube-api-access-nb54m\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"5308f3c6-9e64-4187-b4f9-b8b0dc8c2874\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 18 18:21:30.210486 master-0 kubenswrapper[30278]: I0318 18:21:30.209354 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5308f3c6-9e64-4187-b4f9-b8b0dc8c2874-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"5308f3c6-9e64-4187-b4f9-b8b0dc8c2874\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 18 18:21:30.294224 master-0 kubenswrapper[30278]: I0318 18:21:30.290376 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 18 18:21:30.301294 master-0 kubenswrapper[30278]: I0318 18:21:30.298009 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 18 18:21:30.313300 master-0 kubenswrapper[30278]: I0318 18:21:30.310093 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 18 18:21:30.313300 master-0 kubenswrapper[30278]: I0318 18:21:30.311348 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5308f3c6-9e64-4187-b4f9-b8b0dc8c2874-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"5308f3c6-9e64-4187-b4f9-b8b0dc8c2874\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 18 18:21:30.313300 master-0 kubenswrapper[30278]: I0318 18:21:30.311453 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb54m\" (UniqueName: \"kubernetes.io/projected/5308f3c6-9e64-4187-b4f9-b8b0dc8c2874-kube-api-access-nb54m\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"5308f3c6-9e64-4187-b4f9-b8b0dc8c2874\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 18 18:21:30.313300 master-0 kubenswrapper[30278]: I0318 18:21:30.311512 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5308f3c6-9e64-4187-b4f9-b8b0dc8c2874-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"5308f3c6-9e64-4187-b4f9-b8b0dc8c2874\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 18 18:21:30.324302 master-0 kubenswrapper[30278]: I0318 18:21:30.321183 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5308f3c6-9e64-4187-b4f9-b8b0dc8c2874-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"5308f3c6-9e64-4187-b4f9-b8b0dc8c2874\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 18 18:21:30.324793 master-0 kubenswrapper[30278]: I0318 18:21:30.324603 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 18 18:21:30.331295 master-0 kubenswrapper[30278]: I0318 18:21:30.326423 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5308f3c6-9e64-4187-b4f9-b8b0dc8c2874-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"5308f3c6-9e64-4187-b4f9-b8b0dc8c2874\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 18 18:21:30.391386 master-0 kubenswrapper[30278]: I0318 18:21:30.389142 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb54m\" (UniqueName: \"kubernetes.io/projected/5308f3c6-9e64-4187-b4f9-b8b0dc8c2874-kube-api-access-nb54m\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"5308f3c6-9e64-4187-b4f9-b8b0dc8c2874\") " pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 18 18:21:30.431594 master-0 kubenswrapper[30278]: I0318 18:21:30.427710 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed10fd30-ed39-4cda-8252-8f4db21fbfca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " pod="openstack/nova-api-0"
Mar 18 18:21:30.431594 master-0 kubenswrapper[30278]: I0318 18:21:30.427982 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed10fd30-ed39-4cda-8252-8f4db21fbfca-config-data\") pod \"nova-api-0\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " pod="openstack/nova-api-0"
Mar 18 18:21:30.431594 master-0 kubenswrapper[30278]: I0318 18:21:30.428030 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46ctg\" (UniqueName: \"kubernetes.io/projected/ed10fd30-ed39-4cda-8252-8f4db21fbfca-kube-api-access-46ctg\") pod \"nova-api-0\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " pod="openstack/nova-api-0"
Mar 18 18:21:30.431594 master-0 kubenswrapper[30278]: I0318 18:21:30.428080 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed10fd30-ed39-4cda-8252-8f4db21fbfca-logs\") pod \"nova-api-0\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " pod="openstack/nova-api-0"
Mar 18 18:21:30.448312 master-0 kubenswrapper[30278]: I0318 18:21:30.442746 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 18 18:21:30.448312 master-0 kubenswrapper[30278]: I0318 18:21:30.445447 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 18 18:21:30.448670 master-0 kubenswrapper[30278]: I0318 18:21:30.448333 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8vmhz"
Mar 18 18:21:30.467298 master-0 kubenswrapper[30278]: I0318 18:21:30.460850 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 18 18:21:30.491293 master-0 kubenswrapper[30278]: I0318 18:21:30.490542 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 18:21:30.515298 master-0 kubenswrapper[30278]: I0318 18:21:30.493401 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 18 18:21:30.515298 master-0 kubenswrapper[30278]: I0318 18:21:30.504433 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 18 18:21:30.550300 master-0 kubenswrapper[30278]: I0318 18:21:30.532666 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0"
Mar 18 18:21:30.550300 master-0 kubenswrapper[30278]: I0318 18:21:30.535582 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/892489cb-419b-40b3-8e27-04302daea69c-config-data\") pod \"nova-scheduler-0\" (UID: \"892489cb-419b-40b3-8e27-04302daea69c\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:30.550300 master-0 kubenswrapper[30278]: I0318 18:21:30.535733 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83526059-628b-4d6e-aa9d-92e1e53765c8-config-data\") pod \"nova-metadata-0\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " pod="openstack/nova-metadata-0"
Mar 18 18:21:30.550300 master-0 kubenswrapper[30278]: I0318 18:21:30.535788 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed10fd30-ed39-4cda-8252-8f4db21fbfca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " pod="openstack/nova-api-0"
Mar 18 18:21:30.550300 master-0 kubenswrapper[30278]: I0318 18:21:30.535841 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83526059-628b-4d6e-aa9d-92e1e53765c8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " pod="openstack/nova-metadata-0"
Mar 18 18:21:30.550300 master-0 kubenswrapper[30278]: I0318 18:21:30.535870 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kst6b\" (UniqueName: \"kubernetes.io/projected/892489cb-419b-40b3-8e27-04302daea69c-kube-api-access-kst6b\") pod \"nova-scheduler-0\" (UID: \"892489cb-419b-40b3-8e27-04302daea69c\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:30.550300 master-0 kubenswrapper[30278]: I0318 18:21:30.535897 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892489cb-419b-40b3-8e27-04302daea69c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"892489cb-419b-40b3-8e27-04302daea69c\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:30.550300 master-0 kubenswrapper[30278]: I0318 18:21:30.535941 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83526059-628b-4d6e-aa9d-92e1e53765c8-logs\") pod \"nova-metadata-0\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " pod="openstack/nova-metadata-0"
Mar 18 18:21:30.550300 master-0 kubenswrapper[30278]: I0318 18:21:30.535980 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch64j\" (UniqueName: \"kubernetes.io/projected/83526059-628b-4d6e-aa9d-92e1e53765c8-kube-api-access-ch64j\") pod \"nova-metadata-0\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " pod="openstack/nova-metadata-0"
Mar 18 18:21:30.550300 master-0 kubenswrapper[30278]: I0318 18:21:30.536041 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed10fd30-ed39-4cda-8252-8f4db21fbfca-config-data\") pod \"nova-api-0\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " pod="openstack/nova-api-0"
Mar 18 18:21:30.550300 master-0 kubenswrapper[30278]: I0318 18:21:30.536095 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46ctg\" (UniqueName: \"kubernetes.io/projected/ed10fd30-ed39-4cda-8252-8f4db21fbfca-kube-api-access-46ctg\") pod \"nova-api-0\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " pod="openstack/nova-api-0"
Mar 18 18:21:30.550300 master-0 kubenswrapper[30278]: I0318 18:21:30.536129 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed10fd30-ed39-4cda-8252-8f4db21fbfca-logs\") pod \"nova-api-0\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " pod="openstack/nova-api-0"
Mar 18 18:21:30.550300 master-0 kubenswrapper[30278]: I0318 18:21:30.537965 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed10fd30-ed39-4cda-8252-8f4db21fbfca-logs\") pod \"nova-api-0\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " pod="openstack/nova-api-0"
Mar 18 18:21:30.550300 master-0 kubenswrapper[30278]: I0318 18:21:30.544634 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 18 18:21:30.571612 master-0 kubenswrapper[30278]: I0318 18:21:30.564361 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed10fd30-ed39-4cda-8252-8f4db21fbfca-config-data\") pod \"nova-api-0\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " pod="openstack/nova-api-0"
Mar 18 18:21:30.571612 master-0 kubenswrapper[30278]: I0318 18:21:30.567429 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed10fd30-ed39-4cda-8252-8f4db21fbfca-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " pod="openstack/nova-api-0"
Mar 18 18:21:30.571612 master-0 kubenswrapper[30278]: I0318 18:21:30.568420 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46ctg\" (UniqueName: \"kubernetes.io/projected/ed10fd30-ed39-4cda-8252-8f4db21fbfca-kube-api-access-46ctg\") pod \"nova-api-0\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " pod="openstack/nova-api-0"
Mar 18 18:21:30.600541 master-0 kubenswrapper[30278]: I0318 18:21:30.600319 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 18:21:30.646648 master-0 kubenswrapper[30278]: I0318 18:21:30.638529 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/892489cb-419b-40b3-8e27-04302daea69c-config-data\") pod \"nova-scheduler-0\" (UID: \"892489cb-419b-40b3-8e27-04302daea69c\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:30.646648 master-0 kubenswrapper[30278]: I0318 18:21:30.638623 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83526059-628b-4d6e-aa9d-92e1e53765c8-config-data\") pod \"nova-metadata-0\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " pod="openstack/nova-metadata-0"
Mar 18 18:21:30.646648 master-0 kubenswrapper[30278]: I0318 18:21:30.638698 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83526059-628b-4d6e-aa9d-92e1e53765c8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " pod="openstack/nova-metadata-0"
Mar 18 18:21:30.646648 master-0 kubenswrapper[30278]: I0318 18:21:30.642267 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kst6b\" (UniqueName: \"kubernetes.io/projected/892489cb-419b-40b3-8e27-04302daea69c-kube-api-access-kst6b\") pod \"nova-scheduler-0\" (UID: \"892489cb-419b-40b3-8e27-04302daea69c\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:30.646648 master-0 kubenswrapper[30278]: I0318 18:21:30.642446 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892489cb-419b-40b3-8e27-04302daea69c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"892489cb-419b-40b3-8e27-04302daea69c\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:30.646648 master-0 kubenswrapper[30278]: I0318 18:21:30.643057 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83526059-628b-4d6e-aa9d-92e1e53765c8-logs\") pod \"nova-metadata-0\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " pod="openstack/nova-metadata-0"
Mar 18 18:21:30.646648 master-0 kubenswrapper[30278]: I0318 18:21:30.643212 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch64j\" (UniqueName: \"kubernetes.io/projected/83526059-628b-4d6e-aa9d-92e1e53765c8-kube-api-access-ch64j\") pod \"nova-metadata-0\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " pod="openstack/nova-metadata-0"
Mar 18 18:21:30.646648 master-0 kubenswrapper[30278]: I0318 18:21:30.644659 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83526059-628b-4d6e-aa9d-92e1e53765c8-logs\") pod \"nova-metadata-0\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " pod="openstack/nova-metadata-0"
Mar 18 18:21:30.650416 master-0 kubenswrapper[30278]: I0318 18:21:30.647498 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/892489cb-419b-40b3-8e27-04302daea69c-config-data\") pod \"nova-scheduler-0\" (UID: \"892489cb-419b-40b3-8e27-04302daea69c\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:30.650416 master-0 kubenswrapper[30278]: I0318 18:21:30.648930 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83526059-628b-4d6e-aa9d-92e1e53765c8-config-data\") pod \"nova-metadata-0\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " pod="openstack/nova-metadata-0"
Mar 18 18:21:30.650780 master-0 kubenswrapper[30278]: I0318 18:21:30.650640 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892489cb-419b-40b3-8e27-04302daea69c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"892489cb-419b-40b3-8e27-04302daea69c\") " pod="openstack/nova-scheduler-0" Mar 18 18:21:30.652904 master-0 kubenswrapper[30278]: I0318 18:21:30.652751 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83526059-628b-4d6e-aa9d-92e1e53765c8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " pod="openstack/nova-metadata-0" Mar 18 18:21:30.654088 master-0 kubenswrapper[30278]: I0318 18:21:30.653870 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 18:21:30.660116 master-0 kubenswrapper[30278]: I0318 18:21:30.660049 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:21:30.665417 master-0 kubenswrapper[30278]: I0318 18:21:30.664051 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 18 18:21:30.665417 master-0 kubenswrapper[30278]: I0318 18:21:30.664371 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch64j\" (UniqueName: \"kubernetes.io/projected/83526059-628b-4d6e-aa9d-92e1e53765c8-kube-api-access-ch64j\") pod \"nova-metadata-0\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " pod="openstack/nova-metadata-0" Mar 18 18:21:30.665417 master-0 kubenswrapper[30278]: I0318 18:21:30.664843 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kst6b\" (UniqueName: \"kubernetes.io/projected/892489cb-419b-40b3-8e27-04302daea69c-kube-api-access-kst6b\") pod \"nova-scheduler-0\" (UID: \"892489cb-419b-40b3-8e27-04302daea69c\") " pod="openstack/nova-scheduler-0" Mar 18 18:21:30.690419 master-0 kubenswrapper[30278]: I0318 
18:21:30.690340 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578c6dc45c-dwjps"] Mar 18 18:21:30.692871 master-0 kubenswrapper[30278]: I0318 18:21:30.692824 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.707405 master-0 kubenswrapper[30278]: I0318 18:21:30.707336 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 18:21:30.739633 master-0 kubenswrapper[30278]: I0318 18:21:30.739564 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578c6dc45c-dwjps"] Mar 18 18:21:30.749470 master-0 kubenswrapper[30278]: I0318 18:21:30.748199 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-ovsdbserver-sb\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.749470 master-0 kubenswrapper[30278]: I0318 18:21:30.748416 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffjbv\" (UniqueName: \"kubernetes.io/projected/dd6f7934-153f-4a68-98f4-4d3c1a576e33-kube-api-access-ffjbv\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.749470 master-0 kubenswrapper[30278]: I0318 18:21:30.748489 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-config\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.749470 master-0 kubenswrapper[30278]: I0318 
18:21:30.748526 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pg2z\" (UniqueName: \"kubernetes.io/projected/73e9d791-73fa-47f6-bf4e-01119900b9d9-kube-api-access-5pg2z\") pod \"nova-cell1-novncproxy-0\" (UID: \"73e9d791-73fa-47f6-bf4e-01119900b9d9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:21:30.749470 master-0 kubenswrapper[30278]: I0318 18:21:30.748557 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-dns-swift-storage-0\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.749470 master-0 kubenswrapper[30278]: I0318 18:21:30.748643 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-ovsdbserver-nb\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.749470 master-0 kubenswrapper[30278]: I0318 18:21:30.748681 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-dns-svc\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.749470 master-0 kubenswrapper[30278]: I0318 18:21:30.748745 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e9d791-73fa-47f6-bf4e-01119900b9d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"73e9d791-73fa-47f6-bf4e-01119900b9d9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:21:30.749470 master-0 kubenswrapper[30278]: I0318 18:21:30.748787 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e9d791-73fa-47f6-bf4e-01119900b9d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"73e9d791-73fa-47f6-bf4e-01119900b9d9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:21:30.839469 master-0 kubenswrapper[30278]: I0318 18:21:30.832263 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 18 18:21:30.851810 master-0 kubenswrapper[30278]: I0318 18:21:30.851643 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e9d791-73fa-47f6-bf4e-01119900b9d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"73e9d791-73fa-47f6-bf4e-01119900b9d9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:21:30.851810 master-0 kubenswrapper[30278]: I0318 18:21:30.851746 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-ovsdbserver-sb\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.852222 master-0 kubenswrapper[30278]: I0318 18:21:30.851837 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffjbv\" (UniqueName: \"kubernetes.io/projected/dd6f7934-153f-4a68-98f4-4d3c1a576e33-kube-api-access-ffjbv\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.852222 master-0 kubenswrapper[30278]: I0318 18:21:30.851874 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-config\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.852222 master-0 kubenswrapper[30278]: I0318 18:21:30.851898 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pg2z\" (UniqueName: \"kubernetes.io/projected/73e9d791-73fa-47f6-bf4e-01119900b9d9-kube-api-access-5pg2z\") pod \"nova-cell1-novncproxy-0\" (UID: \"73e9d791-73fa-47f6-bf4e-01119900b9d9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:21:30.852222 master-0 kubenswrapper[30278]: I0318 18:21:30.851921 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-dns-swift-storage-0\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.852222 master-0 kubenswrapper[30278]: I0318 18:21:30.851974 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-ovsdbserver-nb\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.852222 master-0 kubenswrapper[30278]: I0318 18:21:30.852005 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-dns-svc\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.852222 master-0 kubenswrapper[30278]: I0318 18:21:30.852047 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e9d791-73fa-47f6-bf4e-01119900b9d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"73e9d791-73fa-47f6-bf4e-01119900b9d9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:21:30.853664 master-0 kubenswrapper[30278]: I0318 18:21:30.852913 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 18:21:30.854978 master-0 kubenswrapper[30278]: I0318 18:21:30.854941 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-ovsdbserver-nb\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.856737 master-0 kubenswrapper[30278]: I0318 18:21:30.855587 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-config\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.856737 master-0 kubenswrapper[30278]: I0318 18:21:30.855956 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-ovsdbserver-sb\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.856737 master-0 kubenswrapper[30278]: I0318 18:21:30.856025 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-dns-svc\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " 
pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.856737 master-0 kubenswrapper[30278]: I0318 18:21:30.856164 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-dns-swift-storage-0\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:30.858994 master-0 kubenswrapper[30278]: I0318 18:21:30.858609 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e9d791-73fa-47f6-bf4e-01119900b9d9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"73e9d791-73fa-47f6-bf4e-01119900b9d9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:21:30.865899 master-0 kubenswrapper[30278]: I0318 18:21:30.861971 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e9d791-73fa-47f6-bf4e-01119900b9d9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"73e9d791-73fa-47f6-bf4e-01119900b9d9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:21:30.871204 master-0 kubenswrapper[30278]: I0318 18:21:30.871135 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 18:21:30.875647 master-0 kubenswrapper[30278]: I0318 18:21:30.875577 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pg2z\" (UniqueName: \"kubernetes.io/projected/73e9d791-73fa-47f6-bf4e-01119900b9d9-kube-api-access-5pg2z\") pod \"nova-cell1-novncproxy-0\" (UID: \"73e9d791-73fa-47f6-bf4e-01119900b9d9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:21:30.889815 master-0 kubenswrapper[30278]: I0318 18:21:30.889307 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffjbv\" (UniqueName: \"kubernetes.io/projected/dd6f7934-153f-4a68-98f4-4d3c1a576e33-kube-api-access-ffjbv\") pod \"dnsmasq-dns-578c6dc45c-dwjps\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:31.004290 master-0 kubenswrapper[30278]: I0318 18:21:31.004185 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:21:31.030716 master-0 kubenswrapper[30278]: I0318 18:21:31.028171 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:31.198960 master-0 kubenswrapper[30278]: I0318 18:21:31.198222 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8vmhz"] Mar 18 18:21:31.430449 master-0 kubenswrapper[30278]: I0318 18:21:31.430388 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tv9n9"] Mar 18 18:21:31.433022 master-0 kubenswrapper[30278]: I0318 18:21:31.432987 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:31.446351 master-0 kubenswrapper[30278]: I0318 18:21:31.443506 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 18 18:21:31.446351 master-0 kubenswrapper[30278]: I0318 18:21:31.443746 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 18 18:21:31.471191 master-0 kubenswrapper[30278]: I0318 18:21:31.471055 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Mar 18 18:21:31.703491 master-0 kubenswrapper[30278]: I0318 18:21:31.702826 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tv9n9"] Mar 18 18:21:31.734185 master-0 kubenswrapper[30278]: I0318 18:21:31.733164 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-config-data\") pod \"nova-cell1-conductor-db-sync-tv9n9\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:31.734185 master-0 kubenswrapper[30278]: I0318 18:21:31.733404 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mndl4\" (UniqueName: \"kubernetes.io/projected/564cb488-caa7-49c0-b12a-133aa721085c-kube-api-access-mndl4\") pod \"nova-cell1-conductor-db-sync-tv9n9\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:31.734185 master-0 kubenswrapper[30278]: I0318 18:21:31.733429 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-combined-ca-bundle\") pod 
\"nova-cell1-conductor-db-sync-tv9n9\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:31.734185 master-0 kubenswrapper[30278]: I0318 18:21:31.733616 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-scripts\") pod \"nova-cell1-conductor-db-sync-tv9n9\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:31.818825 master-0 kubenswrapper[30278]: I0318 18:21:31.816020 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8vmhz" event={"ID":"a6c011e4-5cf2-4451-974d-e1032bc333a9","Type":"ContainerStarted","Data":"04bf60f6e80d8fb0a8a7e4ed88b2af499ea6a8c1ce87b2e006cee606cf157d99"} Mar 18 18:21:31.818825 master-0 kubenswrapper[30278]: I0318 18:21:31.816111 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8vmhz" event={"ID":"a6c011e4-5cf2-4451-974d-e1032bc333a9","Type":"ContainerStarted","Data":"9517abc62588050abc4e748b6520113454af0f434f6b295f09cb9229203d58e6"} Mar 18 18:21:31.824852 master-0 kubenswrapper[30278]: I0318 18:21:31.823838 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"5308f3c6-9e64-4187-b4f9-b8b0dc8c2874","Type":"ContainerStarted","Data":"b46ce0bb59499869d5a4ea22cf25a54dfbe32b7f02443638f508d2994279c95c"} Mar 18 18:21:31.836098 master-0 kubenswrapper[30278]: I0318 18:21:31.834821 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ed10fd30-ed39-4cda-8252-8f4db21fbfca","Type":"ContainerStarted","Data":"69a264af6dad401e369317628ba9c91aa71cb7c30ddc57c214e488e4c92ed0f5"} Mar 18 18:21:31.842117 master-0 kubenswrapper[30278]: I0318 18:21:31.839894 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mndl4\" (UniqueName: \"kubernetes.io/projected/564cb488-caa7-49c0-b12a-133aa721085c-kube-api-access-mndl4\") pod \"nova-cell1-conductor-db-sync-tv9n9\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:31.842117 master-0 kubenswrapper[30278]: I0318 18:21:31.840002 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tv9n9\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:31.842117 master-0 kubenswrapper[30278]: I0318 18:21:31.840162 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-scripts\") pod \"nova-cell1-conductor-db-sync-tv9n9\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:31.842117 master-0 kubenswrapper[30278]: I0318 18:21:31.840498 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-config-data\") pod \"nova-cell1-conductor-db-sync-tv9n9\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:31.857284 master-0 kubenswrapper[30278]: I0318 18:21:31.856772 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tv9n9\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:31.857374 master-0 
kubenswrapper[30278]: I0318 18:21:31.857348 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-scripts\") pod \"nova-cell1-conductor-db-sync-tv9n9\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:31.867067 master-0 kubenswrapper[30278]: I0318 18:21:31.866905 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-config-data\") pod \"nova-cell1-conductor-db-sync-tv9n9\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:31.896224 master-0 kubenswrapper[30278]: I0318 18:21:31.895207 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mndl4\" (UniqueName: \"kubernetes.io/projected/564cb488-caa7-49c0-b12a-133aa721085c-kube-api-access-mndl4\") pod \"nova-cell1-conductor-db-sync-tv9n9\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:31.927939 master-0 kubenswrapper[30278]: I0318 18:21:31.925108 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 18 18:21:31.937225 master-0 kubenswrapper[30278]: I0318 18:21:31.937013 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 18:21:31.952285 master-0 kubenswrapper[30278]: I0318 18:21:31.951814 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8vmhz" podStartSLOduration=2.951780598 podStartE2EDuration="2.951780598s" podCreationTimestamp="2026-03-18 18:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:21:31.891900705 +0000 UTC 
m=+1261.059085300" watchObservedRunningTime="2026-03-18 18:21:31.951780598 +0000 UTC m=+1261.118965193" Mar 18 18:21:32.125531 master-0 kubenswrapper[30278]: I0318 18:21:32.125362 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:32.154616 master-0 kubenswrapper[30278]: I0318 18:21:32.153964 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 18:21:32.214806 master-0 kubenswrapper[30278]: I0318 18:21:32.214716 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578c6dc45c-dwjps"] Mar 18 18:21:32.428530 master-0 kubenswrapper[30278]: I0318 18:21:32.427912 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 18:21:32.891324 master-0 kubenswrapper[30278]: I0318 18:21:32.888254 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"892489cb-419b-40b3-8e27-04302daea69c","Type":"ContainerStarted","Data":"eebc42ba4c80cb3c6a1ee9ae7648c3a31cd8407b1f75078816eb41c07a3efccb"} Mar 18 18:21:32.891620 master-0 kubenswrapper[30278]: I0318 18:21:32.891453 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"73e9d791-73fa-47f6-bf4e-01119900b9d9","Type":"ContainerStarted","Data":"fcfc5f736531b3d522f5618892bc14520cd3843dfe78dc427978d0660f1d4333"} Mar 18 18:21:32.926350 master-0 kubenswrapper[30278]: I0318 18:21:32.924184 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"83526059-628b-4d6e-aa9d-92e1e53765c8","Type":"ContainerStarted","Data":"97547f21356d68656a37d868d826d9b6357d88ea38c547aa5db4a1b642affb76"} Mar 18 18:21:32.936450 master-0 kubenswrapper[30278]: I0318 18:21:32.934928 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tv9n9"] Mar 18 18:21:32.945624 master-0 
kubenswrapper[30278]: I0318 18:21:32.945540 30278 generic.go:334] "Generic (PLEG): container finished" podID="dd6f7934-153f-4a68-98f4-4d3c1a576e33" containerID="476ab37f37845aa6a59ab81e0762a1d85bcf8c008ec3f6a78a03c47cd86b9565" exitCode=0 Mar 18 18:21:32.945765 master-0 kubenswrapper[30278]: I0318 18:21:32.945714 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" event={"ID":"dd6f7934-153f-4a68-98f4-4d3c1a576e33","Type":"ContainerDied","Data":"476ab37f37845aa6a59ab81e0762a1d85bcf8c008ec3f6a78a03c47cd86b9565"} Mar 18 18:21:32.945815 master-0 kubenswrapper[30278]: I0318 18:21:32.945769 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" event={"ID":"dd6f7934-153f-4a68-98f4-4d3c1a576e33","Type":"ContainerStarted","Data":"e18cc174c6bb27648deb63083440a16f5d1bf0e705e34ed030f4fc9b8130cd30"} Mar 18 18:21:33.981868 master-0 kubenswrapper[30278]: I0318 18:21:33.981789 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tv9n9" event={"ID":"564cb488-caa7-49c0-b12a-133aa721085c","Type":"ContainerStarted","Data":"7b3625c3106e4874258c9fcfb2a5cc4a4e04fbb266cd1c80dfa2896440ae6d8e"} Mar 18 18:21:33.981868 master-0 kubenswrapper[30278]: I0318 18:21:33.981860 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tv9n9" event={"ID":"564cb488-caa7-49c0-b12a-133aa721085c","Type":"ContainerStarted","Data":"194944cec8d70ecef450d22eedc1dff1e4e3f03eb9d3b6be1a9232a4c665d71b"} Mar 18 18:21:34.040576 master-0 kubenswrapper[30278]: I0318 18:21:34.040406 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-tv9n9" podStartSLOduration=3.040381453 podStartE2EDuration="3.040381453s" podCreationTimestamp="2026-03-18 18:21:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-18 18:21:34.026932191 +0000 UTC m=+1263.194116786" watchObservedRunningTime="2026-03-18 18:21:34.040381453 +0000 UTC m=+1263.207566048" Mar 18 18:21:34.045670 master-0 kubenswrapper[30278]: I0318 18:21:34.045606 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" event={"ID":"dd6f7934-153f-4a68-98f4-4d3c1a576e33","Type":"ContainerStarted","Data":"9578b97c9f0d50f9a662066e01afaa7196cdc67073fc13808f7008b49f9cac2e"} Mar 18 18:21:34.046739 master-0 kubenswrapper[30278]: I0318 18:21:34.046710 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:34.128433 master-0 kubenswrapper[30278]: I0318 18:21:34.127953 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" podStartSLOduration=4.127929601 podStartE2EDuration="4.127929601s" podCreationTimestamp="2026-03-18 18:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:21:34.077436701 +0000 UTC m=+1263.244621316" watchObservedRunningTime="2026-03-18 18:21:34.127929601 +0000 UTC m=+1263.295114196" Mar 18 18:21:34.834548 master-0 kubenswrapper[30278]: I0318 18:21:34.834127 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 18:21:34.852010 master-0 kubenswrapper[30278]: I0318 18:21:34.851927 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 18:21:38.139664 master-0 kubenswrapper[30278]: I0318 18:21:38.139590 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ed10fd30-ed39-4cda-8252-8f4db21fbfca","Type":"ContainerStarted","Data":"21cc803c0cc2cc33976caf7d87c2df977085d7b39c27e4d7d0ce8dfe84c57b66"} Mar 18 18:21:38.140167 master-0 kubenswrapper[30278]: I0318 18:21:38.139688 30278 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ed10fd30-ed39-4cda-8252-8f4db21fbfca","Type":"ContainerStarted","Data":"a8de5e0b50e2e23ba53aadd31f170f4aa1f4c1a46484ddc6fb05b0a83a3b16b0"} Mar 18 18:21:38.152849 master-0 kubenswrapper[30278]: I0318 18:21:38.152533 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"73e9d791-73fa-47f6-bf4e-01119900b9d9","Type":"ContainerStarted","Data":"dfa1777826c02e3dfba180af6230ee779439df18da73b9f81281de1916d97082"} Mar 18 18:21:38.153082 master-0 kubenswrapper[30278]: I0318 18:21:38.152902 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="73e9d791-73fa-47f6-bf4e-01119900b9d9" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://dfa1777826c02e3dfba180af6230ee779439df18da73b9f81281de1916d97082" gracePeriod=30 Mar 18 18:21:38.175117 master-0 kubenswrapper[30278]: I0318 18:21:38.175035 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"83526059-628b-4d6e-aa9d-92e1e53765c8","Type":"ContainerStarted","Data":"201742db36acf735aff4586ae556e49b54693a7a5e9c760626351860047a0eb8"} Mar 18 18:21:38.175710 master-0 kubenswrapper[30278]: I0318 18:21:38.175679 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="83526059-628b-4d6e-aa9d-92e1e53765c8" containerName="nova-metadata-log" containerID="cri-o://201742db36acf735aff4586ae556e49b54693a7a5e9c760626351860047a0eb8" gracePeriod=30 Mar 18 18:21:38.176220 master-0 kubenswrapper[30278]: I0318 18:21:38.176139 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="83526059-628b-4d6e-aa9d-92e1e53765c8" containerName="nova-metadata-metadata" containerID="cri-o://1e1c9e26d09fd8c2fae47dae916d281103a81975a80ab8d5a50a1317d996a367" gracePeriod=30 Mar 
18 18:21:38.190362 master-0 kubenswrapper[30278]: I0318 18:21:38.190264 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"892489cb-419b-40b3-8e27-04302daea69c","Type":"ContainerStarted","Data":"18eaf3f1fd23baa4d1c58e61f474399c9e76a553dff9f9bd5fb4ccbeadab2200"} Mar 18 18:21:38.190966 master-0 kubenswrapper[30278]: I0318 18:21:38.190751 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.6016129709999998 podStartE2EDuration="8.190722192s" podCreationTimestamp="2026-03-18 18:21:30 +0000 UTC" firstStartedPulling="2026-03-18 18:21:31.635657043 +0000 UTC m=+1260.802841638" lastFinishedPulling="2026-03-18 18:21:37.224766244 +0000 UTC m=+1266.391950859" observedRunningTime="2026-03-18 18:21:38.176874629 +0000 UTC m=+1267.344059234" watchObservedRunningTime="2026-03-18 18:21:38.190722192 +0000 UTC m=+1267.357906787" Mar 18 18:21:38.213888 master-0 kubenswrapper[30278]: I0318 18:21:38.211158 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.391526908 podStartE2EDuration="8.211120371s" podCreationTimestamp="2026-03-18 18:21:30 +0000 UTC" firstStartedPulling="2026-03-18 18:21:32.44860131 +0000 UTC m=+1261.615785905" lastFinishedPulling="2026-03-18 18:21:37.268194753 +0000 UTC m=+1266.435379368" observedRunningTime="2026-03-18 18:21:38.200889375 +0000 UTC m=+1267.368073970" watchObservedRunningTime="2026-03-18 18:21:38.211120371 +0000 UTC m=+1267.378304966" Mar 18 18:21:38.266830 master-0 kubenswrapper[30278]: I0318 18:21:38.266442 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.178414887 podStartE2EDuration="8.26640801s" podCreationTimestamp="2026-03-18 18:21:30 +0000 UTC" firstStartedPulling="2026-03-18 18:21:32.144013285 +0000 UTC m=+1261.311197880" lastFinishedPulling="2026-03-18 
18:21:37.232006398 +0000 UTC m=+1266.399191003" observedRunningTime="2026-03-18 18:21:38.219907127 +0000 UTC m=+1267.387091732" watchObservedRunningTime="2026-03-18 18:21:38.26640801 +0000 UTC m=+1267.433592605" Mar 18 18:21:38.285116 master-0 kubenswrapper[30278]: I0318 18:21:38.284681 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.9692162509999998 podStartE2EDuration="8.284649901s" podCreationTimestamp="2026-03-18 18:21:30 +0000 UTC" firstStartedPulling="2026-03-18 18:21:31.909359795 +0000 UTC m=+1261.076544390" lastFinishedPulling="2026-03-18 18:21:37.224793445 +0000 UTC m=+1266.391978040" observedRunningTime="2026-03-18 18:21:38.248184299 +0000 UTC m=+1267.415368894" watchObservedRunningTime="2026-03-18 18:21:38.284649901 +0000 UTC m=+1267.451834496" Mar 18 18:21:39.232306 master-0 kubenswrapper[30278]: I0318 18:21:39.231314 30278 generic.go:334] "Generic (PLEG): container finished" podID="83526059-628b-4d6e-aa9d-92e1e53765c8" containerID="201742db36acf735aff4586ae556e49b54693a7a5e9c760626351860047a0eb8" exitCode=143 Mar 18 18:21:39.232995 master-0 kubenswrapper[30278]: I0318 18:21:39.232757 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"83526059-628b-4d6e-aa9d-92e1e53765c8","Type":"ContainerDied","Data":"201742db36acf735aff4586ae556e49b54693a7a5e9c760626351860047a0eb8"} Mar 18 18:21:39.232995 master-0 kubenswrapper[30278]: I0318 18:21:39.232794 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"83526059-628b-4d6e-aa9d-92e1e53765c8","Type":"ContainerStarted","Data":"1e1c9e26d09fd8c2fae47dae916d281103a81975a80ab8d5a50a1317d996a367"} Mar 18 18:21:40.833913 master-0 kubenswrapper[30278]: I0318 18:21:40.833802 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 18 18:21:40.834777 master-0 kubenswrapper[30278]: I0318 
18:21:40.833942 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 18 18:21:40.854038 master-0 kubenswrapper[30278]: I0318 18:21:40.853954 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 18 18:21:40.854038 master-0 kubenswrapper[30278]: I0318 18:21:40.854032 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 18 18:21:40.909140 master-0 kubenswrapper[30278]: I0318 18:21:40.909029 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 18 18:21:41.004988 master-0 kubenswrapper[30278]: I0318 18:21:41.004921 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:21:41.031806 master-0 kubenswrapper[30278]: I0318 18:21:41.031687 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:21:41.256458 master-0 kubenswrapper[30278]: I0318 18:21:41.254447 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c5fb6894c-9vqrx"] Mar 18 18:21:41.256458 master-0 kubenswrapper[30278]: I0318 18:21:41.254832 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" podUID="1b45f237-457c-45a6-9ea2-2f2ca11ec44e" containerName="dnsmasq-dns" containerID="cri-o://86608609da1e1c313d7f15f38986a33ddcbc855ab390c824172a198e5b6752ba" gracePeriod=10 Mar 18 18:21:41.356329 master-0 kubenswrapper[30278]: I0318 18:21:41.353949 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 18 18:21:41.918429 master-0 kubenswrapper[30278]: I0318 18:21:41.918348 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ed10fd30-ed39-4cda-8252-8f4db21fbfca" 
containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 18:21:41.919237 master-0 kubenswrapper[30278]: I0318 18:21:41.918527 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ed10fd30-ed39-4cda-8252-8f4db21fbfca" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 18:21:42.107705 master-0 kubenswrapper[30278]: I0318 18:21:42.107642 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:21:42.219461 master-0 kubenswrapper[30278]: I0318 18:21:42.219296 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-dns-swift-storage-0\") pod \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " Mar 18 18:21:42.219461 master-0 kubenswrapper[30278]: I0318 18:21:42.219362 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-ovsdbserver-nb\") pod \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " Mar 18 18:21:42.219461 master-0 kubenswrapper[30278]: I0318 18:21:42.219459 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-ovsdbserver-sb\") pod \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " Mar 18 18:21:42.219786 master-0 kubenswrapper[30278]: I0318 18:21:42.219604 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-rdm2b\" (UniqueName: \"kubernetes.io/projected/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-kube-api-access-rdm2b\") pod \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " Mar 18 18:21:42.219786 master-0 kubenswrapper[30278]: I0318 18:21:42.219744 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-dns-svc\") pod \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " Mar 18 18:21:42.219860 master-0 kubenswrapper[30278]: I0318 18:21:42.219811 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-config\") pod \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " Mar 18 18:21:42.240417 master-0 kubenswrapper[30278]: I0318 18:21:42.240322 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-kube-api-access-rdm2b" (OuterVolumeSpecName: "kube-api-access-rdm2b") pod "1b45f237-457c-45a6-9ea2-2f2ca11ec44e" (UID: "1b45f237-457c-45a6-9ea2-2f2ca11ec44e"). InnerVolumeSpecName "kube-api-access-rdm2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:21:42.305208 master-0 kubenswrapper[30278]: I0318 18:21:42.305131 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1b45f237-457c-45a6-9ea2-2f2ca11ec44e" (UID: "1b45f237-457c-45a6-9ea2-2f2ca11ec44e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:21:42.326987 master-0 kubenswrapper[30278]: I0318 18:21:42.326908 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-config" (OuterVolumeSpecName: "config") pod "1b45f237-457c-45a6-9ea2-2f2ca11ec44e" (UID: "1b45f237-457c-45a6-9ea2-2f2ca11ec44e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:21:42.335986 master-0 kubenswrapper[30278]: I0318 18:21:42.335928 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-config\") pod \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\" (UID: \"1b45f237-457c-45a6-9ea2-2f2ca11ec44e\") " Mar 18 18:21:42.336396 master-0 kubenswrapper[30278]: W0318 18:21:42.336326 30278 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1b45f237-457c-45a6-9ea2-2f2ca11ec44e/volumes/kubernetes.io~configmap/config Mar 18 18:21:42.336396 master-0 kubenswrapper[30278]: I0318 18:21:42.336369 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-config" (OuterVolumeSpecName: "config") pod "1b45f237-457c-45a6-9ea2-2f2ca11ec44e" (UID: "1b45f237-457c-45a6-9ea2-2f2ca11ec44e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:21:42.337131 master-0 kubenswrapper[30278]: I0318 18:21:42.337079 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:42.337131 master-0 kubenswrapper[30278]: I0318 18:21:42.337100 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:42.337131 master-0 kubenswrapper[30278]: I0318 18:21:42.337130 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdm2b\" (UniqueName: \"kubernetes.io/projected/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-kube-api-access-rdm2b\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:42.337776 master-0 kubenswrapper[30278]: I0318 18:21:42.337750 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1b45f237-457c-45a6-9ea2-2f2ca11ec44e" (UID: "1b45f237-457c-45a6-9ea2-2f2ca11ec44e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:21:42.341759 master-0 kubenswrapper[30278]: I0318 18:21:42.341717 30278 generic.go:334] "Generic (PLEG): container finished" podID="1b45f237-457c-45a6-9ea2-2f2ca11ec44e" containerID="86608609da1e1c313d7f15f38986a33ddcbc855ab390c824172a198e5b6752ba" exitCode=0 Mar 18 18:21:42.342126 master-0 kubenswrapper[30278]: I0318 18:21:42.341789 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" event={"ID":"1b45f237-457c-45a6-9ea2-2f2ca11ec44e","Type":"ContainerDied","Data":"86608609da1e1c313d7f15f38986a33ddcbc855ab390c824172a198e5b6752ba"} Mar 18 18:21:42.342126 master-0 kubenswrapper[30278]: I0318 18:21:42.341821 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" event={"ID":"1b45f237-457c-45a6-9ea2-2f2ca11ec44e","Type":"ContainerDied","Data":"3cbe694de05a7b545dda9518f4cb2273792c8ac0e750898f96fbcc0fbb5cecd1"} Mar 18 18:21:42.342126 master-0 kubenswrapper[30278]: I0318 18:21:42.341839 30278 scope.go:117] "RemoveContainer" containerID="86608609da1e1c313d7f15f38986a33ddcbc855ab390c824172a198e5b6752ba" Mar 18 18:21:42.342126 master-0 kubenswrapper[30278]: I0318 18:21:42.341958 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c5fb6894c-9vqrx" Mar 18 18:21:42.344843 master-0 kubenswrapper[30278]: I0318 18:21:42.344750 30278 generic.go:334] "Generic (PLEG): container finished" podID="a6c011e4-5cf2-4451-974d-e1032bc333a9" containerID="04bf60f6e80d8fb0a8a7e4ed88b2af499ea6a8c1ce87b2e006cee606cf157d99" exitCode=0 Mar 18 18:21:42.345673 master-0 kubenswrapper[30278]: I0318 18:21:42.345645 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8vmhz" event={"ID":"a6c011e4-5cf2-4451-974d-e1032bc333a9","Type":"ContainerDied","Data":"04bf60f6e80d8fb0a8a7e4ed88b2af499ea6a8c1ce87b2e006cee606cf157d99"} Mar 18 18:21:42.352160 master-0 kubenswrapper[30278]: I0318 18:21:42.351901 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1b45f237-457c-45a6-9ea2-2f2ca11ec44e" (UID: "1b45f237-457c-45a6-9ea2-2f2ca11ec44e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:21:42.416314 master-0 kubenswrapper[30278]: I0318 18:21:42.415221 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1b45f237-457c-45a6-9ea2-2f2ca11ec44e" (UID: "1b45f237-457c-45a6-9ea2-2f2ca11ec44e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:21:42.443653 master-0 kubenswrapper[30278]: I0318 18:21:42.443587 30278 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:42.443653 master-0 kubenswrapper[30278]: I0318 18:21:42.443637 30278 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:42.443653 master-0 kubenswrapper[30278]: I0318 18:21:42.443649 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1b45f237-457c-45a6-9ea2-2f2ca11ec44e-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:42.692305 master-0 kubenswrapper[30278]: I0318 18:21:42.688735 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c5fb6894c-9vqrx"] Mar 18 18:21:42.698362 master-0 kubenswrapper[30278]: I0318 18:21:42.697451 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c5fb6894c-9vqrx"] Mar 18 18:21:43.103300 master-0 kubenswrapper[30278]: I0318 18:21:43.088253 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b45f237-457c-45a6-9ea2-2f2ca11ec44e" path="/var/lib/kubelet/pods/1b45f237-457c-45a6-9ea2-2f2ca11ec44e/volumes" Mar 18 18:21:48.652148 master-0 kubenswrapper[30278]: I0318 18:21:48.651303 30278 scope.go:117] "RemoveContainer" containerID="d26ffee6ca37258543224f8ac4fa15a9549d278b13c41116355c45da1cf7cc87" Mar 18 18:21:48.773089 master-0 kubenswrapper[30278]: I0318 18:21:48.772961 30278 scope.go:117] "RemoveContainer" containerID="86608609da1e1c313d7f15f38986a33ddcbc855ab390c824172a198e5b6752ba" Mar 18 18:21:48.773896 master-0 kubenswrapper[30278]: E0318 18:21:48.773834 
30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86608609da1e1c313d7f15f38986a33ddcbc855ab390c824172a198e5b6752ba\": container with ID starting with 86608609da1e1c313d7f15f38986a33ddcbc855ab390c824172a198e5b6752ba not found: ID does not exist" containerID="86608609da1e1c313d7f15f38986a33ddcbc855ab390c824172a198e5b6752ba" Mar 18 18:21:48.773955 master-0 kubenswrapper[30278]: I0318 18:21:48.773913 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86608609da1e1c313d7f15f38986a33ddcbc855ab390c824172a198e5b6752ba"} err="failed to get container status \"86608609da1e1c313d7f15f38986a33ddcbc855ab390c824172a198e5b6752ba\": rpc error: code = NotFound desc = could not find container \"86608609da1e1c313d7f15f38986a33ddcbc855ab390c824172a198e5b6752ba\": container with ID starting with 86608609da1e1c313d7f15f38986a33ddcbc855ab390c824172a198e5b6752ba not found: ID does not exist" Mar 18 18:21:48.774000 master-0 kubenswrapper[30278]: I0318 18:21:48.773957 30278 scope.go:117] "RemoveContainer" containerID="d26ffee6ca37258543224f8ac4fa15a9549d278b13c41116355c45da1cf7cc87" Mar 18 18:21:48.774746 master-0 kubenswrapper[30278]: E0318 18:21:48.774703 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d26ffee6ca37258543224f8ac4fa15a9549d278b13c41116355c45da1cf7cc87\": container with ID starting with d26ffee6ca37258543224f8ac4fa15a9549d278b13c41116355c45da1cf7cc87 not found: ID does not exist" containerID="d26ffee6ca37258543224f8ac4fa15a9549d278b13c41116355c45da1cf7cc87" Mar 18 18:21:48.774810 master-0 kubenswrapper[30278]: I0318 18:21:48.774750 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d26ffee6ca37258543224f8ac4fa15a9549d278b13c41116355c45da1cf7cc87"} err="failed to get container status 
\"d26ffee6ca37258543224f8ac4fa15a9549d278b13c41116355c45da1cf7cc87\": rpc error: code = NotFound desc = could not find container \"d26ffee6ca37258543224f8ac4fa15a9549d278b13c41116355c45da1cf7cc87\": container with ID starting with d26ffee6ca37258543224f8ac4fa15a9549d278b13c41116355c45da1cf7cc87 not found: ID does not exist" Mar 18 18:21:48.775030 master-0 kubenswrapper[30278]: I0318 18:21:48.774996 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8vmhz" Mar 18 18:21:48.833744 master-0 kubenswrapper[30278]: I0318 18:21:48.833658 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 18:21:48.834053 master-0 kubenswrapper[30278]: I0318 18:21:48.833904 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 18:21:48.871862 master-0 kubenswrapper[30278]: I0318 18:21:48.871784 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 18:21:48.872185 master-0 kubenswrapper[30278]: I0318 18:21:48.871872 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 18:21:48.973584 master-0 kubenswrapper[30278]: I0318 18:21:48.973381 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-config-data\") pod \"a6c011e4-5cf2-4451-974d-e1032bc333a9\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " Mar 18 18:21:48.973812 master-0 kubenswrapper[30278]: I0318 18:21:48.973579 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-combined-ca-bundle\") pod \"a6c011e4-5cf2-4451-974d-e1032bc333a9\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " Mar 18 18:21:48.973812 master-0 
kubenswrapper[30278]: I0318 18:21:48.973691 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-scripts\") pod \"a6c011e4-5cf2-4451-974d-e1032bc333a9\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " Mar 18 18:21:48.973812 master-0 kubenswrapper[30278]: I0318 18:21:48.973756 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7pg6\" (UniqueName: \"kubernetes.io/projected/a6c011e4-5cf2-4451-974d-e1032bc333a9-kube-api-access-t7pg6\") pod \"a6c011e4-5cf2-4451-974d-e1032bc333a9\" (UID: \"a6c011e4-5cf2-4451-974d-e1032bc333a9\") " Mar 18 18:21:48.979210 master-0 kubenswrapper[30278]: I0318 18:21:48.979140 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-scripts" (OuterVolumeSpecName: "scripts") pod "a6c011e4-5cf2-4451-974d-e1032bc333a9" (UID: "a6c011e4-5cf2-4451-974d-e1032bc333a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:21:48.984417 master-0 kubenswrapper[30278]: I0318 18:21:48.983777 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c011e4-5cf2-4451-974d-e1032bc333a9-kube-api-access-t7pg6" (OuterVolumeSpecName: "kube-api-access-t7pg6") pod "a6c011e4-5cf2-4451-974d-e1032bc333a9" (UID: "a6c011e4-5cf2-4451-974d-e1032bc333a9"). InnerVolumeSpecName "kube-api-access-t7pg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:21:49.018166 master-0 kubenswrapper[30278]: I0318 18:21:49.017858 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6c011e4-5cf2-4451-974d-e1032bc333a9" (UID: "a6c011e4-5cf2-4451-974d-e1032bc333a9"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:21:49.036051 master-0 kubenswrapper[30278]: I0318 18:21:49.035964 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-config-data" (OuterVolumeSpecName: "config-data") pod "a6c011e4-5cf2-4451-974d-e1032bc333a9" (UID: "a6c011e4-5cf2-4451-974d-e1032bc333a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:21:49.058801 master-0 kubenswrapper[30278]: I0318 18:21:49.056085 30278 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ironic-inspector-0" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="inspector-httpboot" probeResult="failure" output="dial tcp 10.128.0.255:8088: connect: connection refused" Mar 18 18:21:49.066519 master-0 kubenswrapper[30278]: E0318 18:21:49.064449 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c" cmd=["sh","-c","ss -lun | grep :69"] Mar 18 18:21:49.066862 master-0 kubenswrapper[30278]: E0318 18:21:49.066709 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c" cmd=["sh","-c","ss -lun | grep :69"] Mar 18 18:21:49.071019 master-0 kubenswrapper[30278]: E0318 18:21:49.069134 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c" cmd=["sh","-c","ss -lun | grep :69"] Mar 18 18:21:49.071019 master-0 kubenswrapper[30278]: E0318 18:21:49.069232 30278 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ironic-inspector-0" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="inspector-dnsmasq" Mar 18 18:21:49.079139 master-0 kubenswrapper[30278]: I0318 18:21:49.079018 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:49.079139 master-0 kubenswrapper[30278]: I0318 18:21:49.079117 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:49.079139 master-0 kubenswrapper[30278]: I0318 18:21:49.079131 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c011e4-5cf2-4451-974d-e1032bc333a9-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:49.079139 master-0 kubenswrapper[30278]: I0318 18:21:49.079146 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7pg6\" (UniqueName: \"kubernetes.io/projected/a6c011e4-5cf2-4451-974d-e1032bc333a9-kube-api-access-t7pg6\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:49.483563 master-0 kubenswrapper[30278]: I0318 18:21:49.483493 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8vmhz" Mar 18 18:21:49.484263 master-0 kubenswrapper[30278]: I0318 18:21:49.484006 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8vmhz" event={"ID":"a6c011e4-5cf2-4451-974d-e1032bc333a9","Type":"ContainerDied","Data":"9517abc62588050abc4e748b6520113454af0f434f6b295f09cb9229203d58e6"} Mar 18 18:21:49.484263 master-0 kubenswrapper[30278]: I0318 18:21:49.484073 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9517abc62588050abc4e748b6520113454af0f434f6b295f09cb9229203d58e6" Mar 18 18:21:49.487199 master-0 kubenswrapper[30278]: I0318 18:21:49.487170 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"5308f3c6-9e64-4187-b4f9-b8b0dc8c2874","Type":"ContainerStarted","Data":"9dd49e5386e12fc9707621221335ea55a151d490c7614fba0bba5158a499dd4b"} Mar 18 18:21:49.487618 master-0 kubenswrapper[30278]: I0318 18:21:49.487567 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 18 18:21:49.491681 master-0 kubenswrapper[30278]: I0318 18:21:49.491633 30278 generic.go:334] "Generic (PLEG): container finished" podID="564cb488-caa7-49c0-b12a-133aa721085c" containerID="7b3625c3106e4874258c9fcfb2a5cc4a4e04fbb266cd1c80dfa2896440ae6d8e" exitCode=0 Mar 18 18:21:49.491771 master-0 kubenswrapper[30278]: I0318 18:21:49.491729 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tv9n9" event={"ID":"564cb488-caa7-49c0-b12a-133aa721085c","Type":"ContainerDied","Data":"7b3625c3106e4874258c9fcfb2a5cc4a4e04fbb266cd1c80dfa2896440ae6d8e"} Mar 18 18:21:49.530305 master-0 kubenswrapper[30278]: I0318 18:21:49.530111 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=3.204278199 
podStartE2EDuration="20.530087443s" podCreationTimestamp="2026-03-18 18:21:29 +0000 UTC" firstStartedPulling="2026-03-18 18:21:31.448551834 +0000 UTC m=+1260.615736429" lastFinishedPulling="2026-03-18 18:21:48.774361078 +0000 UTC m=+1277.941545673" observedRunningTime="2026-03-18 18:21:49.516494707 +0000 UTC m=+1278.683679302" watchObservedRunningTime="2026-03-18 18:21:49.530087443 +0000 UTC m=+1278.697272038" Mar 18 18:21:49.534699 master-0 kubenswrapper[30278]: I0318 18:21:49.532543 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0" Mar 18 18:21:50.088714 master-0 kubenswrapper[30278]: I0318 18:21:50.085401 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 18 18:21:50.117299 master-0 kubenswrapper[30278]: I0318 18:21:50.110377 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 18:21:50.117299 master-0 kubenswrapper[30278]: I0318 18:21:50.110898 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="892489cb-419b-40b3-8e27-04302daea69c" containerName="nova-scheduler-scheduler" containerID="cri-o://18eaf3f1fd23baa4d1c58e61f474399c9e76a553dff9f9bd5fb4ccbeadab2200" gracePeriod=30 Mar 18 18:21:50.506600 master-0 kubenswrapper[30278]: I0318 18:21:50.506430 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ed10fd30-ed39-4cda-8252-8f4db21fbfca" containerName="nova-api-log" containerID="cri-o://a8de5e0b50e2e23ba53aadd31f170f4aa1f4c1a46484ddc6fb05b0a83a3b16b0" gracePeriod=30 Mar 18 18:21:50.506916 master-0 kubenswrapper[30278]: I0318 18:21:50.506590 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ed10fd30-ed39-4cda-8252-8f4db21fbfca" containerName="nova-api-api" 
containerID="cri-o://21cc803c0cc2cc33976caf7d87c2df977085d7b39c27e4d7d0ce8dfe84c57b66" gracePeriod=30 Mar 18 18:21:50.880300 master-0 kubenswrapper[30278]: E0318 18:21:50.874773 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="18eaf3f1fd23baa4d1c58e61f474399c9e76a553dff9f9bd5fb4ccbeadab2200" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 18 18:21:50.880300 master-0 kubenswrapper[30278]: E0318 18:21:50.878266 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="18eaf3f1fd23baa4d1c58e61f474399c9e76a553dff9f9bd5fb4ccbeadab2200" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 18 18:21:50.880300 master-0 kubenswrapper[30278]: E0318 18:21:50.879712 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="18eaf3f1fd23baa4d1c58e61f474399c9e76a553dff9f9bd5fb4ccbeadab2200" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 18 18:21:50.880300 master-0 kubenswrapper[30278]: E0318 18:21:50.879952 30278 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="892489cb-419b-40b3-8e27-04302daea69c" containerName="nova-scheduler-scheduler" Mar 18 18:21:51.087406 master-0 kubenswrapper[30278]: E0318 18:21:51.086456 30278 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d87f3b2a4c347bda5353079ac2f6675eef2a64a8cdd43435df847c5423e788b0/diff" to get inode 
usage: stat /var/lib/containers/storage/overlay/d87f3b2a4c347bda5353079ac2f6675eef2a64a8cdd43435df847c5423e788b0/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_dnsmasq-dns-6c5fb6894c-9vqrx_1b45f237-457c-45a6-9ea2-2f2ca11ec44e/dnsmasq-dns/0.log" to get inode usage: stat /var/log/pods/openstack_dnsmasq-dns-6c5fb6894c-9vqrx_1b45f237-457c-45a6-9ea2-2f2ca11ec44e/dnsmasq-dns/0.log: no such file or directory Mar 18 18:21:51.128062 master-0 kubenswrapper[30278]: I0318 18:21:51.128004 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:51.268717 master-0 kubenswrapper[30278]: I0318 18:21:51.268554 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-scripts\") pod \"564cb488-caa7-49c0-b12a-133aa721085c\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " Mar 18 18:21:51.268957 master-0 kubenswrapper[30278]: I0318 18:21:51.268786 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-config-data\") pod \"564cb488-caa7-49c0-b12a-133aa721085c\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " Mar 18 18:21:51.268957 master-0 kubenswrapper[30278]: I0318 18:21:51.268829 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mndl4\" (UniqueName: \"kubernetes.io/projected/564cb488-caa7-49c0-b12a-133aa721085c-kube-api-access-mndl4\") pod \"564cb488-caa7-49c0-b12a-133aa721085c\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " Mar 18 18:21:51.268957 master-0 kubenswrapper[30278]: I0318 18:21:51.268893 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-combined-ca-bundle\") pod \"564cb488-caa7-49c0-b12a-133aa721085c\" (UID: \"564cb488-caa7-49c0-b12a-133aa721085c\") " Mar 18 18:21:51.278013 master-0 kubenswrapper[30278]: I0318 18:21:51.277970 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/564cb488-caa7-49c0-b12a-133aa721085c-kube-api-access-mndl4" (OuterVolumeSpecName: "kube-api-access-mndl4") pod "564cb488-caa7-49c0-b12a-133aa721085c" (UID: "564cb488-caa7-49c0-b12a-133aa721085c"). InnerVolumeSpecName "kube-api-access-mndl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:21:51.281410 master-0 kubenswrapper[30278]: I0318 18:21:51.281339 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-scripts" (OuterVolumeSpecName: "scripts") pod "564cb488-caa7-49c0-b12a-133aa721085c" (UID: "564cb488-caa7-49c0-b12a-133aa721085c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:21:51.309514 master-0 kubenswrapper[30278]: I0318 18:21:51.309258 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "564cb488-caa7-49c0-b12a-133aa721085c" (UID: "564cb488-caa7-49c0-b12a-133aa721085c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:21:51.318994 master-0 kubenswrapper[30278]: I0318 18:21:51.318869 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-config-data" (OuterVolumeSpecName: "config-data") pod "564cb488-caa7-49c0-b12a-133aa721085c" (UID: "564cb488-caa7-49c0-b12a-133aa721085c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:21:51.372509 master-0 kubenswrapper[30278]: I0318 18:21:51.372447 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:51.372509 master-0 kubenswrapper[30278]: I0318 18:21:51.372493 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:51.372509 master-0 kubenswrapper[30278]: I0318 18:21:51.372507 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mndl4\" (UniqueName: \"kubernetes.io/projected/564cb488-caa7-49c0-b12a-133aa721085c-kube-api-access-mndl4\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:51.372509 master-0 kubenswrapper[30278]: I0318 18:21:51.372519 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/564cb488-caa7-49c0-b12a-133aa721085c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:51.523001 master-0 kubenswrapper[30278]: I0318 18:21:51.522238 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tv9n9" event={"ID":"564cb488-caa7-49c0-b12a-133aa721085c","Type":"ContainerDied","Data":"194944cec8d70ecef450d22eedc1dff1e4e3f03eb9d3b6be1a9232a4c665d71b"} Mar 18 18:21:51.523001 master-0 kubenswrapper[30278]: I0318 18:21:51.522366 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="194944cec8d70ecef450d22eedc1dff1e4e3f03eb9d3b6be1a9232a4c665d71b" Mar 18 18:21:51.523001 master-0 kubenswrapper[30278]: I0318 18:21:51.522456 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tv9n9" Mar 18 18:21:51.524585 master-0 kubenswrapper[30278]: I0318 18:21:51.524543 30278 generic.go:334] "Generic (PLEG): container finished" podID="ed10fd30-ed39-4cda-8252-8f4db21fbfca" containerID="a8de5e0b50e2e23ba53aadd31f170f4aa1f4c1a46484ddc6fb05b0a83a3b16b0" exitCode=143 Mar 18 18:21:51.524676 master-0 kubenswrapper[30278]: I0318 18:21:51.524596 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ed10fd30-ed39-4cda-8252-8f4db21fbfca","Type":"ContainerDied","Data":"a8de5e0b50e2e23ba53aadd31f170f4aa1f4c1a46484ddc6fb05b0a83a3b16b0"} Mar 18 18:21:51.705381 master-0 kubenswrapper[30278]: I0318 18:21:51.705298 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 18 18:21:51.706019 master-0 kubenswrapper[30278]: E0318 18:21:51.705979 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c011e4-5cf2-4451-974d-e1032bc333a9" containerName="nova-manage" Mar 18 18:21:51.706073 master-0 kubenswrapper[30278]: I0318 18:21:51.706027 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c011e4-5cf2-4451-974d-e1032bc333a9" containerName="nova-manage" Mar 18 18:21:51.706111 master-0 kubenswrapper[30278]: E0318 18:21:51.706074 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="564cb488-caa7-49c0-b12a-133aa721085c" containerName="nova-cell1-conductor-db-sync" Mar 18 18:21:51.706111 master-0 kubenswrapper[30278]: I0318 18:21:51.706084 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="564cb488-caa7-49c0-b12a-133aa721085c" containerName="nova-cell1-conductor-db-sync" Mar 18 18:21:51.706111 master-0 kubenswrapper[30278]: E0318 18:21:51.706103 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b45f237-457c-45a6-9ea2-2f2ca11ec44e" containerName="dnsmasq-dns" Mar 18 18:21:51.706111 master-0 kubenswrapper[30278]: I0318 18:21:51.706111 30278 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1b45f237-457c-45a6-9ea2-2f2ca11ec44e" containerName="dnsmasq-dns" Mar 18 18:21:51.706258 master-0 kubenswrapper[30278]: E0318 18:21:51.706129 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b45f237-457c-45a6-9ea2-2f2ca11ec44e" containerName="init" Mar 18 18:21:51.706258 master-0 kubenswrapper[30278]: I0318 18:21:51.706138 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b45f237-457c-45a6-9ea2-2f2ca11ec44e" containerName="init" Mar 18 18:21:51.706475 master-0 kubenswrapper[30278]: I0318 18:21:51.706449 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c011e4-5cf2-4451-974d-e1032bc333a9" containerName="nova-manage" Mar 18 18:21:51.706526 master-0 kubenswrapper[30278]: I0318 18:21:51.706498 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="564cb488-caa7-49c0-b12a-133aa721085c" containerName="nova-cell1-conductor-db-sync" Mar 18 18:21:51.706564 master-0 kubenswrapper[30278]: I0318 18:21:51.706527 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b45f237-457c-45a6-9ea2-2f2ca11ec44e" containerName="dnsmasq-dns" Mar 18 18:21:51.709454 master-0 kubenswrapper[30278]: I0318 18:21:51.709427 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 18 18:21:51.716604 master-0 kubenswrapper[30278]: I0318 18:21:51.715505 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 18 18:21:51.743559 master-0 kubenswrapper[30278]: I0318 18:21:51.739348 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 18 18:21:51.786693 master-0 kubenswrapper[30278]: I0318 18:21:51.785326 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dc7510-eee4-41e5-a4ff-0ffa9efb380b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"83dc7510-eee4-41e5-a4ff-0ffa9efb380b\") " pod="openstack/nova-cell1-conductor-0" Mar 18 18:21:51.786693 master-0 kubenswrapper[30278]: I0318 18:21:51.785591 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dc7510-eee4-41e5-a4ff-0ffa9efb380b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"83dc7510-eee4-41e5-a4ff-0ffa9efb380b\") " pod="openstack/nova-cell1-conductor-0" Mar 18 18:21:51.786693 master-0 kubenswrapper[30278]: I0318 18:21:51.785680 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsd97\" (UniqueName: \"kubernetes.io/projected/83dc7510-eee4-41e5-a4ff-0ffa9efb380b-kube-api-access-dsd97\") pod \"nova-cell1-conductor-0\" (UID: \"83dc7510-eee4-41e5-a4ff-0ffa9efb380b\") " pod="openstack/nova-cell1-conductor-0" Mar 18 18:21:51.888891 master-0 kubenswrapper[30278]: I0318 18:21:51.888818 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dc7510-eee4-41e5-a4ff-0ffa9efb380b-config-data\") pod \"nova-cell1-conductor-0\" (UID: 
\"83dc7510-eee4-41e5-a4ff-0ffa9efb380b\") " pod="openstack/nova-cell1-conductor-0" Mar 18 18:21:51.890019 master-0 kubenswrapper[30278]: I0318 18:21:51.889264 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dc7510-eee4-41e5-a4ff-0ffa9efb380b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"83dc7510-eee4-41e5-a4ff-0ffa9efb380b\") " pod="openstack/nova-cell1-conductor-0" Mar 18 18:21:51.890114 master-0 kubenswrapper[30278]: I0318 18:21:51.890090 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsd97\" (UniqueName: \"kubernetes.io/projected/83dc7510-eee4-41e5-a4ff-0ffa9efb380b-kube-api-access-dsd97\") pod \"nova-cell1-conductor-0\" (UID: \"83dc7510-eee4-41e5-a4ff-0ffa9efb380b\") " pod="openstack/nova-cell1-conductor-0" Mar 18 18:21:51.894617 master-0 kubenswrapper[30278]: I0318 18:21:51.893875 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83dc7510-eee4-41e5-a4ff-0ffa9efb380b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"83dc7510-eee4-41e5-a4ff-0ffa9efb380b\") " pod="openstack/nova-cell1-conductor-0" Mar 18 18:21:51.902316 master-0 kubenswrapper[30278]: I0318 18:21:51.896708 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83dc7510-eee4-41e5-a4ff-0ffa9efb380b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"83dc7510-eee4-41e5-a4ff-0ffa9efb380b\") " pod="openstack/nova-cell1-conductor-0" Mar 18 18:21:51.911583 master-0 kubenswrapper[30278]: I0318 18:21:51.910634 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsd97\" (UniqueName: \"kubernetes.io/projected/83dc7510-eee4-41e5-a4ff-0ffa9efb380b-kube-api-access-dsd97\") pod \"nova-cell1-conductor-0\" (UID: 
\"83dc7510-eee4-41e5-a4ff-0ffa9efb380b\") " pod="openstack/nova-cell1-conductor-0" Mar 18 18:21:52.848910 master-0 kubenswrapper[30278]: I0318 18:21:52.848234 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 18 18:21:53.475637 master-0 kubenswrapper[30278]: I0318 18:21:53.475581 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 18 18:21:53.921337 master-0 kubenswrapper[30278]: I0318 18:21:53.920430 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"83dc7510-eee4-41e5-a4ff-0ffa9efb380b","Type":"ContainerStarted","Data":"d604a166fa898c85d33c520d330fab4e72b5b00be80e37e58a2a66bdc7e4e24b"} Mar 18 18:21:53.921337 master-0 kubenswrapper[30278]: I0318 18:21:53.920536 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"83dc7510-eee4-41e5-a4ff-0ffa9efb380b","Type":"ContainerStarted","Data":"42574165e93c7b03cab3047274f16435b5fca92cca2f23bd86dcb132e8898e6c"} Mar 18 18:21:53.921337 master-0 kubenswrapper[30278]: I0318 18:21:53.920569 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 18 18:21:53.927218 master-0 kubenswrapper[30278]: I0318 18:21:53.926576 30278 generic.go:334] "Generic (PLEG): container finished" podID="e9af6002-27e3-414d-b61a-dc0f7d99768b" containerID="596f228623c740089d6bfafb648af0d527734cc329e5be6165f5bf9c165646d3" exitCode=0 Mar 18 18:21:53.927218 master-0 kubenswrapper[30278]: I0318 18:21:53.926701 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e9af6002-27e3-414d-b61a-dc0f7d99768b","Type":"ContainerDied","Data":"596f228623c740089d6bfafb648af0d527734cc329e5be6165f5bf9c165646d3"} Mar 18 18:21:53.932733 master-0 kubenswrapper[30278]: I0318 18:21:53.932633 30278 generic.go:334] "Generic (PLEG): container finished" 
podID="ed10fd30-ed39-4cda-8252-8f4db21fbfca" containerID="21cc803c0cc2cc33976caf7d87c2df977085d7b39c27e4d7d0ce8dfe84c57b66" exitCode=0 Mar 18 18:21:53.934943 master-0 kubenswrapper[30278]: I0318 18:21:53.933158 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ed10fd30-ed39-4cda-8252-8f4db21fbfca","Type":"ContainerDied","Data":"21cc803c0cc2cc33976caf7d87c2df977085d7b39c27e4d7d0ce8dfe84c57b66"} Mar 18 18:21:53.974631 master-0 kubenswrapper[30278]: I0318 18:21:53.973059 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.973029732 podStartE2EDuration="2.973029732s" podCreationTimestamp="2026-03-18 18:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:21:53.947540265 +0000 UTC m=+1283.114724870" watchObservedRunningTime="2026-03-18 18:21:53.973029732 +0000 UTC m=+1283.140214327" Mar 18 18:21:54.358270 master-0 kubenswrapper[30278]: I0318 18:21:54.358153 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 18 18:21:54.400305 master-0 kubenswrapper[30278]: I0318 18:21:54.391547 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed10fd30-ed39-4cda-8252-8f4db21fbfca-config-data\") pod \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " Mar 18 18:21:54.400305 master-0 kubenswrapper[30278]: I0318 18:21:54.391645 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed10fd30-ed39-4cda-8252-8f4db21fbfca-logs\") pod \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " Mar 18 18:21:54.400305 master-0 kubenswrapper[30278]: I0318 18:21:54.391845 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed10fd30-ed39-4cda-8252-8f4db21fbfca-combined-ca-bundle\") pod \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " Mar 18 18:21:54.400305 master-0 kubenswrapper[30278]: I0318 18:21:54.391988 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46ctg\" (UniqueName: \"kubernetes.io/projected/ed10fd30-ed39-4cda-8252-8f4db21fbfca-kube-api-access-46ctg\") pod \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\" (UID: \"ed10fd30-ed39-4cda-8252-8f4db21fbfca\") " Mar 18 18:21:54.400305 master-0 kubenswrapper[30278]: I0318 18:21:54.394754 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed10fd30-ed39-4cda-8252-8f4db21fbfca-logs" (OuterVolumeSpecName: "logs") pod "ed10fd30-ed39-4cda-8252-8f4db21fbfca" (UID: "ed10fd30-ed39-4cda-8252-8f4db21fbfca"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:21:54.417390 master-0 kubenswrapper[30278]: I0318 18:21:54.414818 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed10fd30-ed39-4cda-8252-8f4db21fbfca-kube-api-access-46ctg" (OuterVolumeSpecName: "kube-api-access-46ctg") pod "ed10fd30-ed39-4cda-8252-8f4db21fbfca" (UID: "ed10fd30-ed39-4cda-8252-8f4db21fbfca"). InnerVolumeSpecName "kube-api-access-46ctg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:21:54.422768 master-0 kubenswrapper[30278]: I0318 18:21:54.422689 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed10fd30-ed39-4cda-8252-8f4db21fbfca-config-data" (OuterVolumeSpecName: "config-data") pod "ed10fd30-ed39-4cda-8252-8f4db21fbfca" (UID: "ed10fd30-ed39-4cda-8252-8f4db21fbfca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:21:54.433524 master-0 kubenswrapper[30278]: I0318 18:21:54.433401 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed10fd30-ed39-4cda-8252-8f4db21fbfca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed10fd30-ed39-4cda-8252-8f4db21fbfca" (UID: "ed10fd30-ed39-4cda-8252-8f4db21fbfca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:21:54.495703 master-0 kubenswrapper[30278]: I0318 18:21:54.495619 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46ctg\" (UniqueName: \"kubernetes.io/projected/ed10fd30-ed39-4cda-8252-8f4db21fbfca-kube-api-access-46ctg\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:54.495703 master-0 kubenswrapper[30278]: I0318 18:21:54.495687 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed10fd30-ed39-4cda-8252-8f4db21fbfca-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:54.495703 master-0 kubenswrapper[30278]: I0318 18:21:54.495706 30278 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed10fd30-ed39-4cda-8252-8f4db21fbfca-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:54.495703 master-0 kubenswrapper[30278]: I0318 18:21:54.495716 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed10fd30-ed39-4cda-8252-8f4db21fbfca-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:21:54.995167 master-0 kubenswrapper[30278]: I0318 18:21:54.995081 30278 generic.go:334] "Generic (PLEG): container finished" podID="892489cb-419b-40b3-8e27-04302daea69c" containerID="18eaf3f1fd23baa4d1c58e61f474399c9e76a553dff9f9bd5fb4ccbeadab2200" exitCode=0 Mar 18 18:21:54.995846 master-0 kubenswrapper[30278]: I0318 18:21:54.995185 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"892489cb-419b-40b3-8e27-04302daea69c","Type":"ContainerDied","Data":"18eaf3f1fd23baa4d1c58e61f474399c9e76a553dff9f9bd5fb4ccbeadab2200"} Mar 18 18:21:55.004576 master-0 kubenswrapper[30278]: I0318 18:21:55.004524 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" 
event={"ID":"e9af6002-27e3-414d-b61a-dc0f7d99768b","Type":"ContainerStarted","Data":"a8eb1de436b6140f8bb430bcf3ea41ec58d7434a0711b3fd9d561235ffc9eb66"} Mar 18 18:21:55.010612 master-0 kubenswrapper[30278]: I0318 18:21:55.010571 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 18 18:21:55.010719 master-0 kubenswrapper[30278]: I0318 18:21:55.010608 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ed10fd30-ed39-4cda-8252-8f4db21fbfca","Type":"ContainerDied","Data":"69a264af6dad401e369317628ba9c91aa71cb7c30ddc57c214e488e4c92ed0f5"} Mar 18 18:21:55.010791 master-0 kubenswrapper[30278]: I0318 18:21:55.010734 30278 scope.go:117] "RemoveContainer" containerID="21cc803c0cc2cc33976caf7d87c2df977085d7b39c27e4d7d0ce8dfe84c57b66" Mar 18 18:21:55.051266 master-0 kubenswrapper[30278]: I0318 18:21:55.051124 30278 scope.go:117] "RemoveContainer" containerID="a8de5e0b50e2e23ba53aadd31f170f4aa1f4c1a46484ddc6fb05b0a83a3b16b0" Mar 18 18:21:55.094858 master-0 kubenswrapper[30278]: I0318 18:21:55.094708 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 18 18:21:55.156793 master-0 kubenswrapper[30278]: I0318 18:21:55.142683 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 18 18:21:55.165669 master-0 kubenswrapper[30278]: I0318 18:21:55.165576 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 18 18:21:55.166592 master-0 kubenswrapper[30278]: E0318 18:21:55.166561 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed10fd30-ed39-4cda-8252-8f4db21fbfca" containerName="nova-api-api" Mar 18 18:21:55.166592 master-0 kubenswrapper[30278]: I0318 18:21:55.166587 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed10fd30-ed39-4cda-8252-8f4db21fbfca" containerName="nova-api-api" Mar 18 18:21:55.166667 master-0 kubenswrapper[30278]: E0318 
18:21:55.166614 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed10fd30-ed39-4cda-8252-8f4db21fbfca" containerName="nova-api-log" Mar 18 18:21:55.166667 master-0 kubenswrapper[30278]: I0318 18:21:55.166621 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed10fd30-ed39-4cda-8252-8f4db21fbfca" containerName="nova-api-log" Mar 18 18:21:55.166948 master-0 kubenswrapper[30278]: I0318 18:21:55.166921 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed10fd30-ed39-4cda-8252-8f4db21fbfca" containerName="nova-api-log" Mar 18 18:21:55.166994 master-0 kubenswrapper[30278]: I0318 18:21:55.166950 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed10fd30-ed39-4cda-8252-8f4db21fbfca" containerName="nova-api-api" Mar 18 18:21:55.169326 master-0 kubenswrapper[30278]: I0318 18:21:55.169293 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 18 18:21:55.184506 master-0 kubenswrapper[30278]: I0318 18:21:55.180419 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 18 18:21:55.212242 master-0 kubenswrapper[30278]: I0318 18:21:55.212153 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 18 18:21:55.224056 master-0 kubenswrapper[30278]: I0318 18:21:55.222264 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k87fk\" (UniqueName: \"kubernetes.io/projected/32bccbd4-7005-4a2c-b90e-f7a249adabbd-kube-api-access-k87fk\") pod \"nova-api-0\" (UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " pod="openstack/nova-api-0" Mar 18 18:21:55.224056 master-0 kubenswrapper[30278]: I0318 18:21:55.222589 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32bccbd4-7005-4a2c-b90e-f7a249adabbd-logs\") pod \"nova-api-0\" 
(UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " pod="openstack/nova-api-0" Mar 18 18:21:55.224056 master-0 kubenswrapper[30278]: I0318 18:21:55.222617 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bccbd4-7005-4a2c-b90e-f7a249adabbd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " pod="openstack/nova-api-0" Mar 18 18:21:55.224056 master-0 kubenswrapper[30278]: I0318 18:21:55.222708 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32bccbd4-7005-4a2c-b90e-f7a249adabbd-config-data\") pod \"nova-api-0\" (UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " pod="openstack/nova-api-0" Mar 18 18:21:55.311995 master-0 kubenswrapper[30278]: I0318 18:21:55.311927 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 18:21:55.324204 master-0 kubenswrapper[30278]: I0318 18:21:55.324158 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/892489cb-419b-40b3-8e27-04302daea69c-config-data\") pod \"892489cb-419b-40b3-8e27-04302daea69c\" (UID: \"892489cb-419b-40b3-8e27-04302daea69c\") " Mar 18 18:21:55.324499 master-0 kubenswrapper[30278]: I0318 18:21:55.324456 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kst6b\" (UniqueName: \"kubernetes.io/projected/892489cb-419b-40b3-8e27-04302daea69c-kube-api-access-kst6b\") pod \"892489cb-419b-40b3-8e27-04302daea69c\" (UID: \"892489cb-419b-40b3-8e27-04302daea69c\") " Mar 18 18:21:55.324697 master-0 kubenswrapper[30278]: I0318 18:21:55.324676 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/892489cb-419b-40b3-8e27-04302daea69c-combined-ca-bundle\") pod \"892489cb-419b-40b3-8e27-04302daea69c\" (UID: \"892489cb-419b-40b3-8e27-04302daea69c\") "
Mar 18 18:21:55.325081 master-0 kubenswrapper[30278]: I0318 18:21:55.325048 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32bccbd4-7005-4a2c-b90e-f7a249adabbd-config-data\") pod \"nova-api-0\" (UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " pod="openstack/nova-api-0"
Mar 18 18:21:55.325555 master-0 kubenswrapper[30278]: I0318 18:21:55.325186 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k87fk\" (UniqueName: \"kubernetes.io/projected/32bccbd4-7005-4a2c-b90e-f7a249adabbd-kube-api-access-k87fk\") pod \"nova-api-0\" (UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " pod="openstack/nova-api-0"
Mar 18 18:21:55.325555 master-0 kubenswrapper[30278]: I0318 18:21:55.325294 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32bccbd4-7005-4a2c-b90e-f7a249adabbd-logs\") pod \"nova-api-0\" (UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " pod="openstack/nova-api-0"
Mar 18 18:21:55.325555 master-0 kubenswrapper[30278]: I0318 18:21:55.325316 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bccbd4-7005-4a2c-b90e-f7a249adabbd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " pod="openstack/nova-api-0"
Mar 18 18:21:55.327107 master-0 kubenswrapper[30278]: I0318 18:21:55.326340 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32bccbd4-7005-4a2c-b90e-f7a249adabbd-logs\") pod \"nova-api-0\" (UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " pod="openstack/nova-api-0"
Mar 18 18:21:55.349106 master-0 kubenswrapper[30278]: I0318 18:21:55.329460 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/892489cb-419b-40b3-8e27-04302daea69c-kube-api-access-kst6b" (OuterVolumeSpecName: "kube-api-access-kst6b") pod "892489cb-419b-40b3-8e27-04302daea69c" (UID: "892489cb-419b-40b3-8e27-04302daea69c"). InnerVolumeSpecName "kube-api-access-kst6b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:21:55.349106 master-0 kubenswrapper[30278]: I0318 18:21:55.333630 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32bccbd4-7005-4a2c-b90e-f7a249adabbd-config-data\") pod \"nova-api-0\" (UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " pod="openstack/nova-api-0"
Mar 18 18:21:55.349657 master-0 kubenswrapper[30278]: I0318 18:21:55.349598 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bccbd4-7005-4a2c-b90e-f7a249adabbd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " pod="openstack/nova-api-0"
Mar 18 18:21:55.362255 master-0 kubenswrapper[30278]: I0318 18:21:55.362198 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k87fk\" (UniqueName: \"kubernetes.io/projected/32bccbd4-7005-4a2c-b90e-f7a249adabbd-kube-api-access-k87fk\") pod \"nova-api-0\" (UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " pod="openstack/nova-api-0"
Mar 18 18:21:55.415462 master-0 kubenswrapper[30278]: I0318 18:21:55.415403 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892489cb-419b-40b3-8e27-04302daea69c-config-data" (OuterVolumeSpecName: "config-data") pod "892489cb-419b-40b3-8e27-04302daea69c" (UID: "892489cb-419b-40b3-8e27-04302daea69c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:21:55.415701 master-0 kubenswrapper[30278]: I0318 18:21:55.415640 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/892489cb-419b-40b3-8e27-04302daea69c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "892489cb-419b-40b3-8e27-04302daea69c" (UID: "892489cb-419b-40b3-8e27-04302daea69c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:21:55.434741 master-0 kubenswrapper[30278]: I0318 18:21:55.434155 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kst6b\" (UniqueName: \"kubernetes.io/projected/892489cb-419b-40b3-8e27-04302daea69c-kube-api-access-kst6b\") on node \"master-0\" DevicePath \"\""
Mar 18 18:21:55.434741 master-0 kubenswrapper[30278]: I0318 18:21:55.434246 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/892489cb-419b-40b3-8e27-04302daea69c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:21:55.434741 master-0 kubenswrapper[30278]: I0318 18:21:55.434261 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/892489cb-419b-40b3-8e27-04302daea69c-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 18:21:55.607521 master-0 kubenswrapper[30278]: I0318 18:21:55.606366 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 18 18:21:56.034708 master-0 kubenswrapper[30278]: I0318 18:21:56.034623 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e9af6002-27e3-414d-b61a-dc0f7d99768b","Type":"ContainerStarted","Data":"187777d3cc5412da40eae1ecd312e81f04705efb1604cf6bb3e491112df7f8f3"}
Mar 18 18:21:56.035663 master-0 kubenswrapper[30278]: I0318 18:21:56.034754 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"e9af6002-27e3-414d-b61a-dc0f7d99768b","Type":"ContainerStarted","Data":"3f2a6e2341630747f5b239c6e3c47c084e7ba03287facbfe53b54ec5ed072fea"}
Mar 18 18:21:56.035663 master-0 kubenswrapper[30278]: I0318 18:21:56.035434 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Mar 18 18:21:56.035663 master-0 kubenswrapper[30278]: I0318 18:21:56.035509 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Mar 18 18:21:56.049905 master-0 kubenswrapper[30278]: I0318 18:21:56.049846 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"892489cb-419b-40b3-8e27-04302daea69c","Type":"ContainerDied","Data":"eebc42ba4c80cb3c6a1ee9ae7648c3a31cd8407b1f75078816eb41c07a3efccb"}
Mar 18 18:21:56.050202 master-0 kubenswrapper[30278]: I0318 18:21:56.049923 30278 scope.go:117] "RemoveContainer" containerID="18eaf3f1fd23baa4d1c58e61f474399c9e76a553dff9f9bd5fb4ccbeadab2200"
Mar 18 18:21:56.050202 master-0 kubenswrapper[30278]: I0318 18:21:56.050059 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 18 18:21:56.078657 master-0 kubenswrapper[30278]: I0318 18:21:56.078226 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=84.646840659 podStartE2EDuration="2m3.078201624s" podCreationTimestamp="2026-03-18 18:19:53 +0000 UTC" firstStartedPulling="2026-03-18 18:20:07.024500853 +0000 UTC m=+1176.191685448" lastFinishedPulling="2026-03-18 18:20:45.455861818 +0000 UTC m=+1214.623046413" observedRunningTime="2026-03-18 18:21:56.067580378 +0000 UTC m=+1285.234764973" watchObservedRunningTime="2026-03-18 18:21:56.078201624 +0000 UTC m=+1285.245386219"
Mar 18 18:21:56.118805 master-0 kubenswrapper[30278]: I0318 18:21:56.118700 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 18:21:56.146659 master-0 kubenswrapper[30278]: I0318 18:21:56.146576 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 18:21:56.201796 master-0 kubenswrapper[30278]: I0318 18:21:56.201735 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 18:21:56.202553 master-0 kubenswrapper[30278]: E0318 18:21:56.202521 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="892489cb-419b-40b3-8e27-04302daea69c" containerName="nova-scheduler-scheduler"
Mar 18 18:21:56.202553 master-0 kubenswrapper[30278]: I0318 18:21:56.202546 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="892489cb-419b-40b3-8e27-04302daea69c" containerName="nova-scheduler-scheduler"
Mar 18 18:21:56.202849 master-0 kubenswrapper[30278]: I0318 18:21:56.202821 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="892489cb-419b-40b3-8e27-04302daea69c" containerName="nova-scheduler-scheduler"
Mar 18 18:21:56.203986 master-0 kubenswrapper[30278]: I0318 18:21:56.203941 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 18 18:21:56.206802 master-0 kubenswrapper[30278]: I0318 18:21:56.206765 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 18 18:21:56.264493 master-0 kubenswrapper[30278]: I0318 18:21:56.262823 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 18:21:56.292453 master-0 kubenswrapper[30278]: I0318 18:21:56.292369 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 18 18:21:56.394697 master-0 kubenswrapper[30278]: I0318 18:21:56.394608 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e26ed5-e3a5-4852-b288-8185e1095c29-config-data\") pod \"nova-scheduler-0\" (UID: \"d0e26ed5-e3a5-4852-b288-8185e1095c29\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:56.394916 master-0 kubenswrapper[30278]: I0318 18:21:56.394875 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8pwr\" (UniqueName: \"kubernetes.io/projected/d0e26ed5-e3a5-4852-b288-8185e1095c29-kube-api-access-h8pwr\") pod \"nova-scheduler-0\" (UID: \"d0e26ed5-e3a5-4852-b288-8185e1095c29\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:56.395006 master-0 kubenswrapper[30278]: I0318 18:21:56.394977 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e26ed5-e3a5-4852-b288-8185e1095c29-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d0e26ed5-e3a5-4852-b288-8185e1095c29\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:56.498220 master-0 kubenswrapper[30278]: I0318 18:21:56.498130 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e26ed5-e3a5-4852-b288-8185e1095c29-config-data\") pod \"nova-scheduler-0\" (UID: \"d0e26ed5-e3a5-4852-b288-8185e1095c29\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:56.498558 master-0 kubenswrapper[30278]: I0318 18:21:56.498505 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8pwr\" (UniqueName: \"kubernetes.io/projected/d0e26ed5-e3a5-4852-b288-8185e1095c29-kube-api-access-h8pwr\") pod \"nova-scheduler-0\" (UID: \"d0e26ed5-e3a5-4852-b288-8185e1095c29\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:56.498711 master-0 kubenswrapper[30278]: I0318 18:21:56.498664 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e26ed5-e3a5-4852-b288-8185e1095c29-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d0e26ed5-e3a5-4852-b288-8185e1095c29\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:56.502188 master-0 kubenswrapper[30278]: I0318 18:21:56.502139 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e26ed5-e3a5-4852-b288-8185e1095c29-config-data\") pod \"nova-scheduler-0\" (UID: \"d0e26ed5-e3a5-4852-b288-8185e1095c29\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:56.507243 master-0 kubenswrapper[30278]: I0318 18:21:56.505883 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e26ed5-e3a5-4852-b288-8185e1095c29-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d0e26ed5-e3a5-4852-b288-8185e1095c29\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:56.519476 master-0 kubenswrapper[30278]: I0318 18:21:56.519411 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8pwr\" (UniqueName: \"kubernetes.io/projected/d0e26ed5-e3a5-4852-b288-8185e1095c29-kube-api-access-h8pwr\") pod \"nova-scheduler-0\" (UID: \"d0e26ed5-e3a5-4852-b288-8185e1095c29\") " pod="openstack/nova-scheduler-0"
Mar 18 18:21:56.569030 master-0 kubenswrapper[30278]: I0318 18:21:56.568943 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 18 18:21:57.093002 master-0 kubenswrapper[30278]: I0318 18:21:57.092938 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="892489cb-419b-40b3-8e27-04302daea69c" path="/var/lib/kubelet/pods/892489cb-419b-40b3-8e27-04302daea69c/volumes"
Mar 18 18:21:57.093914 master-0 kubenswrapper[30278]: I0318 18:21:57.093841 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed10fd30-ed39-4cda-8252-8f4db21fbfca" path="/var/lib/kubelet/pods/ed10fd30-ed39-4cda-8252-8f4db21fbfca/volumes"
Mar 18 18:21:57.095156 master-0 kubenswrapper[30278]: I0318 18:21:57.095108 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32bccbd4-7005-4a2c-b90e-f7a249adabbd","Type":"ContainerStarted","Data":"ca4e8fa092fe689f2ee12915a52387bfbb0f734418b358a24233e73f4e3f1918"}
Mar 18 18:21:57.095156 master-0 kubenswrapper[30278]: I0318 18:21:57.095150 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32bccbd4-7005-4a2c-b90e-f7a249adabbd","Type":"ContainerStarted","Data":"a103cc03a0b2122fed2a2d5440fe02a76067880c490543da0f185135a49eee90"}
Mar 18 18:21:57.095259 master-0 kubenswrapper[30278]: I0318 18:21:57.095162 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32bccbd4-7005-4a2c-b90e-f7a249adabbd","Type":"ContainerStarted","Data":"ba234ef53dbfe89ae37d2356bfcf9052df90914b3989949f0fcc4caa4bdcec1c"}
Mar 18 18:21:57.129434 master-0 kubenswrapper[30278]: I0318 18:21:57.129350 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 18:21:57.162550 master-0 kubenswrapper[30278]: I0318 18:21:57.162386 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.162358636 podStartE2EDuration="2.162358636s" podCreationTimestamp="2026-03-18 18:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:21:57.119369658 +0000 UTC m=+1286.286554283" watchObservedRunningTime="2026-03-18 18:21:57.162358636 +0000 UTC m=+1286.329543231"
Mar 18 18:21:57.535357 master-0 kubenswrapper[30278]: I0318 18:21:57.535115 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0"
Mar 18 18:21:58.109318 master-0 kubenswrapper[30278]: I0318 18:21:58.108682 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d0e26ed5-e3a5-4852-b288-8185e1095c29","Type":"ContainerStarted","Data":"1b064717325cf15ca21db7427805ef704e2163a221c0eb3389c359423cd08ae9"}
Mar 18 18:21:58.109318 master-0 kubenswrapper[30278]: I0318 18:21:58.108750 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d0e26ed5-e3a5-4852-b288-8185e1095c29","Type":"ContainerStarted","Data":"831230a9f172b52df7d6aa7cca3d1ae806c8d6ba370ff5dece415ac2751831b9"}
Mar 18 18:21:58.148065 master-0 kubenswrapper[30278]: I0318 18:21:58.147449 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.147422398 podStartE2EDuration="2.147422398s" podCreationTimestamp="2026-03-18 18:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:21:58.14639548 +0000 UTC m=+1287.313580085" watchObservedRunningTime="2026-03-18 18:21:58.147422398 +0000 UTC m=+1287.314606993"
Mar 18 18:21:59.000854 master-0 kubenswrapper[30278]: I0318 18:21:58.999880 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0"
Mar 18 18:21:59.192361 master-0 kubenswrapper[30278]: I0318 18:21:59.189883 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0"
Mar 18 18:22:00.142565 master-0 kubenswrapper[30278]: I0318 18:22:00.142478 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0"
Mar 18 18:22:01.570883 master-0 kubenswrapper[30278]: I0318 18:22:01.570793 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 18 18:22:02.909113 master-0 kubenswrapper[30278]: I0318 18:22:02.909044 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Mar 18 18:22:05.606740 master-0 kubenswrapper[30278]: I0318 18:22:05.606676 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 18 18:22:05.606740 master-0 kubenswrapper[30278]: I0318 18:22:05.606755 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 18 18:22:06.079585 master-0 kubenswrapper[30278]: I0318 18:22:06.079514 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Mar 18 18:22:06.175636 master-0 kubenswrapper[30278]: I0318 18:22:06.175562 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7078ef0d-3907-46f8-8b84-3bc49fef827b-var-lib-ironic\") pod \"7078ef0d-3907-46f8-8b84-3bc49fef827b\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") "
Mar 18 18:22:06.175980 master-0 kubenswrapper[30278]: I0318 18:22:06.175689 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-config\") pod \"7078ef0d-3907-46f8-8b84-3bc49fef827b\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") "
Mar 18 18:22:06.175980 master-0 kubenswrapper[30278]: I0318 18:22:06.175730 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7078ef0d-3907-46f8-8b84-3bc49fef827b-etc-podinfo\") pod \"7078ef0d-3907-46f8-8b84-3bc49fef827b\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") "
Mar 18 18:22:06.175980 master-0 kubenswrapper[30278]: I0318 18:22:06.175825 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-scripts\") pod \"7078ef0d-3907-46f8-8b84-3bc49fef827b\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") "
Mar 18 18:22:06.175980 master-0 kubenswrapper[30278]: I0318 18:22:06.175895 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-combined-ca-bundle\") pod \"7078ef0d-3907-46f8-8b84-3bc49fef827b\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") "
Mar 18 18:22:06.176177 master-0 kubenswrapper[30278]: I0318 18:22:06.176033 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7579k\" (UniqueName: \"kubernetes.io/projected/7078ef0d-3907-46f8-8b84-3bc49fef827b-kube-api-access-7579k\") pod \"7078ef0d-3907-46f8-8b84-3bc49fef827b\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") "
Mar 18 18:22:06.176177 master-0 kubenswrapper[30278]: I0318 18:22:06.176126 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7078ef0d-3907-46f8-8b84-3bc49fef827b-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"7078ef0d-3907-46f8-8b84-3bc49fef827b\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") "
Mar 18 18:22:06.177325 master-0 kubenswrapper[30278]: I0318 18:22:06.177288 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7078ef0d-3907-46f8-8b84-3bc49fef827b-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "7078ef0d-3907-46f8-8b84-3bc49fef827b" (UID: "7078ef0d-3907-46f8-8b84-3bc49fef827b"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 18:22:06.179966 master-0 kubenswrapper[30278]: I0318 18:22:06.179932 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7078ef0d-3907-46f8-8b84-3bc49fef827b-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "7078ef0d-3907-46f8-8b84-3bc49fef827b" (UID: "7078ef0d-3907-46f8-8b84-3bc49fef827b"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 18:22:06.182877 master-0 kubenswrapper[30278]: I0318 18:22:06.182827 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-scripts" (OuterVolumeSpecName: "scripts") pod "7078ef0d-3907-46f8-8b84-3bc49fef827b" (UID: "7078ef0d-3907-46f8-8b84-3bc49fef827b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:22:06.187324 master-0 kubenswrapper[30278]: I0318 18:22:06.187206 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7078ef0d-3907-46f8-8b84-3bc49fef827b-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "7078ef0d-3907-46f8-8b84-3bc49fef827b" (UID: "7078ef0d-3907-46f8-8b84-3bc49fef827b"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 18 18:22:06.199837 master-0 kubenswrapper[30278]: I0318 18:22:06.199590 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7078ef0d-3907-46f8-8b84-3bc49fef827b-kube-api-access-7579k" (OuterVolumeSpecName: "kube-api-access-7579k") pod "7078ef0d-3907-46f8-8b84-3bc49fef827b" (UID: "7078ef0d-3907-46f8-8b84-3bc49fef827b"). InnerVolumeSpecName "kube-api-access-7579k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:22:06.253958 master-0 kubenswrapper[30278]: I0318 18:22:06.253888 30278 generic.go:334] "Generic (PLEG): container finished" podID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerID="bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c" exitCode=137
Mar 18 18:22:06.253958 master-0 kubenswrapper[30278]: I0318 18:22:06.253946 30278 generic.go:334] "Generic (PLEG): container finished" podID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerID="604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f" exitCode=137
Mar 18 18:22:06.255606 master-0 kubenswrapper[30278]: I0318 18:22:06.255574 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Mar 18 18:22:06.256856 master-0 kubenswrapper[30278]: I0318 18:22:06.256820 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7078ef0d-3907-46f8-8b84-3bc49fef827b","Type":"ContainerDied","Data":"bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c"}
Mar 18 18:22:06.256972 master-0 kubenswrapper[30278]: I0318 18:22:06.256865 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7078ef0d-3907-46f8-8b84-3bc49fef827b","Type":"ContainerDied","Data":"604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f"}
Mar 18 18:22:06.256972 master-0 kubenswrapper[30278]: I0318 18:22:06.256883 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"7078ef0d-3907-46f8-8b84-3bc49fef827b","Type":"ContainerDied","Data":"e6943adab2264c18d7bf621a7c9a46b407755cb260156eb5817d89119d84c918"}
Mar 18 18:22:06.256972 master-0 kubenswrapper[30278]: I0318 18:22:06.256904 30278 scope.go:117] "RemoveContainer" containerID="bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c"
Mar 18 18:22:06.296013 master-0 kubenswrapper[30278]: I0318 18:22:06.295947 30278 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/7078ef0d-3907-46f8-8b84-3bc49fef827b-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\""
Mar 18 18:22:06.296013 master-0 kubenswrapper[30278]: I0318 18:22:06.296002 30278 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/7078ef0d-3907-46f8-8b84-3bc49fef827b-var-lib-ironic\") on node \"master-0\" DevicePath \"\""
Mar 18 18:22:06.296013 master-0 kubenswrapper[30278]: I0318 18:22:06.296015 30278 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7078ef0d-3907-46f8-8b84-3bc49fef827b-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Mar 18 18:22:06.296013 master-0 kubenswrapper[30278]: I0318 18:22:06.296025 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-scripts\") on node \"master-0\" DevicePath \"\""
Mar 18 18:22:06.296672 master-0 kubenswrapper[30278]: I0318 18:22:06.296040 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7579k\" (UniqueName: \"kubernetes.io/projected/7078ef0d-3907-46f8-8b84-3bc49fef827b-kube-api-access-7579k\") on node \"master-0\" DevicePath \"\""
Mar 18 18:22:06.382796 master-0 kubenswrapper[30278]: I0318 18:22:06.382721 30278 scope.go:117] "RemoveContainer" containerID="211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b"
Mar 18 18:22:06.416066 master-0 kubenswrapper[30278]: I0318 18:22:06.415975 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-config" (OuterVolumeSpecName: "config") pod "7078ef0d-3907-46f8-8b84-3bc49fef827b" (UID: "7078ef0d-3907-46f8-8b84-3bc49fef827b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:22:06.499658 master-0 kubenswrapper[30278]: I0318 18:22:06.499589 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7078ef0d-3907-46f8-8b84-3bc49fef827b" (UID: "7078ef0d-3907-46f8-8b84-3bc49fef827b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:22:06.504005 master-0 kubenswrapper[30278]: W0318 18:22:06.502218 30278 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7078ef0d-3907-46f8-8b84-3bc49fef827b/volumes/kubernetes.io~secret/combined-ca-bundle
Mar 18 18:22:06.504005 master-0 kubenswrapper[30278]: I0318 18:22:06.502282 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7078ef0d-3907-46f8-8b84-3bc49fef827b" (UID: "7078ef0d-3907-46f8-8b84-3bc49fef827b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:22:06.504005 master-0 kubenswrapper[30278]: I0318 18:22:06.502380 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-combined-ca-bundle\") pod \"7078ef0d-3907-46f8-8b84-3bc49fef827b\" (UID: \"7078ef0d-3907-46f8-8b84-3bc49fef827b\") "
Mar 18 18:22:06.504005 master-0 kubenswrapper[30278]: I0318 18:22:06.503261 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-config\") on node \"master-0\" DevicePath \"\""
Mar 18 18:22:06.504005 master-0 kubenswrapper[30278]: I0318 18:22:06.503295 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7078ef0d-3907-46f8-8b84-3bc49fef827b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:22:06.569319 master-0 kubenswrapper[30278]: I0318 18:22:06.569221 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 18 18:22:06.570585 master-0 kubenswrapper[30278]: I0318 18:22:06.570547 30278 scope.go:117] "RemoveContainer" containerID="7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5"
Mar 18 18:22:06.615583 master-0 kubenswrapper[30278]: I0318 18:22:06.615515 30278 scope.go:117] "RemoveContainer" containerID="604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f"
Mar 18 18:22:06.615997 master-0 kubenswrapper[30278]: I0318 18:22:06.615823 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 18 18:22:06.617612 master-0 kubenswrapper[30278]: I0318 18:22:06.617579 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 18 18:22:06.652098 master-0 kubenswrapper[30278]: I0318 18:22:06.648362 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 18 18:22:06.662900 master-0 kubenswrapper[30278]: I0318 18:22:06.662827 30278 scope.go:117] "RemoveContainer" containerID="f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161"
Mar 18 18:22:06.692203 master-0 kubenswrapper[30278]: I0318 18:22:06.692122 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"]
Mar 18 18:22:06.693057 master-0 kubenswrapper[30278]: E0318 18:22:06.693026 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="ironic-inspector"
Mar 18 18:22:06.693057 master-0 kubenswrapper[30278]: I0318 18:22:06.693055 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="ironic-inspector"
Mar 18 18:22:06.693144 master-0 kubenswrapper[30278]: E0318 18:22:06.693083 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="ironic-python-agent-init"
Mar 18 18:22:06.693144 master-0 kubenswrapper[30278]: I0318 18:22:06.693093 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="ironic-python-agent-init"
Mar 18 18:22:06.693144 master-0 kubenswrapper[30278]: E0318 18:22:06.693120 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="inspector-dnsmasq"
Mar 18 18:22:06.693144 master-0 kubenswrapper[30278]: I0318 18:22:06.693130 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="inspector-dnsmasq"
Mar 18 18:22:06.693267 master-0 kubenswrapper[30278]: E0318 18:22:06.693152 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="ironic-inspector-httpd"
Mar 18 18:22:06.693267 master-0 kubenswrapper[30278]: I0318 18:22:06.693160 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="ironic-inspector-httpd"
Mar 18 18:22:06.693267 master-0 kubenswrapper[30278]: E0318 18:22:06.693184 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="inspector-httpboot"
Mar 18 18:22:06.693267 master-0 kubenswrapper[30278]: I0318 18:22:06.693190 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="inspector-httpboot"
Mar 18 18:22:06.693267 master-0 kubenswrapper[30278]: E0318 18:22:06.693202 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="ramdisk-logs"
Mar 18 18:22:06.693267 master-0 kubenswrapper[30278]: I0318 18:22:06.693208 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="ramdisk-logs"
Mar 18 18:22:06.693267 master-0 kubenswrapper[30278]: E0318 18:22:06.693227 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="inspector-pxe-init"
Mar 18 18:22:06.693267 master-0 kubenswrapper[30278]: I0318 18:22:06.693236 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="inspector-pxe-init"
Mar 18 18:22:06.693588 master-0 kubenswrapper[30278]: I0318 18:22:06.693562 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="inspector-httpboot"
Mar 18 18:22:06.693636 master-0 kubenswrapper[30278]: I0318 18:22:06.693622 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="ramdisk-logs"
Mar 18 18:22:06.693667 master-0 kubenswrapper[30278]: I0318 18:22:06.693640 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="ironic-inspector-httpd"
Mar 18 18:22:06.693667 master-0 kubenswrapper[30278]: I0318 18:22:06.693656 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="ironic-inspector"
Mar 18 18:22:06.693747 master-0 kubenswrapper[30278]: I0318 18:22:06.693692 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" containerName="inspector-dnsmasq"
Mar 18 18:22:06.695874 master-0 kubenswrapper[30278]: I0318 18:22:06.695535 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="32bccbd4-7005-4a2c-b90e-f7a249adabbd" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.11:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 18:22:06.695874 master-0 kubenswrapper[30278]: I0318 18:22:06.695635 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="32bccbd4-7005-4a2c-b90e-f7a249adabbd" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.11:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 18 18:22:06.701305 master-0 kubenswrapper[30278]: I0318 18:22:06.698027 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Mar 18 18:22:06.705830 master-0 kubenswrapper[30278]: I0318 18:22:06.705769 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Mar 18 18:22:06.705961 master-0 kubenswrapper[30278]: I0318 18:22:06.705899 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc"
Mar 18 18:22:06.706131 master-0 kubenswrapper[30278]: I0318 18:22:06.706107 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Mar 18 18:22:06.708372 master-0 kubenswrapper[30278]: I0318 18:22:06.708330 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport"
Mar 18 18:22:06.712238 master-0 kubenswrapper[30278]: I0318 18:22:06.712193 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Mar 18 18:22:06.727423 master-0 kubenswrapper[30278]: I0318 18:22:06.722252 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc"
Mar 18 18:22:06.733886 master-0 kubenswrapper[30278]: I0318 18:22:06.733833 30278 scope.go:117] "RemoveContainer" containerID="8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f"
Mar 18 18:22:06.791453 master-0 kubenswrapper[30278]: I0318 18:22:06.791371 30278 scope.go:117] "RemoveContainer" containerID="7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606"
Mar 18 18:22:06.814252 master-0 kubenswrapper[30278]: I0318 18:22:06.814182 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f9ada823-f818-42c2-874e-0cce432cdff3-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0"
Mar 18 18:22:06.814449 master-0 kubenswrapper[30278]: I0318 18:22:06.814267 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ada823-f818-42c2-874e-0cce432cdff3-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0"
Mar 18 18:22:06.814449 master-0 kubenswrapper[30278]: I0318 18:22:06.814358 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f9ada823-f818-42c2-874e-0cce432cdff3-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0"
Mar 18 18:22:06.814551 master-0 kubenswrapper[30278]: I0318 18:22:06.814528 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d656\" (UniqueName: \"kubernetes.io/projected/f9ada823-f818-42c2-874e-0cce432cdff3-kube-api-access-7d656\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0"
Mar 18 18:22:06.814588 master-0 kubenswrapper[30278]: I0318 18:22:06.814579 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ada823-f818-42c2-874e-0cce432cdff3-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0"
Mar 18 18:22:06.814649 master-0 kubenswrapper[30278]: I0318 18:22:06.814620 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ada823-f818-42c2-874e-0cce432cdff3-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0"
Mar 18 18:22:06.814708 master-0 kubenswrapper[30278]: I0318 18:22:06.814687 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f9ada823-f818-42c2-874e-0cce432cdff3-config\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0"
Mar 18 18:22:06.814805 master-0 kubenswrapper[30278]: I0318 18:22:06.814741 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f9ada823-f818-42c2-874e-0cce432cdff3-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0"
Mar 18 18:22:06.815011 master-0 kubenswrapper[30278]: I0318 18:22:06.814957 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ada823-f818-42c2-874e-0cce432cdff3-scripts\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0"
Mar 18 18:22:06.831172 master-0 kubenswrapper[30278]: I0318 18:22:06.831065 30278 scope.go:117] "RemoveContainer" containerID="bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c"
Mar 18 18:22:06.832570 master-0 kubenswrapper[30278]: E0318 18:22:06.832260 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c\": container with ID starting with bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c not found: ID does not exist" containerID="bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c"
Mar 18 18:22:06.832570 master-0 kubenswrapper[30278]: I0318
18:22:06.832470 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c"} err="failed to get container status \"bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c\": rpc error: code = NotFound desc = could not find container \"bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c\": container with ID starting with bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c not found: ID does not exist" Mar 18 18:22:06.832570 master-0 kubenswrapper[30278]: I0318 18:22:06.832509 30278 scope.go:117] "RemoveContainer" containerID="211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: E0318 18:22:06.834069 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b\": container with ID starting with 211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b not found: ID does not exist" containerID="211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.834134 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b"} err="failed to get container status \"211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b\": rpc error: code = NotFound desc = could not find container \"211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b\": container with ID starting with 211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b not found: ID does not exist" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.834176 30278 scope.go:117] "RemoveContainer" 
containerID="7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: E0318 18:22:06.834486 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5\": container with ID starting with 7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5 not found: ID does not exist" containerID="7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.834518 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5"} err="failed to get container status \"7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5\": rpc error: code = NotFound desc = could not find container \"7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5\": container with ID starting with 7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5 not found: ID does not exist" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.834540 30278 scope.go:117] "RemoveContainer" containerID="604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: E0318 18:22:06.834794 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f\": container with ID starting with 604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f not found: ID does not exist" containerID="604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.834841 30278 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f"} err="failed to get container status \"604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f\": rpc error: code = NotFound desc = could not find container \"604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f\": container with ID starting with 604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f not found: ID does not exist" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.834858 30278 scope.go:117] "RemoveContainer" containerID="f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: E0318 18:22:06.835723 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161\": container with ID starting with f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161 not found: ID does not exist" containerID="f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.835754 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161"} err="failed to get container status \"f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161\": rpc error: code = NotFound desc = could not find container \"f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161\": container with ID starting with f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161 not found: ID does not exist" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.835808 30278 scope.go:117] "RemoveContainer" containerID="8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: E0318 
18:22:06.836091 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f\": container with ID starting with 8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f not found: ID does not exist" containerID="8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.836146 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f"} err="failed to get container status \"8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f\": rpc error: code = NotFound desc = could not find container \"8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f\": container with ID starting with 8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f not found: ID does not exist" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.836165 30278 scope.go:117] "RemoveContainer" containerID="7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: E0318 18:22:06.836468 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606\": container with ID starting with 7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606 not found: ID does not exist" containerID="7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.836494 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606"} err="failed to get container status 
\"7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606\": rpc error: code = NotFound desc = could not find container \"7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606\": container with ID starting with 7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606 not found: ID does not exist" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.836514 30278 scope.go:117] "RemoveContainer" containerID="bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.836799 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c"} err="failed to get container status \"bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c\": rpc error: code = NotFound desc = could not find container \"bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c\": container with ID starting with bd9f9c27e748d4fe2b8cea8087426b1503cc90797b11f8479c9e6689974a8b2c not found: ID does not exist" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.836822 30278 scope.go:117] "RemoveContainer" containerID="211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.837020 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b"} err="failed to get container status \"211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b\": rpc error: code = NotFound desc = could not find container \"211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b\": container with ID starting with 211973a866a9f4f0943f7414ddc273f42c2d700b2d27e3ad94e39c601ded9a4b not found: ID does not exist" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 
18:22:06.837043 30278 scope.go:117] "RemoveContainer" containerID="7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.837257 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5"} err="failed to get container status \"7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5\": rpc error: code = NotFound desc = could not find container \"7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5\": container with ID starting with 7adfddccafaff41ad8fa3b1a8be9a6220f1d65a40b64b1d5166a4fff31e028f5 not found: ID does not exist" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.837315 30278 scope.go:117] "RemoveContainer" containerID="604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.837516 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f"} err="failed to get container status \"604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f\": rpc error: code = NotFound desc = could not find container \"604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f\": container with ID starting with 604d2f7a05c73e0031d1c00196d94c48d4fa632cd8bcc3d5511a6d7614dbe44f not found: ID does not exist" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.837539 30278 scope.go:117] "RemoveContainer" containerID="f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.837723 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161"} err="failed to get 
container status \"f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161\": rpc error: code = NotFound desc = could not find container \"f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161\": container with ID starting with f2737b5ee009a8383c72bcf528e2f27ef4d61ab34009e83f59848c1ae2d54161 not found: ID does not exist" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.837746 30278 scope.go:117] "RemoveContainer" containerID="8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.837916 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f"} err="failed to get container status \"8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f\": rpc error: code = NotFound desc = could not find container \"8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f\": container with ID starting with 8acfa6bb4182b4571c7b22674fa756975397aebb5451cdc722c3813a21ccc76f not found: ID does not exist" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.837934 30278 scope.go:117] "RemoveContainer" containerID="7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606" Mar 18 18:22:06.839293 master-0 kubenswrapper[30278]: I0318 18:22:06.838160 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606"} err="failed to get container status \"7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606\": rpc error: code = NotFound desc = could not find container \"7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606\": container with ID starting with 7a716da329c5041336322c24359b65f93936fad52ca55f0aa173e6423ea46606 not found: ID does not exist" Mar 18 18:22:06.918078 master-0 kubenswrapper[30278]: 
I0318 18:22:06.917783 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f9ada823-f818-42c2-874e-0cce432cdff3-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.918078 master-0 kubenswrapper[30278]: I0318 18:22:06.917866 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ada823-f818-42c2-874e-0cce432cdff3-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.918078 master-0 kubenswrapper[30278]: I0318 18:22:06.917939 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f9ada823-f818-42c2-874e-0cce432cdff3-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.918078 master-0 kubenswrapper[30278]: I0318 18:22:06.917975 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d656\" (UniqueName: \"kubernetes.io/projected/f9ada823-f818-42c2-874e-0cce432cdff3-kube-api-access-7d656\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.918078 master-0 kubenswrapper[30278]: I0318 18:22:06.918017 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ada823-f818-42c2-874e-0cce432cdff3-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.918616 master-0 kubenswrapper[30278]: I0318 18:22:06.918367 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ada823-f818-42c2-874e-0cce432cdff3-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.919209 master-0 kubenswrapper[30278]: I0318 18:22:06.918660 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f9ada823-f818-42c2-874e-0cce432cdff3-config\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.919209 master-0 kubenswrapper[30278]: I0318 18:22:06.918847 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f9ada823-f818-42c2-874e-0cce432cdff3-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.919209 master-0 kubenswrapper[30278]: I0318 18:22:06.918892 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ada823-f818-42c2-874e-0cce432cdff3-scripts\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.925932 master-0 kubenswrapper[30278]: I0318 18:22:06.923405 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ada823-f818-42c2-874e-0cce432cdff3-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.925932 master-0 kubenswrapper[30278]: I0318 18:22:06.924134 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/f9ada823-f818-42c2-874e-0cce432cdff3-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.925932 master-0 kubenswrapper[30278]: I0318 18:22:06.924476 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/f9ada823-f818-42c2-874e-0cce432cdff3-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.925932 master-0 kubenswrapper[30278]: I0318 18:22:06.925867 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ada823-f818-42c2-874e-0cce432cdff3-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.928327 master-0 kubenswrapper[30278]: I0318 18:22:06.927448 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9ada823-f818-42c2-874e-0cce432cdff3-scripts\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.934725 master-0 kubenswrapper[30278]: I0318 18:22:06.934485 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f9ada823-f818-42c2-874e-0cce432cdff3-config\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.939686 master-0 kubenswrapper[30278]: I0318 18:22:06.939050 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f9ada823-f818-42c2-874e-0cce432cdff3-etc-podinfo\") 
pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.951329 master-0 kubenswrapper[30278]: I0318 18:22:06.950856 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ada823-f818-42c2-874e-0cce432cdff3-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:06.958959 master-0 kubenswrapper[30278]: I0318 18:22:06.958647 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d656\" (UniqueName: \"kubernetes.io/projected/f9ada823-f818-42c2-874e-0cce432cdff3-kube-api-access-7d656\") pod \"ironic-inspector-0\" (UID: \"f9ada823-f818-42c2-874e-0cce432cdff3\") " pod="openstack/ironic-inspector-0" Mar 18 18:22:07.083377 master-0 kubenswrapper[30278]: I0318 18:22:07.080303 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Mar 18 18:22:07.091721 master-0 kubenswrapper[30278]: I0318 18:22:07.091644 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7078ef0d-3907-46f8-8b84-3bc49fef827b" path="/var/lib/kubelet/pods/7078ef0d-3907-46f8-8b84-3bc49fef827b/volumes" Mar 18 18:22:07.400827 master-0 kubenswrapper[30278]: I0318 18:22:07.400645 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 18 18:22:07.851562 master-0 kubenswrapper[30278]: I0318 18:22:07.851455 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Mar 18 18:22:08.291598 master-0 kubenswrapper[30278]: W0318 18:22:08.291494 30278 watcher.go:93] Error while processing event ("/sys/fs/cgroup/user.slice/user-0.slice/session-c39.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/user.slice/user-0.slice/session-c39.scope: no such file or directory Mar 18 18:22:08.291780 master-0 kubenswrapper[30278]: W0318 18:22:08.291677 30278 watcher.go:93] Error while processing event ("/sys/fs/cgroup/user.slice/user-0.slice/session-c40.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/user.slice/user-0.slice/session-c40.scope: no such file or directory Mar 18 18:22:08.291780 master-0 kubenswrapper[30278]: W0318 18:22:08.291702 30278 watcher.go:93] Error while processing event ("/sys/fs/cgroup/user.slice/user-0.slice/session-c41.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/user.slice/user-0.slice/session-c41.scope: no such file or directory Mar 18 18:22:08.294588 master-0 kubenswrapper[30278]: W0318 18:22:08.294504 30278 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded10fd30_ed39_4cda_8252_8f4db21fbfca.slice/crio-conmon-a8de5e0b50e2e23ba53aadd31f170f4aa1f4c1a46484ddc6fb05b0a83a3b16b0.scope": 0x40000100 == 
IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded10fd30_ed39_4cda_8252_8f4db21fbfca.slice/crio-conmon-a8de5e0b50e2e23ba53aadd31f170f4aa1f4c1a46484ddc6fb05b0a83a3b16b0.scope: no such file or directory Mar 18 18:22:08.294697 master-0 kubenswrapper[30278]: W0318 18:22:08.294595 30278 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod892489cb_419b_40b3_8e27_04302daea69c.slice/crio-conmon-18eaf3f1fd23baa4d1c58e61f474399c9e76a553dff9f9bd5fb4ccbeadab2200.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod892489cb_419b_40b3_8e27_04302daea69c.slice/crio-conmon-18eaf3f1fd23baa4d1c58e61f474399c9e76a553dff9f9bd5fb4ccbeadab2200.scope: no such file or directory Mar 18 18:22:08.294697 master-0 kubenswrapper[30278]: W0318 18:22:08.294621 30278 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83526059_628b_4d6e_aa9d_92e1e53765c8.slice/crio-conmon-201742db36acf735aff4586ae556e49b54693a7a5e9c760626351860047a0eb8.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83526059_628b_4d6e_aa9d_92e1e53765c8.slice/crio-conmon-201742db36acf735aff4586ae556e49b54693a7a5e9c760626351860047a0eb8.scope: no such file or directory Mar 18 18:22:08.294697 master-0 kubenswrapper[30278]: W0318 18:22:08.294652 30278 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73e9d791_73fa_47f6_bf4e_01119900b9d9.slice/crio-dfa1777826c02e3dfba180af6230ee779439df18da73b9f81281de1916d97082.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73e9d791_73fa_47f6_bf4e_01119900b9d9.slice/crio-dfa1777826c02e3dfba180af6230ee779439df18da73b9f81281de1916d97082.scope: no such file or directory Mar 18 18:22:08.294697 master-0 kubenswrapper[30278]: W0318 18:22:08.294676 30278 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded10fd30_ed39_4cda_8252_8f4db21fbfca.slice/crio-a8de5e0b50e2e23ba53aadd31f170f4aa1f4c1a46484ddc6fb05b0a83a3b16b0.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded10fd30_ed39_4cda_8252_8f4db21fbfca.slice/crio-a8de5e0b50e2e23ba53aadd31f170f4aa1f4c1a46484ddc6fb05b0a83a3b16b0.scope: no such file or directory Mar 18 18:22:08.294878 master-0 kubenswrapper[30278]: W0318 18:22:08.294706 30278 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83526059_628b_4d6e_aa9d_92e1e53765c8.slice/crio-201742db36acf735aff4586ae556e49b54693a7a5e9c760626351860047a0eb8.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83526059_628b_4d6e_aa9d_92e1e53765c8.slice/crio-201742db36acf735aff4586ae556e49b54693a7a5e9c760626351860047a0eb8.scope: no such file or directory Mar 18 18:22:08.294878 master-0 kubenswrapper[30278]: W0318 18:22:08.294732 30278 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod892489cb_419b_40b3_8e27_04302daea69c.slice/crio-18eaf3f1fd23baa4d1c58e61f474399c9e76a553dff9f9bd5fb4ccbeadab2200.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod892489cb_419b_40b3_8e27_04302daea69c.slice/crio-18eaf3f1fd23baa4d1c58e61f474399c9e76a553dff9f9bd5fb4ccbeadab2200.scope: no such file or directory Mar 18 18:22:08.294878 master-0 kubenswrapper[30278]: W0318 18:22:08.294750 30278 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded10fd30_ed39_4cda_8252_8f4db21fbfca.slice/crio-conmon-21cc803c0cc2cc33976caf7d87c2df977085d7b39c27e4d7d0ce8dfe84c57b66.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded10fd30_ed39_4cda_8252_8f4db21fbfca.slice/crio-conmon-21cc803c0cc2cc33976caf7d87c2df977085d7b39c27e4d7d0ce8dfe84c57b66.scope: no such file or directory Mar 18 18:22:08.299264 master-0 kubenswrapper[30278]: W0318 18:22:08.298636 30278 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded10fd30_ed39_4cda_8252_8f4db21fbfca.slice/crio-21cc803c0cc2cc33976caf7d87c2df977085d7b39c27e4d7d0ce8dfe84c57b66.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded10fd30_ed39_4cda_8252_8f4db21fbfca.slice/crio-21cc803c0cc2cc33976caf7d87c2df977085d7b39c27e4d7d0ce8dfe84c57b66.scope: no such file or directory Mar 18 18:22:08.322222 master-0 kubenswrapper[30278]: W0318 18:22:08.322081 30278 watcher.go:93] Error while processing event ("/sys/fs/cgroup/user.slice/user-0.slice/user-runtime-dir@0.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/user.slice/user-0.slice/user-runtime-dir@0.service: no such file or directory Mar 18 18:22:08.352018 master-0 kubenswrapper[30278]: I0318 18:22:08.351908 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" 
event={"ID":"f9ada823-f818-42c2-874e-0cce432cdff3","Type":"ContainerStarted","Data":"7333e20fe5ec01fdf4fbdbc5ef7f81cf5ab46453ac44e3aa6cc3ae31d7eac2f4"} Mar 18 18:22:08.352018 master-0 kubenswrapper[30278]: I0318 18:22:08.352000 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f9ada823-f818-42c2-874e-0cce432cdff3","Type":"ContainerStarted","Data":"45f477432fb3244683da8cf6cef7c50d183072b0d62b658a4ad8470048e670fb"} Mar 18 18:22:08.363774 master-0 kubenswrapper[30278]: I0318 18:22:08.363645 30278 generic.go:334] "Generic (PLEG): container finished" podID="73e9d791-73fa-47f6-bf4e-01119900b9d9" containerID="dfa1777826c02e3dfba180af6230ee779439df18da73b9f81281de1916d97082" exitCode=137 Mar 18 18:22:08.364108 master-0 kubenswrapper[30278]: I0318 18:22:08.363793 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"73e9d791-73fa-47f6-bf4e-01119900b9d9","Type":"ContainerDied","Data":"dfa1777826c02e3dfba180af6230ee779439df18da73b9f81281de1916d97082"} Mar 18 18:22:08.374293 master-0 kubenswrapper[30278]: I0318 18:22:08.374120 30278 generic.go:334] "Generic (PLEG): container finished" podID="83526059-628b-4d6e-aa9d-92e1e53765c8" containerID="1e1c9e26d09fd8c2fae47dae916d281103a81975a80ab8d5a50a1317d996a367" exitCode=137 Mar 18 18:22:08.375183 master-0 kubenswrapper[30278]: I0318 18:22:08.375111 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"83526059-628b-4d6e-aa9d-92e1e53765c8","Type":"ContainerDied","Data":"1e1c9e26d09fd8c2fae47dae916d281103a81975a80ab8d5a50a1317d996a367"} Mar 18 18:22:08.449848 master-0 kubenswrapper[30278]: W0318 18:22:08.449696 30278 helpers.go:245] readString: Failed to read 
"/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73e9d791_73fa_47f6_bf4e_01119900b9d9.slice/crio-conmon-dfa1777826c02e3dfba180af6230ee779439df18da73b9f81281de1916d97082.scope/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73e9d791_73fa_47f6_bf4e_01119900b9d9.slice/crio-conmon-dfa1777826c02e3dfba180af6230ee779439df18da73b9f81281de1916d97082.scope/cpuset.cpus.effective: no such device Mar 18 18:22:08.621855 master-0 kubenswrapper[30278]: E0318 18:22:08.597303 30278 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73e9d791_73fa_47f6_bf4e_01119900b9d9.slice/crio-conmon-dfa1777826c02e3dfba180af6230ee779439df18da73b9f81281de1916d97082.scope\": RecentStats: unable to find data in memory cache]" Mar 18 18:22:09.070077 master-0 kubenswrapper[30278]: I0318 18:22:09.069983 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 18:22:09.079011 master-0 kubenswrapper[30278]: I0318 18:22:09.078958 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:09.192434 master-0 kubenswrapper[30278]: I0318 18:22:09.191545 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e9d791-73fa-47f6-bf4e-01119900b9d9-combined-ca-bundle\") pod \"73e9d791-73fa-47f6-bf4e-01119900b9d9\" (UID: \"73e9d791-73fa-47f6-bf4e-01119900b9d9\") " Mar 18 18:22:09.192434 master-0 kubenswrapper[30278]: I0318 18:22:09.191657 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pg2z\" (UniqueName: \"kubernetes.io/projected/73e9d791-73fa-47f6-bf4e-01119900b9d9-kube-api-access-5pg2z\") pod \"73e9d791-73fa-47f6-bf4e-01119900b9d9\" (UID: \"73e9d791-73fa-47f6-bf4e-01119900b9d9\") " Mar 18 18:22:09.192434 master-0 kubenswrapper[30278]: I0318 18:22:09.191720 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch64j\" (UniqueName: \"kubernetes.io/projected/83526059-628b-4d6e-aa9d-92e1e53765c8-kube-api-access-ch64j\") pod \"83526059-628b-4d6e-aa9d-92e1e53765c8\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " Mar 18 18:22:09.192434 master-0 kubenswrapper[30278]: I0318 18:22:09.191757 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83526059-628b-4d6e-aa9d-92e1e53765c8-combined-ca-bundle\") pod \"83526059-628b-4d6e-aa9d-92e1e53765c8\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " Mar 18 18:22:09.192434 master-0 kubenswrapper[30278]: I0318 18:22:09.191834 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e9d791-73fa-47f6-bf4e-01119900b9d9-config-data\") pod \"73e9d791-73fa-47f6-bf4e-01119900b9d9\" (UID: \"73e9d791-73fa-47f6-bf4e-01119900b9d9\") " Mar 18 18:22:09.192434 master-0 kubenswrapper[30278]: I0318 
18:22:09.191885 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/83526059-628b-4d6e-aa9d-92e1e53765c8-logs\") pod \"83526059-628b-4d6e-aa9d-92e1e53765c8\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " Mar 18 18:22:09.192434 master-0 kubenswrapper[30278]: I0318 18:22:09.191987 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83526059-628b-4d6e-aa9d-92e1e53765c8-config-data\") pod \"83526059-628b-4d6e-aa9d-92e1e53765c8\" (UID: \"83526059-628b-4d6e-aa9d-92e1e53765c8\") " Mar 18 18:22:09.194084 master-0 kubenswrapper[30278]: I0318 18:22:09.194030 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83526059-628b-4d6e-aa9d-92e1e53765c8-logs" (OuterVolumeSpecName: "logs") pod "83526059-628b-4d6e-aa9d-92e1e53765c8" (UID: "83526059-628b-4d6e-aa9d-92e1e53765c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:22:09.199152 master-0 kubenswrapper[30278]: I0318 18:22:09.199116 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e9d791-73fa-47f6-bf4e-01119900b9d9-kube-api-access-5pg2z" (OuterVolumeSpecName: "kube-api-access-5pg2z") pod "73e9d791-73fa-47f6-bf4e-01119900b9d9" (UID: "73e9d791-73fa-47f6-bf4e-01119900b9d9"). InnerVolumeSpecName "kube-api-access-5pg2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:22:09.199875 master-0 kubenswrapper[30278]: I0318 18:22:09.199822 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83526059-628b-4d6e-aa9d-92e1e53765c8-kube-api-access-ch64j" (OuterVolumeSpecName: "kube-api-access-ch64j") pod "83526059-628b-4d6e-aa9d-92e1e53765c8" (UID: "83526059-628b-4d6e-aa9d-92e1e53765c8"). InnerVolumeSpecName "kube-api-access-ch64j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:22:09.230127 master-0 kubenswrapper[30278]: I0318 18:22:09.230036 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73e9d791-73fa-47f6-bf4e-01119900b9d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73e9d791-73fa-47f6-bf4e-01119900b9d9" (UID: "73e9d791-73fa-47f6-bf4e-01119900b9d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:09.234380 master-0 kubenswrapper[30278]: I0318 18:22:09.234301 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73e9d791-73fa-47f6-bf4e-01119900b9d9-config-data" (OuterVolumeSpecName: "config-data") pod "73e9d791-73fa-47f6-bf4e-01119900b9d9" (UID: "73e9d791-73fa-47f6-bf4e-01119900b9d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:09.240124 master-0 kubenswrapper[30278]: I0318 18:22:09.240048 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83526059-628b-4d6e-aa9d-92e1e53765c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83526059-628b-4d6e-aa9d-92e1e53765c8" (UID: "83526059-628b-4d6e-aa9d-92e1e53765c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:09.243400 master-0 kubenswrapper[30278]: I0318 18:22:09.242667 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83526059-628b-4d6e-aa9d-92e1e53765c8-config-data" (OuterVolumeSpecName: "config-data") pod "83526059-628b-4d6e-aa9d-92e1e53765c8" (UID: "83526059-628b-4d6e-aa9d-92e1e53765c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:09.296217 master-0 kubenswrapper[30278]: I0318 18:22:09.296147 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83526059-628b-4d6e-aa9d-92e1e53765c8-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:09.296586 master-0 kubenswrapper[30278]: I0318 18:22:09.296561 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73e9d791-73fa-47f6-bf4e-01119900b9d9-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:09.296786 master-0 kubenswrapper[30278]: I0318 18:22:09.296728 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pg2z\" (UniqueName: \"kubernetes.io/projected/73e9d791-73fa-47f6-bf4e-01119900b9d9-kube-api-access-5pg2z\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:09.297187 master-0 kubenswrapper[30278]: I0318 18:22:09.297163 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch64j\" (UniqueName: \"kubernetes.io/projected/83526059-628b-4d6e-aa9d-92e1e53765c8-kube-api-access-ch64j\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:09.297517 master-0 kubenswrapper[30278]: I0318 18:22:09.297495 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83526059-628b-4d6e-aa9d-92e1e53765c8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:09.297658 master-0 kubenswrapper[30278]: I0318 18:22:09.297635 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73e9d791-73fa-47f6-bf4e-01119900b9d9-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:09.297768 master-0 kubenswrapper[30278]: I0318 18:22:09.297752 30278 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/83526059-628b-4d6e-aa9d-92e1e53765c8-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:09.395951 master-0 kubenswrapper[30278]: I0318 18:22:09.395858 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"83526059-628b-4d6e-aa9d-92e1e53765c8","Type":"ContainerDied","Data":"97547f21356d68656a37d868d826d9b6357d88ea38c547aa5db4a1b642affb76"} Mar 18 18:22:09.395951 master-0 kubenswrapper[30278]: I0318 18:22:09.395959 30278 scope.go:117] "RemoveContainer" containerID="1e1c9e26d09fd8c2fae47dae916d281103a81975a80ab8d5a50a1317d996a367" Mar 18 18:22:09.396397 master-0 kubenswrapper[30278]: I0318 18:22:09.396221 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 18:22:09.402923 master-0 kubenswrapper[30278]: I0318 18:22:09.401474 30278 generic.go:334] "Generic (PLEG): container finished" podID="f9ada823-f818-42c2-874e-0cce432cdff3" containerID="7333e20fe5ec01fdf4fbdbc5ef7f81cf5ab46453ac44e3aa6cc3ae31d7eac2f4" exitCode=0 Mar 18 18:22:09.402923 master-0 kubenswrapper[30278]: I0318 18:22:09.401602 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f9ada823-f818-42c2-874e-0cce432cdff3","Type":"ContainerDied","Data":"7333e20fe5ec01fdf4fbdbc5ef7f81cf5ab46453ac44e3aa6cc3ae31d7eac2f4"} Mar 18 18:22:09.402923 master-0 kubenswrapper[30278]: I0318 18:22:09.401653 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f9ada823-f818-42c2-874e-0cce432cdff3","Type":"ContainerStarted","Data":"d7b19dd817450d21d8cb13466fe1635e9a6f3dbefdc2d1e052165a083d97a9f6"} Mar 18 18:22:09.407746 master-0 kubenswrapper[30278]: I0318 18:22:09.406682 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"73e9d791-73fa-47f6-bf4e-01119900b9d9","Type":"ContainerDied","Data":"fcfc5f736531b3d522f5618892bc14520cd3843dfe78dc427978d0660f1d4333"} Mar 18 18:22:09.407746 master-0 kubenswrapper[30278]: I0318 18:22:09.406722 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:09.436059 master-0 kubenswrapper[30278]: I0318 18:22:09.435608 30278 scope.go:117] "RemoveContainer" containerID="201742db36acf735aff4586ae556e49b54693a7a5e9c760626351860047a0eb8" Mar 18 18:22:09.470138 master-0 kubenswrapper[30278]: I0318 18:22:09.470071 30278 scope.go:117] "RemoveContainer" containerID="dfa1777826c02e3dfba180af6230ee779439df18da73b9f81281de1916d97082" Mar 18 18:22:09.528630 master-0 kubenswrapper[30278]: I0318 18:22:09.528358 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 18:22:09.615109 master-0 kubenswrapper[30278]: I0318 18:22:09.615018 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 18:22:09.679111 master-0 kubenswrapper[30278]: I0318 18:22:09.673384 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 18 18:22:09.679111 master-0 kubenswrapper[30278]: E0318 18:22:09.674211 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e9d791-73fa-47f6-bf4e-01119900b9d9" containerName="nova-cell1-novncproxy-novncproxy" Mar 18 18:22:09.679111 master-0 kubenswrapper[30278]: I0318 18:22:09.674227 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e9d791-73fa-47f6-bf4e-01119900b9d9" containerName="nova-cell1-novncproxy-novncproxy" Mar 18 18:22:09.679111 master-0 kubenswrapper[30278]: E0318 18:22:09.674243 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83526059-628b-4d6e-aa9d-92e1e53765c8" containerName="nova-metadata-metadata" Mar 18 18:22:09.679111 master-0 kubenswrapper[30278]: I0318 18:22:09.674250 30278 
state_mem.go:107] "Deleted CPUSet assignment" podUID="83526059-628b-4d6e-aa9d-92e1e53765c8" containerName="nova-metadata-metadata" Mar 18 18:22:09.679111 master-0 kubenswrapper[30278]: E0318 18:22:09.674302 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83526059-628b-4d6e-aa9d-92e1e53765c8" containerName="nova-metadata-log" Mar 18 18:22:09.679111 master-0 kubenswrapper[30278]: I0318 18:22:09.674314 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="83526059-628b-4d6e-aa9d-92e1e53765c8" containerName="nova-metadata-log" Mar 18 18:22:09.679111 master-0 kubenswrapper[30278]: I0318 18:22:09.675912 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="83526059-628b-4d6e-aa9d-92e1e53765c8" containerName="nova-metadata-metadata" Mar 18 18:22:09.679111 master-0 kubenswrapper[30278]: I0318 18:22:09.675963 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="83526059-628b-4d6e-aa9d-92e1e53765c8" containerName="nova-metadata-log" Mar 18 18:22:09.679111 master-0 kubenswrapper[30278]: I0318 18:22:09.675988 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e9d791-73fa-47f6-bf4e-01119900b9d9" containerName="nova-cell1-novncproxy-novncproxy" Mar 18 18:22:09.679111 master-0 kubenswrapper[30278]: I0318 18:22:09.678549 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 18:22:09.686851 master-0 kubenswrapper[30278]: I0318 18:22:09.683203 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 18 18:22:09.686851 master-0 kubenswrapper[30278]: I0318 18:22:09.686495 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 18 18:22:09.695254 master-0 kubenswrapper[30278]: I0318 18:22:09.689872 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 18:22:09.704328 master-0 kubenswrapper[30278]: I0318 18:22:09.704260 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 18:22:09.715610 master-0 kubenswrapper[30278]: I0318 18:22:09.715568 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 18:22:09.718431 master-0 kubenswrapper[30278]: I0318 18:22:09.718334 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-config-data\") pod \"nova-metadata-0\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") " pod="openstack/nova-metadata-0" Mar 18 18:22:09.718589 master-0 kubenswrapper[30278]: I0318 18:22:09.718571 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/847dbfce-3773-4d6d-af26-16040d410d2c-logs\") pod \"nova-metadata-0\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") " pod="openstack/nova-metadata-0" Mar 18 18:22:09.718706 master-0 kubenswrapper[30278]: I0318 18:22:09.718691 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2htp\" (UniqueName: 
\"kubernetes.io/projected/847dbfce-3773-4d6d-af26-16040d410d2c-kube-api-access-d2htp\") pod \"nova-metadata-0\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") " pod="openstack/nova-metadata-0" Mar 18 18:22:09.718801 master-0 kubenswrapper[30278]: I0318 18:22:09.718788 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") " pod="openstack/nova-metadata-0" Mar 18 18:22:09.719031 master-0 kubenswrapper[30278]: I0318 18:22:09.719011 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") " pod="openstack/nova-metadata-0" Mar 18 18:22:09.726740 master-0 kubenswrapper[30278]: I0318 18:22:09.726690 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 18:22:09.729398 master-0 kubenswrapper[30278]: I0318 18:22:09.729359 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:09.732953 master-0 kubenswrapper[30278]: I0318 18:22:09.732878 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 18 18:22:09.733307 master-0 kubenswrapper[30278]: I0318 18:22:09.733262 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 18 18:22:09.733723 master-0 kubenswrapper[30278]: I0318 18:22:09.733680 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 18 18:22:09.740319 master-0 kubenswrapper[30278]: I0318 18:22:09.740240 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 18:22:09.825699 master-0 kubenswrapper[30278]: I0318 18:22:09.824627 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-config-data\") pod \"nova-metadata-0\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") " pod="openstack/nova-metadata-0" Mar 18 18:22:09.825699 master-0 kubenswrapper[30278]: I0318 18:22:09.824715 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2htp\" (UniqueName: \"kubernetes.io/projected/847dbfce-3773-4d6d-af26-16040d410d2c-kube-api-access-d2htp\") pod \"nova-metadata-0\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") " pod="openstack/nova-metadata-0" Mar 18 18:22:09.825699 master-0 kubenswrapper[30278]: I0318 18:22:09.824741 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/847dbfce-3773-4d6d-af26-16040d410d2c-logs\") pod \"nova-metadata-0\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") " pod="openstack/nova-metadata-0" Mar 18 18:22:09.825699 master-0 kubenswrapper[30278]: I0318 
18:22:09.824764 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") " pod="openstack/nova-metadata-0" Mar 18 18:22:09.825699 master-0 kubenswrapper[30278]: I0318 18:22:09.824845 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") " pod="openstack/nova-metadata-0" Mar 18 18:22:09.826640 master-0 kubenswrapper[30278]: I0318 18:22:09.826399 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/847dbfce-3773-4d6d-af26-16040d410d2c-logs\") pod \"nova-metadata-0\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") " pod="openstack/nova-metadata-0" Mar 18 18:22:09.830181 master-0 kubenswrapper[30278]: I0318 18:22:09.830116 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") " pod="openstack/nova-metadata-0" Mar 18 18:22:09.832328 master-0 kubenswrapper[30278]: I0318 18:22:09.832289 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") " pod="openstack/nova-metadata-0" Mar 18 18:22:09.834730 master-0 kubenswrapper[30278]: I0318 18:22:09.834630 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-config-data\") pod \"nova-metadata-0\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") " pod="openstack/nova-metadata-0" Mar 18 18:22:09.840470 master-0 kubenswrapper[30278]: I0318 18:22:09.840415 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2htp\" (UniqueName: \"kubernetes.io/projected/847dbfce-3773-4d6d-af26-16040d410d2c-kube-api-access-d2htp\") pod \"nova-metadata-0\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") " pod="openstack/nova-metadata-0" Mar 18 18:22:09.927892 master-0 kubenswrapper[30278]: I0318 18:22:09.927786 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqf2q\" (UniqueName: \"kubernetes.io/projected/f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e-kube-api-access-vqf2q\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:09.928210 master-0 kubenswrapper[30278]: I0318 18:22:09.928061 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:09.928210 master-0 kubenswrapper[30278]: I0318 18:22:09.928155 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:09.928440 master-0 kubenswrapper[30278]: I0318 18:22:09.928402 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:09.928724 master-0 kubenswrapper[30278]: I0318 18:22:09.928698 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:10.015089 master-0 kubenswrapper[30278]: I0318 18:22:10.014856 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 18:22:10.032010 master-0 kubenswrapper[30278]: I0318 18:22:10.031926 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:10.032288 master-0 kubenswrapper[30278]: I0318 18:22:10.032212 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:10.032288 master-0 kubenswrapper[30278]: I0318 18:22:10.032257 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqf2q\" (UniqueName: \"kubernetes.io/projected/f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e-kube-api-access-vqf2q\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:10.032530 master-0 kubenswrapper[30278]: I0318 18:22:10.032479 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:10.035166 master-0 kubenswrapper[30278]: I0318 18:22:10.035065 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:10.035583 master-0 kubenswrapper[30278]: I0318 18:22:10.035251 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:10.043567 master-0 kubenswrapper[30278]: I0318 18:22:10.038044 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:10.043567 master-0 kubenswrapper[30278]: I0318 18:22:10.038704 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:10.048448 master-0 kubenswrapper[30278]: I0318 18:22:10.048261 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:10.063526 master-0 kubenswrapper[30278]: I0318 18:22:10.063473 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqf2q\" (UniqueName: \"kubernetes.io/projected/f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e-kube-api-access-vqf2q\") pod \"nova-cell1-novncproxy-0\" (UID: \"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e\") " pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:10.360342 master-0 kubenswrapper[30278]: I0318 18:22:10.359735 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:10.427303 master-0 kubenswrapper[30278]: I0318 18:22:10.427223 30278 generic.go:334] "Generic (PLEG): container finished" podID="f9ada823-f818-42c2-874e-0cce432cdff3" containerID="d7b19dd817450d21d8cb13466fe1635e9a6f3dbefdc2d1e052165a083d97a9f6" exitCode=0 Mar 18 18:22:10.427708 master-0 kubenswrapper[30278]: I0318 18:22:10.427630 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f9ada823-f818-42c2-874e-0cce432cdff3","Type":"ContainerDied","Data":"d7b19dd817450d21d8cb13466fe1635e9a6f3dbefdc2d1e052165a083d97a9f6"} Mar 18 18:22:10.575629 master-0 kubenswrapper[30278]: I0318 18:22:10.571536 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 18:22:11.021967 master-0 kubenswrapper[30278]: I0318 18:22:11.021907 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 18 
18:22:11.079899 master-0 kubenswrapper[30278]: I0318 18:22:11.078654 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73e9d791-73fa-47f6-bf4e-01119900b9d9" path="/var/lib/kubelet/pods/73e9d791-73fa-47f6-bf4e-01119900b9d9/volumes" Mar 18 18:22:11.079899 master-0 kubenswrapper[30278]: I0318 18:22:11.079402 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83526059-628b-4d6e-aa9d-92e1e53765c8" path="/var/lib/kubelet/pods/83526059-628b-4d6e-aa9d-92e1e53765c8/volumes" Mar 18 18:22:11.454036 master-0 kubenswrapper[30278]: I0318 18:22:11.453942 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f9ada823-f818-42c2-874e-0cce432cdff3","Type":"ContainerStarted","Data":"c853555552810bca1b91bc72b24afbe7bc5cb5a8880546316f383df1d2f05118"} Mar 18 18:22:11.458835 master-0 kubenswrapper[30278]: I0318 18:22:11.458761 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e","Type":"ContainerStarted","Data":"ee1bad2653a72293f380a79a23a1a7e217ef06840d235f5719ea1e864661e590"} Mar 18 18:22:11.458835 master-0 kubenswrapper[30278]: I0318 18:22:11.458834 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e","Type":"ContainerStarted","Data":"cfb848b3b77adc495479a37ceb65d179916fd0c66bf9f533a1d3d8c97e6682b6"} Mar 18 18:22:11.476390 master-0 kubenswrapper[30278]: I0318 18:22:11.471203 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"847dbfce-3773-4d6d-af26-16040d410d2c","Type":"ContainerStarted","Data":"fdd7413ad8a9c8686fe83053e4a75e2a11f53e3c35fcb3fadcb9167f3227981c"} Mar 18 18:22:11.476390 master-0 kubenswrapper[30278]: I0318 18:22:11.471259 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"847dbfce-3773-4d6d-af26-16040d410d2c","Type":"ContainerStarted","Data":"435f7c0cc47e7fa2ffa8e00da669c49dfc6f90b86677c9609bdbf726ee7249e6"} Mar 18 18:22:11.476390 master-0 kubenswrapper[30278]: I0318 18:22:11.471288 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"847dbfce-3773-4d6d-af26-16040d410d2c","Type":"ContainerStarted","Data":"a221165520ac0cf006f38ffc8ec32766d0a7e627797f16aa63fdba9f6a7fb8c4"} Mar 18 18:22:11.498251 master-0 kubenswrapper[30278]: I0318 18:22:11.498049 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.498027661 podStartE2EDuration="2.498027661s" podCreationTimestamp="2026-03-18 18:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:22:11.494138527 +0000 UTC m=+1300.661323142" watchObservedRunningTime="2026-03-18 18:22:11.498027661 +0000 UTC m=+1300.665212256" Mar 18 18:22:11.541315 master-0 kubenswrapper[30278]: I0318 18:22:11.539946 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.5399176199999998 podStartE2EDuration="2.53991762s" podCreationTimestamp="2026-03-18 18:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:22:11.519926111 +0000 UTC m=+1300.687110716" watchObservedRunningTime="2026-03-18 18:22:11.53991762 +0000 UTC m=+1300.707102215" Mar 18 18:22:12.534501 master-0 kubenswrapper[30278]: I0318 18:22:12.534190 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f9ada823-f818-42c2-874e-0cce432cdff3","Type":"ContainerStarted","Data":"ae7371b64b8cb59d64acc08089f1f6cccd66bb7b6fe5e710296d29111d2bd17c"} Mar 18 18:22:12.534501 master-0 kubenswrapper[30278]: 
I0318 18:22:12.534257 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f9ada823-f818-42c2-874e-0cce432cdff3","Type":"ContainerStarted","Data":"6ec2e7f7f72c03d0de0b7edd5d2f4a59e8c86810ba484fe33ef8b9e9cf03aa2a"} Mar 18 18:22:13.559835 master-0 kubenswrapper[30278]: I0318 18:22:13.559740 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f9ada823-f818-42c2-874e-0cce432cdff3","Type":"ContainerStarted","Data":"76d96fea4267eeb5881bb711211be0c1b83d6352a2f42dafecf90ccc6da71873"} Mar 18 18:22:13.607436 master-0 kubenswrapper[30278]: I0318 18:22:13.607372 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 18:22:13.607436 master-0 kubenswrapper[30278]: I0318 18:22:13.607429 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 18:22:14.580901 master-0 kubenswrapper[30278]: I0318 18:22:14.580817 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"f9ada823-f818-42c2-874e-0cce432cdff3","Type":"ContainerStarted","Data":"c7dbf0b56a85ca6326d26c044deddbf83d07b95dcce6be57818d51021af9b090"} Mar 18 18:22:14.583111 master-0 kubenswrapper[30278]: I0318 18:22:14.583054 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 18 18:22:14.583191 master-0 kubenswrapper[30278]: I0318 18:22:14.583123 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 18 18:22:14.617252 master-0 kubenswrapper[30278]: I0318 18:22:14.617158 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=8.617139594 podStartE2EDuration="8.617139594s" podCreationTimestamp="2026-03-18 18:22:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:22:14.614710289 +0000 UTC m=+1303.781894884" watchObservedRunningTime="2026-03-18 18:22:14.617139594 +0000 UTC m=+1303.784324179" Mar 18 18:22:15.360049 master-0 kubenswrapper[30278]: I0318 18:22:15.359970 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:15.612695 master-0 kubenswrapper[30278]: I0318 18:22:15.612452 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 18 18:22:15.615217 master-0 kubenswrapper[30278]: I0318 18:22:15.615154 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 18 18:22:15.625671 master-0 kubenswrapper[30278]: I0318 18:22:15.625588 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 18 18:22:16.635211 master-0 kubenswrapper[30278]: I0318 18:22:16.635111 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 18 18:22:16.680405 master-0 kubenswrapper[30278]: I0318 18:22:16.674457 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 18 18:22:17.019883 master-0 kubenswrapper[30278]: I0318 18:22:17.019817 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fb46c8999-cmd4w"] Mar 18 18:22:17.022776 master-0 kubenswrapper[30278]: I0318 18:22:17.022737 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.052201 master-0 kubenswrapper[30278]: I0318 18:22:17.052126 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fb46c8999-cmd4w"] Mar 18 18:22:17.173593 master-0 kubenswrapper[30278]: I0318 18:22:17.173507 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zppw2\" (UniqueName: \"kubernetes.io/projected/6ec94265-412a-4c3d-8339-bd5e294ede4f-kube-api-access-zppw2\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.173593 master-0 kubenswrapper[30278]: I0318 18:22:17.173587 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ec94265-412a-4c3d-8339-bd5e294ede4f-dns-swift-storage-0\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.174004 master-0 kubenswrapper[30278]: I0318 18:22:17.173626 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ec94265-412a-4c3d-8339-bd5e294ede4f-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.174004 master-0 kubenswrapper[30278]: I0318 18:22:17.173680 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ec94265-412a-4c3d-8339-bd5e294ede4f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.174004 master-0 
kubenswrapper[30278]: I0318 18:22:17.173757 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ec94265-412a-4c3d-8339-bd5e294ede4f-dns-svc\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.174004 master-0 kubenswrapper[30278]: I0318 18:22:17.173956 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ec94265-412a-4c3d-8339-bd5e294ede4f-config\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.178117 master-0 kubenswrapper[30278]: I0318 18:22:17.177997 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 18 18:22:17.178117 master-0 kubenswrapper[30278]: I0318 18:22:17.178059 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Mar 18 18:22:17.178117 master-0 kubenswrapper[30278]: I0318 18:22:17.178071 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Mar 18 18:22:17.178117 master-0 kubenswrapper[30278]: I0318 18:22:17.178092 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Mar 18 18:22:17.178371 master-0 kubenswrapper[30278]: I0318 18:22:17.178290 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Mar 18 18:22:17.178416 master-0 kubenswrapper[30278]: I0318 18:22:17.178387 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Mar 18 18:22:17.277306 master-0 kubenswrapper[30278]: I0318 18:22:17.276722 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ec94265-412a-4c3d-8339-bd5e294ede4f-dns-svc\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.277306 master-0 kubenswrapper[30278]: I0318 18:22:17.276924 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ec94265-412a-4c3d-8339-bd5e294ede4f-config\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.277306 master-0 kubenswrapper[30278]: I0318 18:22:17.276987 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zppw2\" (UniqueName: \"kubernetes.io/projected/6ec94265-412a-4c3d-8339-bd5e294ede4f-kube-api-access-zppw2\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.277306 master-0 kubenswrapper[30278]: I0318 18:22:17.277010 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ec94265-412a-4c3d-8339-bd5e294ede4f-dns-swift-storage-0\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.277306 master-0 kubenswrapper[30278]: I0318 18:22:17.277048 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ec94265-412a-4c3d-8339-bd5e294ede4f-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.277306 master-0 kubenswrapper[30278]: I0318 18:22:17.277094 30278 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ec94265-412a-4c3d-8339-bd5e294ede4f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.280900 master-0 kubenswrapper[30278]: I0318 18:22:17.278061 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ec94265-412a-4c3d-8339-bd5e294ede4f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.280900 master-0 kubenswrapper[30278]: I0318 18:22:17.280708 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ec94265-412a-4c3d-8339-bd5e294ede4f-config\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.281149 master-0 kubenswrapper[30278]: I0318 18:22:17.281113 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ec94265-412a-4c3d-8339-bd5e294ede4f-dns-svc\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.281498 master-0 kubenswrapper[30278]: I0318 18:22:17.281254 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ec94265-412a-4c3d-8339-bd5e294ede4f-dns-swift-storage-0\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.281579 master-0 kubenswrapper[30278]: I0318 18:22:17.281495 30278 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ec94265-412a-4c3d-8339-bd5e294ede4f-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.336107 master-0 kubenswrapper[30278]: I0318 18:22:17.336045 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zppw2\" (UniqueName: \"kubernetes.io/projected/6ec94265-412a-4c3d-8339-bd5e294ede4f-kube-api-access-zppw2\") pod \"dnsmasq-dns-7fb46c8999-cmd4w\" (UID: \"6ec94265-412a-4c3d-8339-bd5e294ede4f\") " pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.360466 master-0 kubenswrapper[30278]: I0318 18:22:17.359985 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:17.651434 master-0 kubenswrapper[30278]: I0318 18:22:17.641752 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 18 18:22:17.664177 master-0 kubenswrapper[30278]: I0318 18:22:17.664117 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 18 18:22:17.671796 master-0 kubenswrapper[30278]: I0318 18:22:17.671707 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Mar 18 18:22:18.143302 master-0 kubenswrapper[30278]: I0318 18:22:18.142993 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fb46c8999-cmd4w"] Mar 18 18:22:18.664265 master-0 kubenswrapper[30278]: I0318 18:22:18.663061 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" event={"ID":"6ec94265-412a-4c3d-8339-bd5e294ede4f","Type":"ContainerStarted","Data":"c52fde7c44fc733ff88e67a085078bc8fab9f97e7dcf010d5a8ede1b4c945b30"} Mar 18 18:22:18.664265 master-0 
kubenswrapper[30278]: I0318 18:22:18.663147 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" event={"ID":"6ec94265-412a-4c3d-8339-bd5e294ede4f","Type":"ContainerStarted","Data":"fcc52a22e4d97cc73b0237010b72e214c503f757275c53d328e908b449f56d94"} Mar 18 18:22:19.677205 master-0 kubenswrapper[30278]: I0318 18:22:19.677147 30278 generic.go:334] "Generic (PLEG): container finished" podID="6ec94265-412a-4c3d-8339-bd5e294ede4f" containerID="c52fde7c44fc733ff88e67a085078bc8fab9f97e7dcf010d5a8ede1b4c945b30" exitCode=0 Mar 18 18:22:19.678351 master-0 kubenswrapper[30278]: I0318 18:22:19.677349 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" event={"ID":"6ec94265-412a-4c3d-8339-bd5e294ede4f","Type":"ContainerDied","Data":"c52fde7c44fc733ff88e67a085078bc8fab9f97e7dcf010d5a8ede1b4c945b30"} Mar 18 18:22:20.015889 master-0 kubenswrapper[30278]: I0318 18:22:20.015257 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 18 18:22:20.015889 master-0 kubenswrapper[30278]: I0318 18:22:20.015902 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 18 18:22:20.361302 master-0 kubenswrapper[30278]: I0318 18:22:20.360914 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:20.402304 master-0 kubenswrapper[30278]: I0318 18:22:20.400058 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:20.694458 master-0 kubenswrapper[30278]: I0318 18:22:20.694277 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" event={"ID":"6ec94265-412a-4c3d-8339-bd5e294ede4f","Type":"ContainerStarted","Data":"5821e2443a8a8a91e87d822cdfff6dab121ebedadce33bd766b6e036f982a3b5"} Mar 18 18:22:20.694458 
master-0 kubenswrapper[30278]: I0318 18:22:20.694363 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:20.712466 master-0 kubenswrapper[30278]: I0318 18:22:20.712420 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 18 18:22:20.729230 master-0 kubenswrapper[30278]: I0318 18:22:20.729097 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" podStartSLOduration=4.729072036 podStartE2EDuration="4.729072036s" podCreationTimestamp="2026-03-18 18:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:22:20.721654697 +0000 UTC m=+1309.888839302" watchObservedRunningTime="2026-03-18 18:22:20.729072036 +0000 UTC m=+1309.896256631" Mar 18 18:22:21.032440 master-0 kubenswrapper[30278]: I0318 18:22:21.030865 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="847dbfce-3773-4d6d-af26-16040d410d2c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.14:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 18:22:21.032440 master-0 kubenswrapper[30278]: I0318 18:22:21.031223 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="847dbfce-3773-4d6d-af26-16040d410d2c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.14:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 18:22:21.095164 master-0 kubenswrapper[30278]: I0318 18:22:21.093470 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-gtlpg"] Mar 18 18:22:21.095631 master-0 kubenswrapper[30278]: I0318 18:22:21.095423 30278 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:21.111175 master-0 kubenswrapper[30278]: I0318 18:22:21.104338 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 18 18:22:21.111175 master-0 kubenswrapper[30278]: I0318 18:22:21.104796 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 18 18:22:21.132343 master-0 kubenswrapper[30278]: I0318 18:22:21.115977 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-gtlpg"] Mar 18 18:22:21.133440 master-0 kubenswrapper[30278]: I0318 18:22:21.132491 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-76s4m"] Mar 18 18:22:21.137316 master-0 kubenswrapper[30278]: I0318 18:22:21.135423 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:21.153363 master-0 kubenswrapper[30278]: I0318 18:22:21.149828 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-76s4m"] Mar 18 18:22:21.168312 master-0 kubenswrapper[30278]: I0318 18:22:21.166563 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 18 18:22:21.168312 master-0 kubenswrapper[30278]: I0318 18:22:21.166977 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="32bccbd4-7005-4a2c-b90e-f7a249adabbd" containerName="nova-api-log" containerID="cri-o://a103cc03a0b2122fed2a2d5440fe02a76067880c490543da0f185135a49eee90" gracePeriod=30 Mar 18 18:22:21.168312 master-0 kubenswrapper[30278]: I0318 18:22:21.168042 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="32bccbd4-7005-4a2c-b90e-f7a249adabbd" containerName="nova-api-api" 
containerID="cri-o://ca4e8fa092fe689f2ee12915a52387bfbb0f734418b358a24233e73f4e3f1918" gracePeriod=30 Mar 18 18:22:21.216336 master-0 kubenswrapper[30278]: I0318 18:22:21.215743 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-config-data\") pod \"nova-cell1-cell-mapping-gtlpg\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:21.216336 master-0 kubenswrapper[30278]: I0318 18:22:21.215883 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbgzm\" (UniqueName: \"kubernetes.io/projected/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-kube-api-access-lbgzm\") pod \"nova-cell1-host-discover-76s4m\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:21.216728 master-0 kubenswrapper[30278]: I0318 18:22:21.216618 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-config-data\") pod \"nova-cell1-host-discover-76s4m\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:21.216728 master-0 kubenswrapper[30278]: I0318 18:22:21.216656 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgxpz\" (UniqueName: \"kubernetes.io/projected/5e501d70-7435-4269-a155-067f1f54bee7-kube-api-access-kgxpz\") pod \"nova-cell1-cell-mapping-gtlpg\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:21.221356 master-0 kubenswrapper[30278]: I0318 18:22:21.217714 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-scripts\") pod \"nova-cell1-host-discover-76s4m\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:21.221356 master-0 kubenswrapper[30278]: I0318 18:22:21.217779 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-combined-ca-bundle\") pod \"nova-cell1-host-discover-76s4m\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:21.221356 master-0 kubenswrapper[30278]: I0318 18:22:21.217987 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-scripts\") pod \"nova-cell1-cell-mapping-gtlpg\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:21.221356 master-0 kubenswrapper[30278]: I0318 18:22:21.218183 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gtlpg\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:21.321337 master-0 kubenswrapper[30278]: I0318 18:22:21.321105 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gtlpg\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:21.321337 master-0 kubenswrapper[30278]: I0318 18:22:21.321229 30278 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-config-data\") pod \"nova-cell1-cell-mapping-gtlpg\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:21.321337 master-0 kubenswrapper[30278]: I0318 18:22:21.321294 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbgzm\" (UniqueName: \"kubernetes.io/projected/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-kube-api-access-lbgzm\") pod \"nova-cell1-host-discover-76s4m\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:21.321810 master-0 kubenswrapper[30278]: I0318 18:22:21.321442 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-config-data\") pod \"nova-cell1-host-discover-76s4m\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:21.321810 master-0 kubenswrapper[30278]: I0318 18:22:21.321470 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgxpz\" (UniqueName: \"kubernetes.io/projected/5e501d70-7435-4269-a155-067f1f54bee7-kube-api-access-kgxpz\") pod \"nova-cell1-cell-mapping-gtlpg\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:21.321810 master-0 kubenswrapper[30278]: I0318 18:22:21.321542 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-scripts\") pod \"nova-cell1-host-discover-76s4m\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:21.321810 master-0 kubenswrapper[30278]: 
I0318 18:22:21.321565 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-combined-ca-bundle\") pod \"nova-cell1-host-discover-76s4m\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:21.321810 master-0 kubenswrapper[30278]: I0318 18:22:21.321639 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-scripts\") pod \"nova-cell1-cell-mapping-gtlpg\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:21.328245 master-0 kubenswrapper[30278]: I0318 18:22:21.326727 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-scripts\") pod \"nova-cell1-cell-mapping-gtlpg\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:21.328245 master-0 kubenswrapper[30278]: I0318 18:22:21.326797 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-gtlpg\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:21.328245 master-0 kubenswrapper[30278]: I0318 18:22:21.327705 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-config-data\") pod \"nova-cell1-host-discover-76s4m\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:21.328245 master-0 kubenswrapper[30278]: I0318 18:22:21.327765 30278 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-combined-ca-bundle\") pod \"nova-cell1-host-discover-76s4m\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:21.340349 master-0 kubenswrapper[30278]: I0318 18:22:21.340249 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-config-data\") pod \"nova-cell1-cell-mapping-gtlpg\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:21.344043 master-0 kubenswrapper[30278]: I0318 18:22:21.343994 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgxpz\" (UniqueName: \"kubernetes.io/projected/5e501d70-7435-4269-a155-067f1f54bee7-kube-api-access-kgxpz\") pod \"nova-cell1-cell-mapping-gtlpg\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:21.345818 master-0 kubenswrapper[30278]: I0318 18:22:21.345748 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-scripts\") pod \"nova-cell1-host-discover-76s4m\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:21.347592 master-0 kubenswrapper[30278]: I0318 18:22:21.347169 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbgzm\" (UniqueName: \"kubernetes.io/projected/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-kube-api-access-lbgzm\") pod \"nova-cell1-host-discover-76s4m\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:21.450379 master-0 kubenswrapper[30278]: I0318 18:22:21.450220 30278 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:21.514275 master-0 kubenswrapper[30278]: I0318 18:22:21.513737 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:21.791563 master-0 kubenswrapper[30278]: I0318 18:22:21.791467 30278 generic.go:334] "Generic (PLEG): container finished" podID="32bccbd4-7005-4a2c-b90e-f7a249adabbd" containerID="a103cc03a0b2122fed2a2d5440fe02a76067880c490543da0f185135a49eee90" exitCode=143 Mar 18 18:22:21.794021 master-0 kubenswrapper[30278]: I0318 18:22:21.793940 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32bccbd4-7005-4a2c-b90e-f7a249adabbd","Type":"ContainerDied","Data":"a103cc03a0b2122fed2a2d5440fe02a76067880c490543da0f185135a49eee90"} Mar 18 18:22:22.074332 master-0 kubenswrapper[30278]: I0318 18:22:22.070631 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-gtlpg"] Mar 18 18:22:22.213797 master-0 kubenswrapper[30278]: I0318 18:22:22.204266 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-76s4m"] Mar 18 18:22:22.809718 master-0 kubenswrapper[30278]: I0318 18:22:22.809642 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-76s4m" event={"ID":"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704","Type":"ContainerStarted","Data":"ac34a34d691cb77f5a25f4d0f3df1b136c9d08328c3dcc01a8e485470c3b12fa"} Mar 18 18:22:22.809718 master-0 kubenswrapper[30278]: I0318 18:22:22.809719 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-76s4m" event={"ID":"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704","Type":"ContainerStarted","Data":"2f282182a480dc34eb9862922adcc9eb97abadfe69684215f1e570d2e6e2f6d0"} Mar 18 18:22:22.815403 master-0 kubenswrapper[30278]: I0318 18:22:22.815246 30278 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gtlpg" event={"ID":"5e501d70-7435-4269-a155-067f1f54bee7","Type":"ContainerStarted","Data":"b19e82b4715b4790ee68db378faed1b0826b5c51cb1ef1a8883ddefe11105323"} Mar 18 18:22:22.815526 master-0 kubenswrapper[30278]: I0318 18:22:22.815450 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gtlpg" event={"ID":"5e501d70-7435-4269-a155-067f1f54bee7","Type":"ContainerStarted","Data":"82595fe4bdd003c9cc687a4b7f2ef4529380a2c3bcf8e7d8b5a8e4dbf56f07bf"} Mar 18 18:22:22.922695 master-0 kubenswrapper[30278]: I0318 18:22:22.922579 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-76s4m" podStartSLOduration=2.922551157 podStartE2EDuration="2.922551157s" podCreationTimestamp="2026-03-18 18:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:22:22.912771034 +0000 UTC m=+1312.079955669" watchObservedRunningTime="2026-03-18 18:22:22.922551157 +0000 UTC m=+1312.089735752" Mar 18 18:22:22.940779 master-0 kubenswrapper[30278]: I0318 18:22:22.940701 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-gtlpg" podStartSLOduration=2.940684976 podStartE2EDuration="2.940684976s" podCreationTimestamp="2026-03-18 18:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:22:22.937356906 +0000 UTC m=+1312.104541501" watchObservedRunningTime="2026-03-18 18:22:22.940684976 +0000 UTC m=+1312.107869571" Mar 18 18:22:24.877362 master-0 kubenswrapper[30278]: I0318 18:22:24.876919 30278 generic.go:334] "Generic (PLEG): container finished" podID="32bccbd4-7005-4a2c-b90e-f7a249adabbd" 
containerID="ca4e8fa092fe689f2ee12915a52387bfbb0f734418b358a24233e73f4e3f1918" exitCode=0 Mar 18 18:22:24.877362 master-0 kubenswrapper[30278]: I0318 18:22:24.876996 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32bccbd4-7005-4a2c-b90e-f7a249adabbd","Type":"ContainerDied","Data":"ca4e8fa092fe689f2ee12915a52387bfbb0f734418b358a24233e73f4e3f1918"} Mar 18 18:22:24.877362 master-0 kubenswrapper[30278]: I0318 18:22:24.877039 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32bccbd4-7005-4a2c-b90e-f7a249adabbd","Type":"ContainerDied","Data":"ba234ef53dbfe89ae37d2356bfcf9052df90914b3989949f0fcc4caa4bdcec1c"} Mar 18 18:22:24.877362 master-0 kubenswrapper[30278]: I0318 18:22:24.877059 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba234ef53dbfe89ae37d2356bfcf9052df90914b3989949f0fcc4caa4bdcec1c" Mar 18 18:22:24.958519 master-0 kubenswrapper[30278]: I0318 18:22:24.958136 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 18 18:22:25.032422 master-0 kubenswrapper[30278]: I0318 18:22:25.031365 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bccbd4-7005-4a2c-b90e-f7a249adabbd-combined-ca-bundle\") pod \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\" (UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " Mar 18 18:22:25.032422 master-0 kubenswrapper[30278]: I0318 18:22:25.031439 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k87fk\" (UniqueName: \"kubernetes.io/projected/32bccbd4-7005-4a2c-b90e-f7a249adabbd-kube-api-access-k87fk\") pod \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\" (UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " Mar 18 18:22:25.032422 master-0 kubenswrapper[30278]: I0318 18:22:25.031554 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32bccbd4-7005-4a2c-b90e-f7a249adabbd-logs\") pod \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\" (UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " Mar 18 18:22:25.032422 master-0 kubenswrapper[30278]: I0318 18:22:25.031717 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32bccbd4-7005-4a2c-b90e-f7a249adabbd-config-data\") pod \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\" (UID: \"32bccbd4-7005-4a2c-b90e-f7a249adabbd\") " Mar 18 18:22:25.040863 master-0 kubenswrapper[30278]: I0318 18:22:25.040812 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32bccbd4-7005-4a2c-b90e-f7a249adabbd-kube-api-access-k87fk" (OuterVolumeSpecName: "kube-api-access-k87fk") pod "32bccbd4-7005-4a2c-b90e-f7a249adabbd" (UID: "32bccbd4-7005-4a2c-b90e-f7a249adabbd"). InnerVolumeSpecName "kube-api-access-k87fk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:22:25.042111 master-0 kubenswrapper[30278]: I0318 18:22:25.042060 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32bccbd4-7005-4a2c-b90e-f7a249adabbd-logs" (OuterVolumeSpecName: "logs") pod "32bccbd4-7005-4a2c-b90e-f7a249adabbd" (UID: "32bccbd4-7005-4a2c-b90e-f7a249adabbd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:22:25.086578 master-0 kubenswrapper[30278]: I0318 18:22:25.085439 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32bccbd4-7005-4a2c-b90e-f7a249adabbd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32bccbd4-7005-4a2c-b90e-f7a249adabbd" (UID: "32bccbd4-7005-4a2c-b90e-f7a249adabbd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:25.101323 master-0 kubenswrapper[30278]: I0318 18:22:25.101052 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32bccbd4-7005-4a2c-b90e-f7a249adabbd-config-data" (OuterVolumeSpecName: "config-data") pod "32bccbd4-7005-4a2c-b90e-f7a249adabbd" (UID: "32bccbd4-7005-4a2c-b90e-f7a249adabbd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:25.148264 master-0 kubenswrapper[30278]: I0318 18:22:25.148204 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32bccbd4-7005-4a2c-b90e-f7a249adabbd-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:25.148264 master-0 kubenswrapper[30278]: I0318 18:22:25.148264 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k87fk\" (UniqueName: \"kubernetes.io/projected/32bccbd4-7005-4a2c-b90e-f7a249adabbd-kube-api-access-k87fk\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:25.148574 master-0 kubenswrapper[30278]: I0318 18:22:25.148298 30278 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32bccbd4-7005-4a2c-b90e-f7a249adabbd-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:25.148574 master-0 kubenswrapper[30278]: I0318 18:22:25.148318 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32bccbd4-7005-4a2c-b90e-f7a249adabbd-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:25.893329 master-0 kubenswrapper[30278]: I0318 18:22:25.893244 30278 generic.go:334] "Generic (PLEG): container finished" podID="ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704" containerID="ac34a34d691cb77f5a25f4d0f3df1b136c9d08328c3dcc01a8e485470c3b12fa" exitCode=0 Mar 18 18:22:25.894067 master-0 kubenswrapper[30278]: I0318 18:22:25.893396 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 18 18:22:25.904610 master-0 kubenswrapper[30278]: I0318 18:22:25.904530 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-76s4m" event={"ID":"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704","Type":"ContainerDied","Data":"ac34a34d691cb77f5a25f4d0f3df1b136c9d08328c3dcc01a8e485470c3b12fa"} Mar 18 18:22:25.986303 master-0 kubenswrapper[30278]: I0318 18:22:25.986095 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 18 18:22:26.004417 master-0 kubenswrapper[30278]: I0318 18:22:26.004352 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 18 18:22:26.019512 master-0 kubenswrapper[30278]: I0318 18:22:26.019470 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 18 18:22:26.020420 master-0 kubenswrapper[30278]: E0318 18:22:26.020401 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32bccbd4-7005-4a2c-b90e-f7a249adabbd" containerName="nova-api-api" Mar 18 18:22:26.020519 master-0 kubenswrapper[30278]: I0318 18:22:26.020508 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="32bccbd4-7005-4a2c-b90e-f7a249adabbd" containerName="nova-api-api" Mar 18 18:22:26.020612 master-0 kubenswrapper[30278]: E0318 18:22:26.020601 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32bccbd4-7005-4a2c-b90e-f7a249adabbd" containerName="nova-api-log" Mar 18 18:22:26.020673 master-0 kubenswrapper[30278]: I0318 18:22:26.020663 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="32bccbd4-7005-4a2c-b90e-f7a249adabbd" containerName="nova-api-log" Mar 18 18:22:26.021027 master-0 kubenswrapper[30278]: I0318 18:22:26.021013 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="32bccbd4-7005-4a2c-b90e-f7a249adabbd" containerName="nova-api-api" Mar 18 18:22:26.021134 master-0 kubenswrapper[30278]: I0318 18:22:26.021123 30278 
memory_manager.go:354] "RemoveStaleState removing state" podUID="32bccbd4-7005-4a2c-b90e-f7a249adabbd" containerName="nova-api-log" Mar 18 18:22:26.022680 master-0 kubenswrapper[30278]: I0318 18:22:26.022661 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 18 18:22:26.027266 master-0 kubenswrapper[30278]: I0318 18:22:26.026362 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 18 18:22:26.027266 master-0 kubenswrapper[30278]: I0318 18:22:26.026602 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 18 18:22:26.027975 master-0 kubenswrapper[30278]: I0318 18:22:26.027916 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 18 18:22:26.046627 master-0 kubenswrapper[30278]: I0318 18:22:26.036824 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 18 18:22:26.185074 master-0 kubenswrapper[30278]: I0318 18:22:26.184980 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6a1323-31fd-4a1f-814e-bbf107ff64da-logs\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.185451 master-0 kubenswrapper[30278]: I0318 18:22:26.185091 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-public-tls-certs\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.185451 master-0 kubenswrapper[30278]: I0318 18:22:26.185154 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.185451 master-0 kubenswrapper[30278]: I0318 18:22:26.185278 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9276\" (UniqueName: \"kubernetes.io/projected/5e6a1323-31fd-4a1f-814e-bbf107ff64da-kube-api-access-b9276\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.185614 master-0 kubenswrapper[30278]: I0318 18:22:26.185585 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.187655 master-0 kubenswrapper[30278]: I0318 18:22:26.185912 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-config-data\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.289028 master-0 kubenswrapper[30278]: I0318 18:22:26.288843 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6a1323-31fd-4a1f-814e-bbf107ff64da-logs\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.290391 master-0 kubenswrapper[30278]: I0318 18:22:26.290357 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.291669 master-0 kubenswrapper[30278]: I0318 18:22:26.291636 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.293892 master-0 kubenswrapper[30278]: I0318 18:22:26.290209 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6a1323-31fd-4a1f-814e-bbf107ff64da-logs\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.294193 master-0 kubenswrapper[30278]: I0318 18:22:26.293785 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9276\" (UniqueName: \"kubernetes.io/projected/5e6a1323-31fd-4a1f-814e-bbf107ff64da-kube-api-access-b9276\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.298136 master-0 kubenswrapper[30278]: I0318 18:22:26.296596 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-public-tls-certs\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.299992 master-0 kubenswrapper[30278]: I0318 18:22:26.297703 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.300382 master-0 kubenswrapper[30278]: I0318 18:22:26.300350 30278 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.300910 master-0 kubenswrapper[30278]: I0318 18:22:26.300873 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-config-data\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.302453 master-0 kubenswrapper[30278]: I0318 18:22:26.302382 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.306323 master-0 kubenswrapper[30278]: I0318 18:22:26.306250 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-config-data\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.316383 master-0 kubenswrapper[30278]: I0318 18:22:26.316212 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9276\" (UniqueName: \"kubernetes.io/projected/5e6a1323-31fd-4a1f-814e-bbf107ff64da-kube-api-access-b9276\") pod \"nova-api-0\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " pod="openstack/nova-api-0" Mar 18 18:22:26.354048 master-0 kubenswrapper[30278]: I0318 18:22:26.353869 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 18 18:22:26.856656 master-0 kubenswrapper[30278]: I0318 18:22:26.856557 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 18 18:22:26.911179 master-0 kubenswrapper[30278]: I0318 18:22:26.911116 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e6a1323-31fd-4a1f-814e-bbf107ff64da","Type":"ContainerStarted","Data":"1ad46607f30562ca225f89527599b7c5005289babc0146305a89a0750bc6c805"} Mar 18 18:22:27.091343 master-0 kubenswrapper[30278]: I0318 18:22:27.091241 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32bccbd4-7005-4a2c-b90e-f7a249adabbd" path="/var/lib/kubelet/pods/32bccbd4-7005-4a2c-b90e-f7a249adabbd/volumes" Mar 18 18:22:27.355479 master-0 kubenswrapper[30278]: I0318 18:22:27.355420 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:27.363112 master-0 kubenswrapper[30278]: I0318 18:22:27.362933 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fb46c8999-cmd4w" Mar 18 18:22:27.447297 master-0 kubenswrapper[30278]: I0318 18:22:27.447205 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-combined-ca-bundle\") pod \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " Mar 18 18:22:27.447901 master-0 kubenswrapper[30278]: I0318 18:22:27.447577 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-scripts\") pod \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " Mar 18 18:22:27.447901 master-0 kubenswrapper[30278]: I0318 18:22:27.447672 30278 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-config-data\") pod \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " Mar 18 18:22:27.447901 master-0 kubenswrapper[30278]: I0318 18:22:27.447881 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbgzm\" (UniqueName: \"kubernetes.io/projected/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-kube-api-access-lbgzm\") pod \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\" (UID: \"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704\") " Mar 18 18:22:27.464004 master-0 kubenswrapper[30278]: I0318 18:22:27.463952 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-scripts" (OuterVolumeSpecName: "scripts") pod "ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704" (UID: "ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:27.467076 master-0 kubenswrapper[30278]: I0318 18:22:27.467034 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-kube-api-access-lbgzm" (OuterVolumeSpecName: "kube-api-access-lbgzm") pod "ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704" (UID: "ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704"). InnerVolumeSpecName "kube-api-access-lbgzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:22:27.540082 master-0 kubenswrapper[30278]: I0318 18:22:27.540004 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578c6dc45c-dwjps"] Mar 18 18:22:27.549764 master-0 kubenswrapper[30278]: I0318 18:22:27.549572 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" podUID="dd6f7934-153f-4a68-98f4-4d3c1a576e33" containerName="dnsmasq-dns" containerID="cri-o://9578b97c9f0d50f9a662066e01afaa7196cdc67073fc13808f7008b49f9cac2e" gracePeriod=10 Mar 18 18:22:27.558007 master-0 kubenswrapper[30278]: I0318 18:22:27.554684 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:27.558007 master-0 kubenswrapper[30278]: I0318 18:22:27.554746 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbgzm\" (UniqueName: \"kubernetes.io/projected/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-kube-api-access-lbgzm\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:27.607159 master-0 kubenswrapper[30278]: I0318 18:22:27.607091 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704" (UID: "ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:27.608052 master-0 kubenswrapper[30278]: I0318 18:22:27.608016 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-config-data" (OuterVolumeSpecName: "config-data") pod "ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704" (UID: "ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:27.661666 master-0 kubenswrapper[30278]: I0318 18:22:27.661263 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:27.661666 master-0 kubenswrapper[30278]: I0318 18:22:27.661592 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:27.970019 master-0 kubenswrapper[30278]: I0318 18:22:27.967182 30278 generic.go:334] "Generic (PLEG): container finished" podID="dd6f7934-153f-4a68-98f4-4d3c1a576e33" containerID="9578b97c9f0d50f9a662066e01afaa7196cdc67073fc13808f7008b49f9cac2e" exitCode=0 Mar 18 18:22:27.970019 master-0 kubenswrapper[30278]: I0318 18:22:27.967264 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" event={"ID":"dd6f7934-153f-4a68-98f4-4d3c1a576e33","Type":"ContainerDied","Data":"9578b97c9f0d50f9a662066e01afaa7196cdc67073fc13808f7008b49f9cac2e"} Mar 18 18:22:27.997489 master-0 kubenswrapper[30278]: I0318 18:22:27.980858 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e6a1323-31fd-4a1f-814e-bbf107ff64da","Type":"ContainerStarted","Data":"6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633"} Mar 18 18:22:27.997489 master-0 kubenswrapper[30278]: I0318 18:22:27.980935 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e6a1323-31fd-4a1f-814e-bbf107ff64da","Type":"ContainerStarted","Data":"a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a"} Mar 18 18:22:28.008306 master-0 kubenswrapper[30278]: I0318 18:22:27.999467 30278 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-cell1-host-discover-76s4m" event={"ID":"ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704","Type":"ContainerDied","Data":"2f282182a480dc34eb9862922adcc9eb97abadfe69684215f1e570d2e6e2f6d0"} Mar 18 18:22:28.008306 master-0 kubenswrapper[30278]: I0318 18:22:27.999542 30278 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f282182a480dc34eb9862922adcc9eb97abadfe69684215f1e570d2e6e2f6d0" Mar 18 18:22:28.008306 master-0 kubenswrapper[30278]: I0318 18:22:27.999621 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-76s4m" Mar 18 18:22:28.015046 master-0 kubenswrapper[30278]: I0318 18:22:28.014986 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 18:22:28.017184 master-0 kubenswrapper[30278]: I0318 18:22:28.017151 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 18:22:28.079706 master-0 kubenswrapper[30278]: I0318 18:22:28.077229 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.077200935 podStartE2EDuration="3.077200935s" podCreationTimestamp="2026-03-18 18:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:22:28.02797617 +0000 UTC m=+1317.195160775" watchObservedRunningTime="2026-03-18 18:22:28.077200935 +0000 UTC m=+1317.244385530" Mar 18 18:22:28.262240 master-0 kubenswrapper[30278]: I0318 18:22:28.262188 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:22:28.403733 master-0 kubenswrapper[30278]: I0318 18:22:28.403626 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-ovsdbserver-nb\") pod \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " Mar 18 18:22:28.403967 master-0 kubenswrapper[30278]: I0318 18:22:28.403822 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-dns-svc\") pod \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " Mar 18 18:22:28.403967 master-0 kubenswrapper[30278]: I0318 18:22:28.403894 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-ovsdbserver-sb\") pod \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " Mar 18 18:22:28.404064 master-0 kubenswrapper[30278]: I0318 18:22:28.404028 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffjbv\" (UniqueName: \"kubernetes.io/projected/dd6f7934-153f-4a68-98f4-4d3c1a576e33-kube-api-access-ffjbv\") pod \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " Mar 18 18:22:28.404236 master-0 kubenswrapper[30278]: I0318 18:22:28.404208 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-dns-swift-storage-0\") pod \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " Mar 18 18:22:28.404417 master-0 kubenswrapper[30278]: I0318 18:22:28.404390 
30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-config\") pod \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " Mar 18 18:22:28.409105 master-0 kubenswrapper[30278]: I0318 18:22:28.409036 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd6f7934-153f-4a68-98f4-4d3c1a576e33-kube-api-access-ffjbv" (OuterVolumeSpecName: "kube-api-access-ffjbv") pod "dd6f7934-153f-4a68-98f4-4d3c1a576e33" (UID: "dd6f7934-153f-4a68-98f4-4d3c1a576e33"). InnerVolumeSpecName "kube-api-access-ffjbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:22:28.466316 master-0 kubenswrapper[30278]: E0318 18:22:28.466112 30278 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e501d70_7435_4269_a155_067f1f54bee7.slice/crio-b19e82b4715b4790ee68db378faed1b0826b5c51cb1ef1a8883ddefe11105323.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e501d70_7435_4269_a155_067f1f54bee7.slice/crio-conmon-b19e82b4715b4790ee68db378faed1b0826b5c51cb1ef1a8883ddefe11105323.scope\": RecentStats: unable to find data in memory cache]" Mar 18 18:22:28.468500 master-0 kubenswrapper[30278]: I0318 18:22:28.466649 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dd6f7934-153f-4a68-98f4-4d3c1a576e33" (UID: "dd6f7934-153f-4a68-98f4-4d3c1a576e33"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:22:28.474093 master-0 kubenswrapper[30278]: I0318 18:22:28.474030 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-config" (OuterVolumeSpecName: "config") pod "dd6f7934-153f-4a68-98f4-4d3c1a576e33" (UID: "dd6f7934-153f-4a68-98f4-4d3c1a576e33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:22:28.482944 master-0 kubenswrapper[30278]: I0318 18:22:28.482868 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dd6f7934-153f-4a68-98f4-4d3c1a576e33" (UID: "dd6f7934-153f-4a68-98f4-4d3c1a576e33"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:22:28.490287 master-0 kubenswrapper[30278]: I0318 18:22:28.486958 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dd6f7934-153f-4a68-98f4-4d3c1a576e33" (UID: "dd6f7934-153f-4a68-98f4-4d3c1a576e33"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:22:28.507518 master-0 kubenswrapper[30278]: I0318 18:22:28.507443 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dd6f7934-153f-4a68-98f4-4d3c1a576e33" (UID: "dd6f7934-153f-4a68-98f4-4d3c1a576e33"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:22:28.508673 master-0 kubenswrapper[30278]: I0318 18:22:28.508612 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-dns-swift-storage-0\") pod \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\" (UID: \"dd6f7934-153f-4a68-98f4-4d3c1a576e33\") " Mar 18 18:22:28.509713 master-0 kubenswrapper[30278]: I0318 18:22:28.509514 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:28.509713 master-0 kubenswrapper[30278]: I0318 18:22:28.509567 30278 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-dns-svc\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:28.509713 master-0 kubenswrapper[30278]: I0318 18:22:28.509578 30278 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:28.509713 master-0 kubenswrapper[30278]: I0318 18:22:28.509589 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffjbv\" (UniqueName: \"kubernetes.io/projected/dd6f7934-153f-4a68-98f4-4d3c1a576e33-kube-api-access-ffjbv\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:28.509713 master-0 kubenswrapper[30278]: I0318 18:22:28.509604 30278 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:28.509713 master-0 kubenswrapper[30278]: W0318 18:22:28.509711 30278 empty_dir.go:500] Warning: Unmount skipped because path does not 
exist: /var/lib/kubelet/pods/dd6f7934-153f-4a68-98f4-4d3c1a576e33/volumes/kubernetes.io~configmap/dns-swift-storage-0 Mar 18 18:22:28.509963 master-0 kubenswrapper[30278]: I0318 18:22:28.509727 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dd6f7934-153f-4a68-98f4-4d3c1a576e33" (UID: "dd6f7934-153f-4a68-98f4-4d3c1a576e33"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:22:28.612457 master-0 kubenswrapper[30278]: I0318 18:22:28.612355 30278 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dd6f7934-153f-4a68-98f4-4d3c1a576e33-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:29.057843 master-0 kubenswrapper[30278]: I0318 18:22:29.057767 30278 generic.go:334] "Generic (PLEG): container finished" podID="5e501d70-7435-4269-a155-067f1f54bee7" containerID="b19e82b4715b4790ee68db378faed1b0826b5c51cb1ef1a8883ddefe11105323" exitCode=0 Mar 18 18:22:29.061799 master-0 kubenswrapper[30278]: I0318 18:22:29.061752 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" Mar 18 18:22:29.074846 master-0 kubenswrapper[30278]: I0318 18:22:29.074710 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gtlpg" event={"ID":"5e501d70-7435-4269-a155-067f1f54bee7","Type":"ContainerDied","Data":"b19e82b4715b4790ee68db378faed1b0826b5c51cb1ef1a8883ddefe11105323"} Mar 18 18:22:29.074846 master-0 kubenswrapper[30278]: I0318 18:22:29.074767 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578c6dc45c-dwjps" event={"ID":"dd6f7934-153f-4a68-98f4-4d3c1a576e33","Type":"ContainerDied","Data":"e18cc174c6bb27648deb63083440a16f5d1bf0e705e34ed030f4fc9b8130cd30"} Mar 18 18:22:29.074846 master-0 kubenswrapper[30278]: I0318 18:22:29.074796 30278 scope.go:117] "RemoveContainer" containerID="9578b97c9f0d50f9a662066e01afaa7196cdc67073fc13808f7008b49f9cac2e" Mar 18 18:22:29.117872 master-0 kubenswrapper[30278]: I0318 18:22:29.117551 30278 scope.go:117] "RemoveContainer" containerID="476ab37f37845aa6a59ab81e0762a1d85bcf8c008ec3f6a78a03c47cd86b9565" Mar 18 18:22:29.147609 master-0 kubenswrapper[30278]: I0318 18:22:29.147489 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578c6dc45c-dwjps"] Mar 18 18:22:29.166019 master-0 kubenswrapper[30278]: I0318 18:22:29.165896 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578c6dc45c-dwjps"] Mar 18 18:22:30.024456 master-0 kubenswrapper[30278]: I0318 18:22:30.024388 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 18 18:22:30.028909 master-0 kubenswrapper[30278]: I0318 18:22:30.028856 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 18 18:22:30.091809 master-0 kubenswrapper[30278]: I0318 18:22:30.091670 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Mar 18 18:22:30.092612 master-0 kubenswrapper[30278]: I0318 18:22:30.092444 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 18 18:22:30.629570 master-0 kubenswrapper[30278]: I0318 18:22:30.629447 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:30.725617 master-0 kubenswrapper[30278]: I0318 18:22:30.725528 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-combined-ca-bundle\") pod \"5e501d70-7435-4269-a155-067f1f54bee7\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " Mar 18 18:22:30.725904 master-0 kubenswrapper[30278]: I0318 18:22:30.725707 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-scripts\") pod \"5e501d70-7435-4269-a155-067f1f54bee7\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " Mar 18 18:22:30.725979 master-0 kubenswrapper[30278]: I0318 18:22:30.725952 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-config-data\") pod \"5e501d70-7435-4269-a155-067f1f54bee7\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " Mar 18 18:22:30.726156 master-0 kubenswrapper[30278]: I0318 18:22:30.726107 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgxpz\" (UniqueName: \"kubernetes.io/projected/5e501d70-7435-4269-a155-067f1f54bee7-kube-api-access-kgxpz\") pod \"5e501d70-7435-4269-a155-067f1f54bee7\" (UID: \"5e501d70-7435-4269-a155-067f1f54bee7\") " Mar 18 18:22:30.731518 master-0 kubenswrapper[30278]: I0318 18:22:30.731433 30278 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-scripts" (OuterVolumeSpecName: "scripts") pod "5e501d70-7435-4269-a155-067f1f54bee7" (UID: "5e501d70-7435-4269-a155-067f1f54bee7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:30.733259 master-0 kubenswrapper[30278]: I0318 18:22:30.733195 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e501d70-7435-4269-a155-067f1f54bee7-kube-api-access-kgxpz" (OuterVolumeSpecName: "kube-api-access-kgxpz") pod "5e501d70-7435-4269-a155-067f1f54bee7" (UID: "5e501d70-7435-4269-a155-067f1f54bee7"). InnerVolumeSpecName "kube-api-access-kgxpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:22:30.755288 master-0 kubenswrapper[30278]: I0318 18:22:30.755199 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e501d70-7435-4269-a155-067f1f54bee7" (UID: "5e501d70-7435-4269-a155-067f1f54bee7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:30.759733 master-0 kubenswrapper[30278]: I0318 18:22:30.759659 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-config-data" (OuterVolumeSpecName: "config-data") pod "5e501d70-7435-4269-a155-067f1f54bee7" (UID: "5e501d70-7435-4269-a155-067f1f54bee7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:30.838311 master-0 kubenswrapper[30278]: I0318 18:22:30.831131 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:30.838311 master-0 kubenswrapper[30278]: I0318 18:22:30.831213 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgxpz\" (UniqueName: \"kubernetes.io/projected/5e501d70-7435-4269-a155-067f1f54bee7-kube-api-access-kgxpz\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:30.838311 master-0 kubenswrapper[30278]: I0318 18:22:30.831241 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:30.838311 master-0 kubenswrapper[30278]: I0318 18:22:30.831259 30278 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e501d70-7435-4269-a155-067f1f54bee7-scripts\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:31.089536 master-0 kubenswrapper[30278]: I0318 18:22:31.089435 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd6f7934-153f-4a68-98f4-4d3c1a576e33" path="/var/lib/kubelet/pods/dd6f7934-153f-4a68-98f4-4d3c1a576e33/volumes" Mar 18 18:22:31.101754 master-0 kubenswrapper[30278]: I0318 18:22:31.100602 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-gtlpg" event={"ID":"5e501d70-7435-4269-a155-067f1f54bee7","Type":"ContainerDied","Data":"82595fe4bdd003c9cc687a4b7f2ef4529380a2c3bcf8e7d8b5a8e4dbf56f07bf"} Mar 18 18:22:31.101754 master-0 kubenswrapper[30278]: I0318 18:22:31.100680 30278 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="82595fe4bdd003c9cc687a4b7f2ef4529380a2c3bcf8e7d8b5a8e4dbf56f07bf" Mar 18 18:22:31.101754 master-0 kubenswrapper[30278]: I0318 18:22:31.100627 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-gtlpg" Mar 18 18:22:31.273409 master-0 kubenswrapper[30278]: I0318 18:22:31.273339 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 18 18:22:31.273721 master-0 kubenswrapper[30278]: I0318 18:22:31.273616 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5e6a1323-31fd-4a1f-814e-bbf107ff64da" containerName="nova-api-log" containerID="cri-o://a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a" gracePeriod=30 Mar 18 18:22:31.274227 master-0 kubenswrapper[30278]: I0318 18:22:31.273896 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5e6a1323-31fd-4a1f-814e-bbf107ff64da" containerName="nova-api-api" containerID="cri-o://6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633" gracePeriod=30 Mar 18 18:22:31.288331 master-0 kubenswrapper[30278]: I0318 18:22:31.288218 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 18:22:31.288640 master-0 kubenswrapper[30278]: I0318 18:22:31.288524 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d0e26ed5-e3a5-4852-b288-8185e1095c29" containerName="nova-scheduler-scheduler" containerID="cri-o://1b064717325cf15ca21db7427805ef704e2163a221c0eb3389c359423cd08ae9" gracePeriod=30 Mar 18 18:22:31.333164 master-0 kubenswrapper[30278]: I0318 18:22:31.333060 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 18:22:31.572729 master-0 kubenswrapper[30278]: E0318 18:22:31.572311 30278 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1b064717325cf15ca21db7427805ef704e2163a221c0eb3389c359423cd08ae9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 18 18:22:31.579377 master-0 kubenswrapper[30278]: E0318 18:22:31.577212 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1b064717325cf15ca21db7427805ef704e2163a221c0eb3389c359423cd08ae9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 18 18:22:31.587701 master-0 kubenswrapper[30278]: E0318 18:22:31.587645 30278 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1b064717325cf15ca21db7427805ef704e2163a221c0eb3389c359423cd08ae9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 18 18:22:31.587805 master-0 kubenswrapper[30278]: E0318 18:22:31.587701 30278 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d0e26ed5-e3a5-4852-b288-8185e1095c29" containerName="nova-scheduler-scheduler" Mar 18 18:22:31.995291 master-0 kubenswrapper[30278]: I0318 18:22:31.995212 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 18 18:22:32.108116 master-0 kubenswrapper[30278]: I0318 18:22:32.107642 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-internal-tls-certs\") pod \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " Mar 18 18:22:32.126349 master-0 kubenswrapper[30278]: I0318 18:22:32.125822 30278 generic.go:334] "Generic (PLEG): container finished" podID="5e6a1323-31fd-4a1f-814e-bbf107ff64da" containerID="6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633" exitCode=0 Mar 18 18:22:32.126349 master-0 kubenswrapper[30278]: I0318 18:22:32.125887 30278 generic.go:334] "Generic (PLEG): container finished" podID="5e6a1323-31fd-4a1f-814e-bbf107ff64da" containerID="a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a" exitCode=143 Mar 18 18:22:32.126349 master-0 kubenswrapper[30278]: I0318 18:22:32.125967 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 18 18:22:32.126349 master-0 kubenswrapper[30278]: I0318 18:22:32.126019 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e6a1323-31fd-4a1f-814e-bbf107ff64da","Type":"ContainerDied","Data":"6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633"} Mar 18 18:22:32.126349 master-0 kubenswrapper[30278]: I0318 18:22:32.126112 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e6a1323-31fd-4a1f-814e-bbf107ff64da","Type":"ContainerDied","Data":"a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a"} Mar 18 18:22:32.126349 master-0 kubenswrapper[30278]: I0318 18:22:32.126147 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5e6a1323-31fd-4a1f-814e-bbf107ff64da","Type":"ContainerDied","Data":"1ad46607f30562ca225f89527599b7c5005289babc0146305a89a0750bc6c805"} Mar 18 18:22:32.126349 master-0 kubenswrapper[30278]: I0318 18:22:32.126172 30278 scope.go:117] "RemoveContainer" containerID="6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633" Mar 18 18:22:32.188786 master-0 kubenswrapper[30278]: I0318 18:22:32.188723 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5e6a1323-31fd-4a1f-814e-bbf107ff64da" (UID: "5e6a1323-31fd-4a1f-814e-bbf107ff64da"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:32.211758 master-0 kubenswrapper[30278]: I0318 18:22:32.211674 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-public-tls-certs\") pod \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " Mar 18 18:22:32.211758 master-0 kubenswrapper[30278]: I0318 18:22:32.211780 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9276\" (UniqueName: \"kubernetes.io/projected/5e6a1323-31fd-4a1f-814e-bbf107ff64da-kube-api-access-b9276\") pod \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " Mar 18 18:22:32.212191 master-0 kubenswrapper[30278]: I0318 18:22:32.211824 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6a1323-31fd-4a1f-814e-bbf107ff64da-logs\") pod \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " Mar 18 18:22:32.212191 master-0 kubenswrapper[30278]: I0318 18:22:32.211855 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-config-data\") pod \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " Mar 18 18:22:32.212191 master-0 kubenswrapper[30278]: I0318 18:22:32.212110 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-combined-ca-bundle\") pod \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\" (UID: \"5e6a1323-31fd-4a1f-814e-bbf107ff64da\") " Mar 18 18:22:32.212816 master-0 kubenswrapper[30278]: I0318 18:22:32.212742 30278 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e6a1323-31fd-4a1f-814e-bbf107ff64da-logs" (OuterVolumeSpecName: "logs") pod "5e6a1323-31fd-4a1f-814e-bbf107ff64da" (UID: "5e6a1323-31fd-4a1f-814e-bbf107ff64da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 18:22:32.213224 master-0 kubenswrapper[30278]: I0318 18:22:32.213182 30278 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5e6a1323-31fd-4a1f-814e-bbf107ff64da-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:32.213224 master-0 kubenswrapper[30278]: I0318 18:22:32.213216 30278 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:32.216631 master-0 kubenswrapper[30278]: I0318 18:22:32.216572 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6a1323-31fd-4a1f-814e-bbf107ff64da-kube-api-access-b9276" (OuterVolumeSpecName: "kube-api-access-b9276") pod "5e6a1323-31fd-4a1f-814e-bbf107ff64da" (UID: "5e6a1323-31fd-4a1f-814e-bbf107ff64da"). InnerVolumeSpecName "kube-api-access-b9276". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:22:32.226640 master-0 kubenswrapper[30278]: I0318 18:22:32.226592 30278 scope.go:117] "RemoveContainer" containerID="a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a" Mar 18 18:22:32.259550 master-0 kubenswrapper[30278]: I0318 18:22:32.259476 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-config-data" (OuterVolumeSpecName: "config-data") pod "5e6a1323-31fd-4a1f-814e-bbf107ff64da" (UID: "5e6a1323-31fd-4a1f-814e-bbf107ff64da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:32.266759 master-0 kubenswrapper[30278]: I0318 18:22:32.266685 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e6a1323-31fd-4a1f-814e-bbf107ff64da" (UID: "5e6a1323-31fd-4a1f-814e-bbf107ff64da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:32.268765 master-0 kubenswrapper[30278]: I0318 18:22:32.268722 30278 scope.go:117] "RemoveContainer" containerID="6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633" Mar 18 18:22:32.269990 master-0 kubenswrapper[30278]: I0318 18:22:32.269939 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5e6a1323-31fd-4a1f-814e-bbf107ff64da" (UID: "5e6a1323-31fd-4a1f-814e-bbf107ff64da"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:22:32.270143 master-0 kubenswrapper[30278]: E0318 18:22:32.270099 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633\": container with ID starting with 6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633 not found: ID does not exist" containerID="6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633" Mar 18 18:22:32.270249 master-0 kubenswrapper[30278]: I0318 18:22:32.270219 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633"} err="failed to get container status \"6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633\": rpc error: code = NotFound desc = could not find container \"6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633\": container with ID starting with 6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633 not found: ID does not exist" Mar 18 18:22:32.270358 master-0 kubenswrapper[30278]: I0318 18:22:32.270344 30278 scope.go:117] "RemoveContainer" containerID="a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a" Mar 18 18:22:32.271205 master-0 kubenswrapper[30278]: E0318 18:22:32.271160 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a\": container with ID starting with a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a not found: ID does not exist" containerID="a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a" Mar 18 18:22:32.271260 master-0 kubenswrapper[30278]: I0318 18:22:32.271212 30278 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a"} err="failed to get container status \"a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a\": rpc error: code = NotFound desc = could not find container \"a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a\": container with ID starting with a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a not found: ID does not exist" Mar 18 18:22:32.271260 master-0 kubenswrapper[30278]: I0318 18:22:32.271247 30278 scope.go:117] "RemoveContainer" containerID="6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633" Mar 18 18:22:32.275441 master-0 kubenswrapper[30278]: I0318 18:22:32.271661 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633"} err="failed to get container status \"6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633\": rpc error: code = NotFound desc = could not find container \"6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633\": container with ID starting with 6e5920b121443ed202daba04fdbd20f3d2abd5d1e15587a39fc659235ac1e633 not found: ID does not exist" Mar 18 18:22:32.275441 master-0 kubenswrapper[30278]: I0318 18:22:32.271684 30278 scope.go:117] "RemoveContainer" containerID="a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a" Mar 18 18:22:32.275441 master-0 kubenswrapper[30278]: I0318 18:22:32.275155 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a"} err="failed to get container status \"a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a\": rpc error: code = NotFound desc = could not find container \"a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a\": container with ID starting with 
a510a80909c5133291a3d94a32def636ba97d7c8983a05a2922e70a78784037a not found: ID does not exist" Mar 18 18:22:32.316308 master-0 kubenswrapper[30278]: I0318 18:22:32.316230 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:32.316308 master-0 kubenswrapper[30278]: I0318 18:22:32.316314 30278 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:32.316652 master-0 kubenswrapper[30278]: I0318 18:22:32.316337 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9276\" (UniqueName: \"kubernetes.io/projected/5e6a1323-31fd-4a1f-814e-bbf107ff64da-kube-api-access-b9276\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:32.316652 master-0 kubenswrapper[30278]: I0318 18:22:32.316351 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6a1323-31fd-4a1f-814e-bbf107ff64da-config-data\") on node \"master-0\" DevicePath \"\"" Mar 18 18:22:32.546379 master-0 kubenswrapper[30278]: I0318 18:22:32.546249 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 18 18:22:32.571773 master-0 kubenswrapper[30278]: I0318 18:22:32.571671 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 18 18:22:32.589328 master-0 kubenswrapper[30278]: I0318 18:22:32.589010 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 18 18:22:32.590194 master-0 kubenswrapper[30278]: E0318 18:22:32.589963 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6f7934-153f-4a68-98f4-4d3c1a576e33" containerName="init" Mar 18 18:22:32.590194 master-0 kubenswrapper[30278]: I0318 
18:22:32.590010 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6f7934-153f-4a68-98f4-4d3c1a576e33" containerName="init" Mar 18 18:22:32.590194 master-0 kubenswrapper[30278]: E0318 18:22:32.590061 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6f7934-153f-4a68-98f4-4d3c1a576e33" containerName="dnsmasq-dns" Mar 18 18:22:32.590194 master-0 kubenswrapper[30278]: I0318 18:22:32.590076 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6f7934-153f-4a68-98f4-4d3c1a576e33" containerName="dnsmasq-dns" Mar 18 18:22:32.590194 master-0 kubenswrapper[30278]: E0318 18:22:32.590109 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6a1323-31fd-4a1f-814e-bbf107ff64da" containerName="nova-api-log" Mar 18 18:22:32.590194 master-0 kubenswrapper[30278]: I0318 18:22:32.590120 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6a1323-31fd-4a1f-814e-bbf107ff64da" containerName="nova-api-log" Mar 18 18:22:32.590194 master-0 kubenswrapper[30278]: E0318 18:22:32.590143 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704" containerName="nova-manage" Mar 18 18:22:32.590194 master-0 kubenswrapper[30278]: I0318 18:22:32.590151 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704" containerName="nova-manage" Mar 18 18:22:32.590194 master-0 kubenswrapper[30278]: E0318 18:22:32.590176 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6a1323-31fd-4a1f-814e-bbf107ff64da" containerName="nova-api-api" Mar 18 18:22:32.590194 master-0 kubenswrapper[30278]: I0318 18:22:32.590185 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6a1323-31fd-4a1f-814e-bbf107ff64da" containerName="nova-api-api" Mar 18 18:22:32.590194 master-0 kubenswrapper[30278]: E0318 18:22:32.590206 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e501d70-7435-4269-a155-067f1f54bee7" 
containerName="nova-manage"
Mar 18 18:22:32.590194 master-0 kubenswrapper[30278]: I0318 18:22:32.590216 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e501d70-7435-4269-a155-067f1f54bee7" containerName="nova-manage"
Mar 18 18:22:32.590758 master-0 kubenswrapper[30278]: I0318 18:22:32.590616 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e6a1323-31fd-4a1f-814e-bbf107ff64da" containerName="nova-api-api"
Mar 18 18:22:32.590758 master-0 kubenswrapper[30278]: I0318 18:22:32.590663 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e6a1323-31fd-4a1f-814e-bbf107ff64da" containerName="nova-api-log"
Mar 18 18:22:32.590758 master-0 kubenswrapper[30278]: I0318 18:22:32.590676 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e501d70-7435-4269-a155-067f1f54bee7" containerName="nova-manage"
Mar 18 18:22:32.590758 master-0 kubenswrapper[30278]: I0318 18:22:32.590703 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704" containerName="nova-manage"
Mar 18 18:22:32.590758 master-0 kubenswrapper[30278]: I0318 18:22:32.590737 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd6f7934-153f-4a68-98f4-4d3c1a576e33" containerName="dnsmasq-dns"
Mar 18 18:22:32.592540 master-0 kubenswrapper[30278]: I0318 18:22:32.592508 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 18 18:22:32.596080 master-0 kubenswrapper[30278]: I0318 18:22:32.595980 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Mar 18 18:22:32.597536 master-0 kubenswrapper[30278]: I0318 18:22:32.597020 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Mar 18 18:22:32.597536 master-0 kubenswrapper[30278]: I0318 18:22:32.597385 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 18 18:22:32.611072 master-0 kubenswrapper[30278]: I0318 18:22:32.610425 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 18 18:22:32.633647 master-0 kubenswrapper[30278]: I0318 18:22:32.633059 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-config-data\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.633647 master-0 kubenswrapper[30278]: I0318 18:22:32.633117 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.633647 master-0 kubenswrapper[30278]: I0318 18:22:32.633234 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr8bd\" (UniqueName: \"kubernetes.io/projected/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-kube-api-access-tr8bd\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.633647 master-0 kubenswrapper[30278]: I0318 18:22:32.633264 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-logs\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.633647 master-0 kubenswrapper[30278]: I0318 18:22:32.633304 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-public-tls-certs\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.633647 master-0 kubenswrapper[30278]: I0318 18:22:32.633337 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.735194 master-0 kubenswrapper[30278]: I0318 18:22:32.735026 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr8bd\" (UniqueName: \"kubernetes.io/projected/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-kube-api-access-tr8bd\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.735503 master-0 kubenswrapper[30278]: I0318 18:22:32.735224 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-logs\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.735503 master-0 kubenswrapper[30278]: I0318 18:22:32.735332 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-public-tls-certs\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.735503 master-0 kubenswrapper[30278]: I0318 18:22:32.735385 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.735632 master-0 kubenswrapper[30278]: I0318 18:22:32.735560 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-config-data\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.735632 master-0 kubenswrapper[30278]: I0318 18:22:32.735615 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.735878 master-0 kubenswrapper[30278]: I0318 18:22:32.735846 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-logs\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.740053 master-0 kubenswrapper[30278]: I0318 18:22:32.739999 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-public-tls-certs\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.740848 master-0 kubenswrapper[30278]: I0318 18:22:32.740807 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-config-data\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.741394 master-0 kubenswrapper[30278]: I0318 18:22:32.741361 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.751772 master-0 kubenswrapper[30278]: I0318 18:22:32.751734 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr8bd\" (UniqueName: \"kubernetes.io/projected/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-kube-api-access-tr8bd\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.754613 master-0 kubenswrapper[30278]: I0318 18:22:32.754582 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc\") " pod="openstack/nova-api-0"
Mar 18 18:22:32.948388 master-0 kubenswrapper[30278]: I0318 18:22:32.948322 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 18 18:22:33.098326 master-0 kubenswrapper[30278]: I0318 18:22:33.098231 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e6a1323-31fd-4a1f-814e-bbf107ff64da" path="/var/lib/kubelet/pods/5e6a1323-31fd-4a1f-814e-bbf107ff64da/volumes"
Mar 18 18:22:33.155078 master-0 kubenswrapper[30278]: I0318 18:22:33.154729 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="847dbfce-3773-4d6d-af26-16040d410d2c" containerName="nova-metadata-log" containerID="cri-o://435f7c0cc47e7fa2ffa8e00da669c49dfc6f90b86677c9609bdbf726ee7249e6" gracePeriod=30
Mar 18 18:22:33.155078 master-0 kubenswrapper[30278]: I0318 18:22:33.154824 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="847dbfce-3773-4d6d-af26-16040d410d2c" containerName="nova-metadata-metadata" containerID="cri-o://fdd7413ad8a9c8686fe83053e4a75e2a11f53e3c35fcb3fadcb9167f3227981c" gracePeriod=30
Mar 18 18:22:33.478446 master-0 kubenswrapper[30278]: I0318 18:22:33.478346 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 18 18:22:34.178246 master-0 kubenswrapper[30278]: I0318 18:22:34.178171 30278 generic.go:334] "Generic (PLEG): container finished" podID="847dbfce-3773-4d6d-af26-16040d410d2c" containerID="435f7c0cc47e7fa2ffa8e00da669c49dfc6f90b86677c9609bdbf726ee7249e6" exitCode=143
Mar 18 18:22:34.179402 master-0 kubenswrapper[30278]: I0318 18:22:34.178301 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"847dbfce-3773-4d6d-af26-16040d410d2c","Type":"ContainerDied","Data":"435f7c0cc47e7fa2ffa8e00da669c49dfc6f90b86677c9609bdbf726ee7249e6"}
Mar 18 18:22:34.198579 master-0 kubenswrapper[30278]: I0318 18:22:34.198221 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc","Type":"ContainerStarted","Data":"1048d63a50e1605920acc55eddf5439c1c089f8e0e3aff6b03acf459a9c473cc"}
Mar 18 18:22:34.198579 master-0 kubenswrapper[30278]: I0318 18:22:34.198341 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc","Type":"ContainerStarted","Data":"32b39131245c24c3e8c6dee3ab0bc0c2d26178c36ebd44c1f74715307d6e0843"}
Mar 18 18:22:34.198579 master-0 kubenswrapper[30278]: I0318 18:22:34.198354 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc","Type":"ContainerStarted","Data":"80cb193fa20ca81df2d3054ff21fec83bf1075c1e0db654cabb0f9b5fbde263c"}
Mar 18 18:22:36.231586 master-0 kubenswrapper[30278]: I0318 18:22:36.231393 30278 generic.go:334] "Generic (PLEG): container finished" podID="d0e26ed5-e3a5-4852-b288-8185e1095c29" containerID="1b064717325cf15ca21db7427805ef704e2163a221c0eb3389c359423cd08ae9" exitCode=0
Mar 18 18:22:36.231586 master-0 kubenswrapper[30278]: I0318 18:22:36.231492 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d0e26ed5-e3a5-4852-b288-8185e1095c29","Type":"ContainerDied","Data":"1b064717325cf15ca21db7427805ef704e2163a221c0eb3389c359423cd08ae9"}
Mar 18 18:22:36.482089 master-0 kubenswrapper[30278]: I0318 18:22:36.481908 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 18 18:22:36.529568 master-0 kubenswrapper[30278]: I0318 18:22:36.515890 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.515862298 podStartE2EDuration="4.515862298s" podCreationTimestamp="2026-03-18 18:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:22:34.244545151 +0000 UTC m=+1323.411729746" watchObservedRunningTime="2026-03-18 18:22:36.515862298 +0000 UTC m=+1325.683046893"
Mar 18 18:22:36.577566 master-0 kubenswrapper[30278]: I0318 18:22:36.577487 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e26ed5-e3a5-4852-b288-8185e1095c29-config-data\") pod \"d0e26ed5-e3a5-4852-b288-8185e1095c29\" (UID: \"d0e26ed5-e3a5-4852-b288-8185e1095c29\") "
Mar 18 18:22:36.580887 master-0 kubenswrapper[30278]: I0318 18:22:36.580841 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8pwr\" (UniqueName: \"kubernetes.io/projected/d0e26ed5-e3a5-4852-b288-8185e1095c29-kube-api-access-h8pwr\") pod \"d0e26ed5-e3a5-4852-b288-8185e1095c29\" (UID: \"d0e26ed5-e3a5-4852-b288-8185e1095c29\") "
Mar 18 18:22:36.581343 master-0 kubenswrapper[30278]: I0318 18:22:36.581054 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e26ed5-e3a5-4852-b288-8185e1095c29-combined-ca-bundle\") pod \"d0e26ed5-e3a5-4852-b288-8185e1095c29\" (UID: \"d0e26ed5-e3a5-4852-b288-8185e1095c29\") "
Mar 18 18:22:36.589131 master-0 kubenswrapper[30278]: I0318 18:22:36.589062 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e26ed5-e3a5-4852-b288-8185e1095c29-kube-api-access-h8pwr" (OuterVolumeSpecName: "kube-api-access-h8pwr") pod "d0e26ed5-e3a5-4852-b288-8185e1095c29" (UID: "d0e26ed5-e3a5-4852-b288-8185e1095c29"). InnerVolumeSpecName "kube-api-access-h8pwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:22:36.622326 master-0 kubenswrapper[30278]: I0318 18:22:36.618388 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e26ed5-e3a5-4852-b288-8185e1095c29-config-data" (OuterVolumeSpecName: "config-data") pod "d0e26ed5-e3a5-4852-b288-8185e1095c29" (UID: "d0e26ed5-e3a5-4852-b288-8185e1095c29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:22:36.622326 master-0 kubenswrapper[30278]: I0318 18:22:36.619123 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e26ed5-e3a5-4852-b288-8185e1095c29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0e26ed5-e3a5-4852-b288-8185e1095c29" (UID: "d0e26ed5-e3a5-4852-b288-8185e1095c29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:22:36.688511 master-0 kubenswrapper[30278]: I0318 18:22:36.688399 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8pwr\" (UniqueName: \"kubernetes.io/projected/d0e26ed5-e3a5-4852-b288-8185e1095c29-kube-api-access-h8pwr\") on node \"master-0\" DevicePath \"\""
Mar 18 18:22:36.688511 master-0 kubenswrapper[30278]: I0318 18:22:36.688509 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0e26ed5-e3a5-4852-b288-8185e1095c29-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:22:36.688884 master-0 kubenswrapper[30278]: I0318 18:22:36.688562 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0e26ed5-e3a5-4852-b288-8185e1095c29-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 18:22:36.849535 master-0 kubenswrapper[30278]: I0318 18:22:36.849490 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 18 18:22:36.899679 master-0 kubenswrapper[30278]: I0318 18:22:36.895506 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/847dbfce-3773-4d6d-af26-16040d410d2c-logs\") pod \"847dbfce-3773-4d6d-af26-16040d410d2c\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") "
Mar 18 18:22:36.899679 master-0 kubenswrapper[30278]: I0318 18:22:36.895832 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-combined-ca-bundle\") pod \"847dbfce-3773-4d6d-af26-16040d410d2c\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") "
Mar 18 18:22:36.899679 master-0 kubenswrapper[30278]: I0318 18:22:36.895888 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-config-data\") pod \"847dbfce-3773-4d6d-af26-16040d410d2c\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") "
Mar 18 18:22:36.899679 master-0 kubenswrapper[30278]: I0318 18:22:36.896104 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-nova-metadata-tls-certs\") pod \"847dbfce-3773-4d6d-af26-16040d410d2c\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") "
Mar 18 18:22:36.899679 master-0 kubenswrapper[30278]: I0318 18:22:36.896219 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2htp\" (UniqueName: \"kubernetes.io/projected/847dbfce-3773-4d6d-af26-16040d410d2c-kube-api-access-d2htp\") pod \"847dbfce-3773-4d6d-af26-16040d410d2c\" (UID: \"847dbfce-3773-4d6d-af26-16040d410d2c\") "
Mar 18 18:22:36.916663 master-0 kubenswrapper[30278]: I0318 18:22:36.905262 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/847dbfce-3773-4d6d-af26-16040d410d2c-logs" (OuterVolumeSpecName: "logs") pod "847dbfce-3773-4d6d-af26-16040d410d2c" (UID: "847dbfce-3773-4d6d-af26-16040d410d2c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 18 18:22:36.916663 master-0 kubenswrapper[30278]: I0318 18:22:36.915742 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/847dbfce-3773-4d6d-af26-16040d410d2c-kube-api-access-d2htp" (OuterVolumeSpecName: "kube-api-access-d2htp") pod "847dbfce-3773-4d6d-af26-16040d410d2c" (UID: "847dbfce-3773-4d6d-af26-16040d410d2c"). InnerVolumeSpecName "kube-api-access-d2htp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 18:22:36.952212 master-0 kubenswrapper[30278]: I0318 18:22:36.952143 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "847dbfce-3773-4d6d-af26-16040d410d2c" (UID: "847dbfce-3773-4d6d-af26-16040d410d2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:22:36.978101 master-0 kubenswrapper[30278]: I0318 18:22:36.971544 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-config-data" (OuterVolumeSpecName: "config-data") pod "847dbfce-3773-4d6d-af26-16040d410d2c" (UID: "847dbfce-3773-4d6d-af26-16040d410d2c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:22:36.982825 master-0 kubenswrapper[30278]: I0318 18:22:36.982757 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "847dbfce-3773-4d6d-af26-16040d410d2c" (UID: "847dbfce-3773-4d6d-af26-16040d410d2c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 18:22:37.000889 master-0 kubenswrapper[30278]: I0318 18:22:37.000792 30278 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-config-data\") on node \"master-0\" DevicePath \"\""
Mar 18 18:22:37.000889 master-0 kubenswrapper[30278]: I0318 18:22:37.000862 30278 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 18:22:37.000889 master-0 kubenswrapper[30278]: I0318 18:22:37.000877 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2htp\" (UniqueName: \"kubernetes.io/projected/847dbfce-3773-4d6d-af26-16040d410d2c-kube-api-access-d2htp\") on node \"master-0\" DevicePath \"\""
Mar 18 18:22:37.000889 master-0 kubenswrapper[30278]: I0318 18:22:37.000890 30278 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/847dbfce-3773-4d6d-af26-16040d410d2c-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 18:22:37.000889 master-0 kubenswrapper[30278]: I0318 18:22:37.000900 30278 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/847dbfce-3773-4d6d-af26-16040d410d2c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 18:22:37.255141 master-0 kubenswrapper[30278]: I0318 18:22:37.254707 30278 generic.go:334] "Generic (PLEG): container finished" podID="847dbfce-3773-4d6d-af26-16040d410d2c" containerID="fdd7413ad8a9c8686fe83053e4a75e2a11f53e3c35fcb3fadcb9167f3227981c" exitCode=0
Mar 18 18:22:37.255141 master-0 kubenswrapper[30278]: I0318 18:22:37.254855 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"847dbfce-3773-4d6d-af26-16040d410d2c","Type":"ContainerDied","Data":"fdd7413ad8a9c8686fe83053e4a75e2a11f53e3c35fcb3fadcb9167f3227981c"}
Mar 18 18:22:37.255141 master-0 kubenswrapper[30278]: I0318 18:22:37.254910 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"847dbfce-3773-4d6d-af26-16040d410d2c","Type":"ContainerDied","Data":"a221165520ac0cf006f38ffc8ec32766d0a7e627797f16aa63fdba9f6a7fb8c4"}
Mar 18 18:22:37.255141 master-0 kubenswrapper[30278]: I0318 18:22:37.254940 30278 scope.go:117] "RemoveContainer" containerID="fdd7413ad8a9c8686fe83053e4a75e2a11f53e3c35fcb3fadcb9167f3227981c"
Mar 18 18:22:37.256153 master-0 kubenswrapper[30278]: I0318 18:22:37.255533 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 18 18:22:37.261705 master-0 kubenswrapper[30278]: I0318 18:22:37.261662 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d0e26ed5-e3a5-4852-b288-8185e1095c29","Type":"ContainerDied","Data":"831230a9f172b52df7d6aa7cca3d1ae806c8d6ba370ff5dece415ac2751831b9"}
Mar 18 18:22:37.261816 master-0 kubenswrapper[30278]: I0318 18:22:37.261774 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 18 18:22:37.322743 master-0 kubenswrapper[30278]: I0318 18:22:37.316806 30278 scope.go:117] "RemoveContainer" containerID="435f7c0cc47e7fa2ffa8e00da669c49dfc6f90b86677c9609bdbf726ee7249e6"
Mar 18 18:22:37.333369 master-0 kubenswrapper[30278]: I0318 18:22:37.329629 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 18:22:37.388611 master-0 kubenswrapper[30278]: I0318 18:22:37.354245 30278 scope.go:117] "RemoveContainer" containerID="fdd7413ad8a9c8686fe83053e4a75e2a11f53e3c35fcb3fadcb9167f3227981c"
Mar 18 18:22:37.388611 master-0 kubenswrapper[30278]: E0318 18:22:37.354869 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdd7413ad8a9c8686fe83053e4a75e2a11f53e3c35fcb3fadcb9167f3227981c\": container with ID starting with fdd7413ad8a9c8686fe83053e4a75e2a11f53e3c35fcb3fadcb9167f3227981c not found: ID does not exist" containerID="fdd7413ad8a9c8686fe83053e4a75e2a11f53e3c35fcb3fadcb9167f3227981c"
Mar 18 18:22:37.388611 master-0 kubenswrapper[30278]: I0318 18:22:37.354903 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdd7413ad8a9c8686fe83053e4a75e2a11f53e3c35fcb3fadcb9167f3227981c"} err="failed to get container status \"fdd7413ad8a9c8686fe83053e4a75e2a11f53e3c35fcb3fadcb9167f3227981c\": rpc error: code = NotFound desc = could not find container \"fdd7413ad8a9c8686fe83053e4a75e2a11f53e3c35fcb3fadcb9167f3227981c\": container with ID starting with fdd7413ad8a9c8686fe83053e4a75e2a11f53e3c35fcb3fadcb9167f3227981c not found: ID does not exist"
Mar 18 18:22:37.388611 master-0 kubenswrapper[30278]: I0318 18:22:37.354927 30278 scope.go:117] "RemoveContainer" containerID="435f7c0cc47e7fa2ffa8e00da669c49dfc6f90b86677c9609bdbf726ee7249e6"
Mar 18 18:22:37.388611 master-0 kubenswrapper[30278]: E0318 18:22:37.355559 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"435f7c0cc47e7fa2ffa8e00da669c49dfc6f90b86677c9609bdbf726ee7249e6\": container with ID starting with 435f7c0cc47e7fa2ffa8e00da669c49dfc6f90b86677c9609bdbf726ee7249e6 not found: ID does not exist" containerID="435f7c0cc47e7fa2ffa8e00da669c49dfc6f90b86677c9609bdbf726ee7249e6"
Mar 18 18:22:37.388611 master-0 kubenswrapper[30278]: I0318 18:22:37.355576 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"435f7c0cc47e7fa2ffa8e00da669c49dfc6f90b86677c9609bdbf726ee7249e6"} err="failed to get container status \"435f7c0cc47e7fa2ffa8e00da669c49dfc6f90b86677c9609bdbf726ee7249e6\": rpc error: code = NotFound desc = could not find container \"435f7c0cc47e7fa2ffa8e00da669c49dfc6f90b86677c9609bdbf726ee7249e6\": container with ID starting with 435f7c0cc47e7fa2ffa8e00da669c49dfc6f90b86677c9609bdbf726ee7249e6 not found: ID does not exist"
Mar 18 18:22:37.388611 master-0 kubenswrapper[30278]: I0318 18:22:37.355588 30278 scope.go:117] "RemoveContainer" containerID="1b064717325cf15ca21db7427805ef704e2163a221c0eb3389c359423cd08ae9"
Mar 18 18:22:37.388611 master-0 kubenswrapper[30278]: I0318 18:22:37.357540 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 18:22:37.388611 master-0 kubenswrapper[30278]: I0318 18:22:37.377563 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 18 18:22:37.388611 master-0 kubenswrapper[30278]: I0318 18:22:37.386927 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 18 18:22:37.400858 master-0 kubenswrapper[30278]: I0318 18:22:37.400692 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 18:22:37.401640 master-0 kubenswrapper[30278]: E0318 18:22:37.401592 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847dbfce-3773-4d6d-af26-16040d410d2c" containerName="nova-metadata-metadata"
Mar 18 18:22:37.401640 master-0 kubenswrapper[30278]: I0318 18:22:37.401620 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="847dbfce-3773-4d6d-af26-16040d410d2c" containerName="nova-metadata-metadata"
Mar 18 18:22:37.401775 master-0 kubenswrapper[30278]: E0318 18:22:37.401714 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847dbfce-3773-4d6d-af26-16040d410d2c" containerName="nova-metadata-log"
Mar 18 18:22:37.401775 master-0 kubenswrapper[30278]: I0318 18:22:37.401724 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="847dbfce-3773-4d6d-af26-16040d410d2c" containerName="nova-metadata-log"
Mar 18 18:22:37.401775 master-0 kubenswrapper[30278]: E0318 18:22:37.401748 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e26ed5-e3a5-4852-b288-8185e1095c29" containerName="nova-scheduler-scheduler"
Mar 18 18:22:37.401775 master-0 kubenswrapper[30278]: I0318 18:22:37.401755 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e26ed5-e3a5-4852-b288-8185e1095c29" containerName="nova-scheduler-scheduler"
Mar 18 18:22:37.402353 master-0 kubenswrapper[30278]: I0318 18:22:37.402323 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="847dbfce-3773-4d6d-af26-16040d410d2c" containerName="nova-metadata-log"
Mar 18 18:22:37.402429 master-0 kubenswrapper[30278]: I0318 18:22:37.402400 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="847dbfce-3773-4d6d-af26-16040d410d2c" containerName="nova-metadata-metadata"
Mar 18 18:22:37.402429 master-0 kubenswrapper[30278]: I0318 18:22:37.402418 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e26ed5-e3a5-4852-b288-8185e1095c29" containerName="nova-scheduler-scheduler"
Mar 18 18:22:37.404029 master-0 kubenswrapper[30278]: I0318 18:22:37.403981 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 18 18:22:37.410973 master-0 kubenswrapper[30278]: I0318 18:22:37.410888 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 18 18:22:37.441041 master-0 kubenswrapper[30278]: I0318 18:22:37.423809 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 18 18:22:37.441041 master-0 kubenswrapper[30278]: I0318 18:22:37.430577 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzprk\" (UniqueName: \"kubernetes.io/projected/6f21f8ca-9905-414c-a2c5-f50ca82015e1-kube-api-access-gzprk\") pod \"nova-scheduler-0\" (UID: \"6f21f8ca-9905-414c-a2c5-f50ca82015e1\") " pod="openstack/nova-scheduler-0"
Mar 18 18:22:37.441041 master-0 kubenswrapper[30278]: I0318 18:22:37.430629 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f21f8ca-9905-414c-a2c5-f50ca82015e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f21f8ca-9905-414c-a2c5-f50ca82015e1\") " pod="openstack/nova-scheduler-0"
Mar 18 18:22:37.441041 master-0 kubenswrapper[30278]: I0318 18:22:37.430894 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f21f8ca-9905-414c-a2c5-f50ca82015e1-config-data\") pod \"nova-scheduler-0\" (UID: \"6f21f8ca-9905-414c-a2c5-f50ca82015e1\") " pod="openstack/nova-scheduler-0"
Mar 18 18:22:37.452794 master-0 kubenswrapper[30278]: I0318 18:22:37.452713 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 18 18:22:37.458311 master-0 kubenswrapper[30278]: I0318 18:22:37.455349 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 18 18:22:37.485318 master-0 kubenswrapper[30278]: I0318 18:22:37.461156 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 18 18:22:37.485318 master-0 kubenswrapper[30278]: I0318 18:22:37.461332 30278 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 18 18:22:37.485318 master-0 kubenswrapper[30278]: I0318 18:22:37.476588 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 18 18:22:37.534237 master-0 kubenswrapper[30278]: I0318 18:22:37.534157 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzprk\" (UniqueName: \"kubernetes.io/projected/6f21f8ca-9905-414c-a2c5-f50ca82015e1-kube-api-access-gzprk\") pod \"nova-scheduler-0\" (UID: \"6f21f8ca-9905-414c-a2c5-f50ca82015e1\") " pod="openstack/nova-scheduler-0"
Mar 18 18:22:37.534237 master-0 kubenswrapper[30278]: I0318 18:22:37.534230 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f21f8ca-9905-414c-a2c5-f50ca82015e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f21f8ca-9905-414c-a2c5-f50ca82015e1\") " pod="openstack/nova-scheduler-0"
Mar 18 18:22:37.534716 master-0 kubenswrapper[30278]: I0318 18:22:37.534392 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4eccac6-c568-43d3-9a32-a6ccff12973d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b4eccac6-c568-43d3-9a32-a6ccff12973d\") " pod="openstack/nova-metadata-0"
Mar 18 18:22:37.534716 master-0 kubenswrapper[30278]: I0318 18:22:37.534431 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4eccac6-c568-43d3-9a32-a6ccff12973d-logs\") pod \"nova-metadata-0\" (UID: \"b4eccac6-c568-43d3-9a32-a6ccff12973d\") " pod="openstack/nova-metadata-0"
Mar 18 18:22:37.534716 master-0 kubenswrapper[30278]: I0318 18:22:37.534488 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f21f8ca-9905-414c-a2c5-f50ca82015e1-config-data\") pod \"nova-scheduler-0\" (UID: \"6f21f8ca-9905-414c-a2c5-f50ca82015e1\") " pod="openstack/nova-scheduler-0"
Mar 18 18:22:37.534716 master-0 kubenswrapper[30278]: I0318 18:22:37.534517 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhghm\" (UniqueName: \"kubernetes.io/projected/b4eccac6-c568-43d3-9a32-a6ccff12973d-kube-api-access-bhghm\") pod \"nova-metadata-0\" (UID: \"b4eccac6-c568-43d3-9a32-a6ccff12973d\") " pod="openstack/nova-metadata-0"
Mar 18 18:22:37.534716 master-0 kubenswrapper[30278]: I0318 18:22:37.534548 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4eccac6-c568-43d3-9a32-a6ccff12973d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b4eccac6-c568-43d3-9a32-a6ccff12973d\") " pod="openstack/nova-metadata-0"
Mar 18 18:22:37.534716 master-0 kubenswrapper[30278]: I0318 18:22:37.534584 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4eccac6-c568-43d3-9a32-a6ccff12973d-config-data\") pod \"nova-metadata-0\" (UID: \"b4eccac6-c568-43d3-9a32-a6ccff12973d\") " pod="openstack/nova-metadata-0"
Mar 18 18:22:37.540425 master-0 kubenswrapper[30278]: I0318 18:22:37.539592 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f21f8ca-9905-414c-a2c5-f50ca82015e1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f21f8ca-9905-414c-a2c5-f50ca82015e1\") " pod="openstack/nova-scheduler-0"
Mar 18 18:22:37.550090 master-0 kubenswrapper[30278]: I0318 18:22:37.549296 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f21f8ca-9905-414c-a2c5-f50ca82015e1-config-data\") pod \"nova-scheduler-0\" (UID: \"6f21f8ca-9905-414c-a2c5-f50ca82015e1\") " pod="openstack/nova-scheduler-0"
Mar 18 18:22:37.555618 master-0 kubenswrapper[30278]: I0318 18:22:37.555433 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzprk\" (UniqueName: \"kubernetes.io/projected/6f21f8ca-9905-414c-a2c5-f50ca82015e1-kube-api-access-gzprk\") pod \"nova-scheduler-0\" (UID: \"6f21f8ca-9905-414c-a2c5-f50ca82015e1\") " pod="openstack/nova-scheduler-0"
Mar 18 18:22:37.636872 master-0 kubenswrapper[30278]: I0318 18:22:37.636761 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhghm\" (UniqueName: \"kubernetes.io/projected/b4eccac6-c568-43d3-9a32-a6ccff12973d-kube-api-access-bhghm\") pod \"nova-metadata-0\" (UID: \"b4eccac6-c568-43d3-9a32-a6ccff12973d\") " pod="openstack/nova-metadata-0"
Mar 18 18:22:37.636872 master-0 kubenswrapper[30278]: I0318 18:22:37.636868 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4eccac6-c568-43d3-9a32-a6ccff12973d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b4eccac6-c568-43d3-9a32-a6ccff12973d\") " pod="openstack/nova-metadata-0"
Mar 18 18:22:37.637495 master-0 kubenswrapper[30278]: I0318 18:22:37.637436 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4eccac6-c568-43d3-9a32-a6ccff12973d-config-data\") pod \"nova-metadata-0\" (UID: \"b4eccac6-c568-43d3-9a32-a6ccff12973d\") " pod="openstack/nova-metadata-0"
Mar 18 18:22:37.638580 master-0 kubenswrapper[30278]: I0318 18:22:37.638546 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4eccac6-c568-43d3-9a32-a6ccff12973d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b4eccac6-c568-43d3-9a32-a6ccff12973d\") " pod="openstack/nova-metadata-0"
Mar 18 18:22:37.639214 master-0 kubenswrapper[30278]: I0318 18:22:37.639181 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4eccac6-c568-43d3-9a32-a6ccff12973d-logs\") pod \"nova-metadata-0\" (UID: \"b4eccac6-c568-43d3-9a32-a6ccff12973d\") " pod="openstack/nova-metadata-0"
Mar 18 18:22:37.640777 master-0 kubenswrapper[30278]: I0318 18:22:37.640742 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4eccac6-c568-43d3-9a32-a6ccff12973d-logs\") pod \"nova-metadata-0\" (UID: \"b4eccac6-c568-43d3-9a32-a6ccff12973d\") " pod="openstack/nova-metadata-0"
Mar 18 18:22:37.642082 master-0 kubenswrapper[30278]: I0318 18:22:37.642013 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4eccac6-c568-43d3-9a32-a6ccff12973d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b4eccac6-c568-43d3-9a32-a6ccff12973d\") " pod="openstack/nova-metadata-0"
Mar 18 18:22:37.643002 master-0 kubenswrapper[30278]: I0318 18:22:37.642967 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4eccac6-c568-43d3-9a32-a6ccff12973d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b4eccac6-c568-43d3-9a32-a6ccff12973d\") " pod="openstack/nova-metadata-0"
Mar 18 18:22:37.645481 master-0 kubenswrapper[30278]: I0318
18:22:37.645384 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4eccac6-c568-43d3-9a32-a6ccff12973d-config-data\") pod \"nova-metadata-0\" (UID: \"b4eccac6-c568-43d3-9a32-a6ccff12973d\") " pod="openstack/nova-metadata-0" Mar 18 18:22:37.660971 master-0 kubenswrapper[30278]: I0318 18:22:37.660801 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhghm\" (UniqueName: \"kubernetes.io/projected/b4eccac6-c568-43d3-9a32-a6ccff12973d-kube-api-access-bhghm\") pod \"nova-metadata-0\" (UID: \"b4eccac6-c568-43d3-9a32-a6ccff12973d\") " pod="openstack/nova-metadata-0" Mar 18 18:22:37.776230 master-0 kubenswrapper[30278]: I0318 18:22:37.776155 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 18 18:22:37.794373 master-0 kubenswrapper[30278]: I0318 18:22:37.792644 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 18 18:22:38.399549 master-0 kubenswrapper[30278]: I0318 18:22:38.398750 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 18 18:22:38.477766 master-0 kubenswrapper[30278]: I0318 18:22:38.477658 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 18 18:22:39.073179 master-0 kubenswrapper[30278]: I0318 18:22:39.072696 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="847dbfce-3773-4d6d-af26-16040d410d2c" path="/var/lib/kubelet/pods/847dbfce-3773-4d6d-af26-16040d410d2c/volumes" Mar 18 18:22:39.073732 master-0 kubenswrapper[30278]: I0318 18:22:39.073690 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e26ed5-e3a5-4852-b288-8185e1095c29" path="/var/lib/kubelet/pods/d0e26ed5-e3a5-4852-b288-8185e1095c29/volumes" Mar 18 18:22:39.335848 master-0 kubenswrapper[30278]: I0318 18:22:39.335733 30278 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4eccac6-c568-43d3-9a32-a6ccff12973d","Type":"ContainerStarted","Data":"45d9db392059bbe60bc433a0bfcfac4c2b49cbf2d5a46e1b06111562fb86c7e4"} Mar 18 18:22:39.335848 master-0 kubenswrapper[30278]: I0318 18:22:39.335846 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4eccac6-c568-43d3-9a32-a6ccff12973d","Type":"ContainerStarted","Data":"c80b4e0ad0831b2bba7831e1dc4c838a45a99fc55e909c237389279e6947a155"} Mar 18 18:22:39.336756 master-0 kubenswrapper[30278]: I0318 18:22:39.335869 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b4eccac6-c568-43d3-9a32-a6ccff12973d","Type":"ContainerStarted","Data":"31378a9e6e6fe3f6fee40fb02e44efca868780c5a8bb3527f6ee652c11d30e64"} Mar 18 18:22:39.341896 master-0 kubenswrapper[30278]: I0318 18:22:39.341808 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f21f8ca-9905-414c-a2c5-f50ca82015e1","Type":"ContainerStarted","Data":"537a1a0709be6d94ec4c881da83c498ef61dba4799553a644f04fcf4a0149f69"} Mar 18 18:22:39.342119 master-0 kubenswrapper[30278]: I0318 18:22:39.341911 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f21f8ca-9905-414c-a2c5-f50ca82015e1","Type":"ContainerStarted","Data":"12e9649d6b9e6661f0778569cfa8cc070bdf42c3376336a08611742c5596f473"} Mar 18 18:22:39.371655 master-0 kubenswrapper[30278]: I0318 18:22:39.371324 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.371263427 podStartE2EDuration="2.371263427s" podCreationTimestamp="2026-03-18 18:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:22:39.370873186 +0000 UTC m=+1328.538057801" 
watchObservedRunningTime="2026-03-18 18:22:39.371263427 +0000 UTC m=+1328.538448052" Mar 18 18:22:39.405448 master-0 kubenswrapper[30278]: I0318 18:22:39.405323 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.405297973 podStartE2EDuration="2.405297973s" podCreationTimestamp="2026-03-18 18:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:22:39.395345196 +0000 UTC m=+1328.562529801" watchObservedRunningTime="2026-03-18 18:22:39.405297973 +0000 UTC m=+1328.572482568" Mar 18 18:22:42.776802 master-0 kubenswrapper[30278]: I0318 18:22:42.776601 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 18 18:22:42.951023 master-0 kubenswrapper[30278]: I0318 18:22:42.949554 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 18 18:22:42.951023 master-0 kubenswrapper[30278]: I0318 18:22:42.949669 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 18 18:22:43.981585 master-0 kubenswrapper[30278]: I0318 18:22:43.981487 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.20:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 18:22:43.982339 master-0 kubenswrapper[30278]: I0318 18:22:43.982011 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.20:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 18:22:47.776951 master-0 
kubenswrapper[30278]: I0318 18:22:47.776826 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 18 18:22:47.793513 master-0 kubenswrapper[30278]: I0318 18:22:47.793407 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 18 18:22:47.793513 master-0 kubenswrapper[30278]: I0318 18:22:47.793530 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 18 18:22:47.846398 master-0 kubenswrapper[30278]: I0318 18:22:47.844225 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 18 18:22:48.539820 master-0 kubenswrapper[30278]: I0318 18:22:48.539731 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 18 18:22:48.815559 master-0 kubenswrapper[30278]: I0318 18:22:48.815470 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b4eccac6-c568-43d3-9a32-a6ccff12973d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.22:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 18:22:48.816572 master-0 kubenswrapper[30278]: I0318 18:22:48.815608 30278 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b4eccac6-c568-43d3-9a32-a6ccff12973d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.22:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 18:22:50.949020 master-0 kubenswrapper[30278]: I0318 18:22:50.948873 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 18 18:22:50.949020 master-0 kubenswrapper[30278]: I0318 18:22:50.949034 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-api-0" Mar 18 18:22:52.959147 master-0 kubenswrapper[30278]: I0318 18:22:52.959043 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 18 18:22:52.961848 master-0 kubenswrapper[30278]: I0318 18:22:52.961800 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 18 18:22:52.968819 master-0 kubenswrapper[30278]: I0318 18:22:52.968749 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 18 18:22:53.819014 master-0 kubenswrapper[30278]: I0318 18:22:53.818922 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 18 18:22:55.793798 master-0 kubenswrapper[30278]: I0318 18:22:55.793693 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 18:22:55.794966 master-0 kubenswrapper[30278]: I0318 18:22:55.793829 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 18 18:22:57.800332 master-0 kubenswrapper[30278]: I0318 18:22:57.800182 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 18 18:22:57.801413 master-0 kubenswrapper[30278]: I0318 18:22:57.801356 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 18 18:22:57.812694 master-0 kubenswrapper[30278]: I0318 18:22:57.812604 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 18 18:22:57.903403 master-0 kubenswrapper[30278]: I0318 18:22:57.902189 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 18 18:23:24.148647 master-0 kubenswrapper[30278]: I0318 18:23:24.148565 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["sushy-emulator/sushy-emulator-59477995f9-q9kcc"] Mar 18 18:23:24.149617 master-0 kubenswrapper[30278]: I0318 18:23:24.148958 30278 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" podUID="d3cdc990-12c3-4d4e-b059-51f2fa10c969" containerName="sushy-emulator" containerID="cri-o://f748a457cb8354b70e164255646680d1b2d69dee36af4494bb47b34c0b66abb5" gracePeriod=30 Mar 18 18:23:25.151975 master-0 kubenswrapper[30278]: I0318 18:23:25.151902 30278 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:23:25.306520 master-0 kubenswrapper[30278]: I0318 18:23:25.304905 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/d3cdc990-12c3-4d4e-b059-51f2fa10c969-sushy-emulator-config\") pod \"d3cdc990-12c3-4d4e-b059-51f2fa10c969\" (UID: \"d3cdc990-12c3-4d4e-b059-51f2fa10c969\") " Mar 18 18:23:25.306520 master-0 kubenswrapper[30278]: I0318 18:23:25.305713 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl6r7\" (UniqueName: \"kubernetes.io/projected/d3cdc990-12c3-4d4e-b059-51f2fa10c969-kube-api-access-zl6r7\") pod \"d3cdc990-12c3-4d4e-b059-51f2fa10c969\" (UID: \"d3cdc990-12c3-4d4e-b059-51f2fa10c969\") " Mar 18 18:23:25.306520 master-0 kubenswrapper[30278]: I0318 18:23:25.305881 30278 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/d3cdc990-12c3-4d4e-b059-51f2fa10c969-os-client-config\") pod \"d3cdc990-12c3-4d4e-b059-51f2fa10c969\" (UID: \"d3cdc990-12c3-4d4e-b059-51f2fa10c969\") " Mar 18 18:23:25.307035 master-0 kubenswrapper[30278]: I0318 18:23:25.306729 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d3cdc990-12c3-4d4e-b059-51f2fa10c969-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "d3cdc990-12c3-4d4e-b059-51f2fa10c969" (UID: "d3cdc990-12c3-4d4e-b059-51f2fa10c969"). InnerVolumeSpecName "sushy-emulator-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 18:23:25.308766 master-0 kubenswrapper[30278]: I0318 18:23:25.308535 30278 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/d3cdc990-12c3-4d4e-b059-51f2fa10c969-sushy-emulator-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:23:25.310055 master-0 kubenswrapper[30278]: I0318 18:23:25.309998 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3cdc990-12c3-4d4e-b059-51f2fa10c969-kube-api-access-zl6r7" (OuterVolumeSpecName: "kube-api-access-zl6r7") pod "d3cdc990-12c3-4d4e-b059-51f2fa10c969" (UID: "d3cdc990-12c3-4d4e-b059-51f2fa10c969"). InnerVolumeSpecName "kube-api-access-zl6r7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 18:23:25.311136 master-0 kubenswrapper[30278]: I0318 18:23:25.311073 30278 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3cdc990-12c3-4d4e-b059-51f2fa10c969-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "d3cdc990-12c3-4d4e-b059-51f2fa10c969" (UID: "d3cdc990-12c3-4d4e-b059-51f2fa10c969"). InnerVolumeSpecName "os-client-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 18:23:25.323535 master-0 kubenswrapper[30278]: I0318 18:23:25.323452 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j"] Mar 18 18:23:25.324452 master-0 kubenswrapper[30278]: E0318 18:23:25.324423 30278 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3cdc990-12c3-4d4e-b059-51f2fa10c969" containerName="sushy-emulator" Mar 18 18:23:25.324452 master-0 kubenswrapper[30278]: I0318 18:23:25.324451 30278 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3cdc990-12c3-4d4e-b059-51f2fa10c969" containerName="sushy-emulator" Mar 18 18:23:25.324965 master-0 kubenswrapper[30278]: I0318 18:23:25.324937 30278 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3cdc990-12c3-4d4e-b059-51f2fa10c969" containerName="sushy-emulator" Mar 18 18:23:25.326211 master-0 kubenswrapper[30278]: I0318 18:23:25.326180 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" Mar 18 18:23:25.338211 master-0 kubenswrapper[30278]: I0318 18:23:25.338138 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j"] Mar 18 18:23:25.411636 master-0 kubenswrapper[30278]: I0318 18:23:25.411505 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/cc64c3e9-3040-433c-be1c-8661f06f823e-sushy-emulator-config\") pod \"sushy-emulator-54b65fbdd6-d5q7j\" (UID: \"cc64c3e9-3040-433c-be1c-8661f06f823e\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" Mar 18 18:23:25.411978 master-0 kubenswrapper[30278]: I0318 18:23:25.411671 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkttm\" (UniqueName: \"kubernetes.io/projected/cc64c3e9-3040-433c-be1c-8661f06f823e-kube-api-access-wkttm\") pod \"sushy-emulator-54b65fbdd6-d5q7j\" (UID: \"cc64c3e9-3040-433c-be1c-8661f06f823e\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" Mar 18 18:23:25.411978 master-0 kubenswrapper[30278]: I0318 18:23:25.411817 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/cc64c3e9-3040-433c-be1c-8661f06f823e-os-client-config\") pod \"sushy-emulator-54b65fbdd6-d5q7j\" (UID: \"cc64c3e9-3040-433c-be1c-8661f06f823e\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" Mar 18 18:23:25.412414 master-0 kubenswrapper[30278]: I0318 18:23:25.412340 30278 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl6r7\" (UniqueName: \"kubernetes.io/projected/d3cdc990-12c3-4d4e-b059-51f2fa10c969-kube-api-access-zl6r7\") on node \"master-0\" DevicePath \"\"" Mar 18 18:23:25.412414 master-0 kubenswrapper[30278]: I0318 18:23:25.412372 30278 
reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/d3cdc990-12c3-4d4e-b059-51f2fa10c969-os-client-config\") on node \"master-0\" DevicePath \"\"" Mar 18 18:23:25.483628 master-0 kubenswrapper[30278]: I0318 18:23:25.483558 30278 generic.go:334] "Generic (PLEG): container finished" podID="d3cdc990-12c3-4d4e-b059-51f2fa10c969" containerID="f748a457cb8354b70e164255646680d1b2d69dee36af4494bb47b34c0b66abb5" exitCode=0 Mar 18 18:23:25.483628 master-0 kubenswrapper[30278]: I0318 18:23:25.483629 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" event={"ID":"d3cdc990-12c3-4d4e-b059-51f2fa10c969","Type":"ContainerDied","Data":"f748a457cb8354b70e164255646680d1b2d69dee36af4494bb47b34c0b66abb5"} Mar 18 18:23:25.484009 master-0 kubenswrapper[30278]: I0318 18:23:25.483696 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" event={"ID":"d3cdc990-12c3-4d4e-b059-51f2fa10c969","Type":"ContainerDied","Data":"2ecee873876be4aa2f20a7f07d9a54c49f5e570dfc099989af2bb9c13fb1c475"} Mar 18 18:23:25.484009 master-0 kubenswrapper[30278]: I0318 18:23:25.483722 30278 scope.go:117] "RemoveContainer" containerID="f748a457cb8354b70e164255646680d1b2d69dee36af4494bb47b34c0b66abb5" Mar 18 18:23:25.484009 master-0 kubenswrapper[30278]: I0318 18:23:25.483789 30278 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-q9kcc" Mar 18 18:23:25.514892 master-0 kubenswrapper[30278]: I0318 18:23:25.514824 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/cc64c3e9-3040-433c-be1c-8661f06f823e-sushy-emulator-config\") pod \"sushy-emulator-54b65fbdd6-d5q7j\" (UID: \"cc64c3e9-3040-433c-be1c-8661f06f823e\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" Mar 18 18:23:25.515105 master-0 kubenswrapper[30278]: I0318 18:23:25.514942 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkttm\" (UniqueName: \"kubernetes.io/projected/cc64c3e9-3040-433c-be1c-8661f06f823e-kube-api-access-wkttm\") pod \"sushy-emulator-54b65fbdd6-d5q7j\" (UID: \"cc64c3e9-3040-433c-be1c-8661f06f823e\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" Mar 18 18:23:25.515105 master-0 kubenswrapper[30278]: I0318 18:23:25.514997 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/cc64c3e9-3040-433c-be1c-8661f06f823e-os-client-config\") pod \"sushy-emulator-54b65fbdd6-d5q7j\" (UID: \"cc64c3e9-3040-433c-be1c-8661f06f823e\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" Mar 18 18:23:25.517152 master-0 kubenswrapper[30278]: I0318 18:23:25.517033 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/cc64c3e9-3040-433c-be1c-8661f06f823e-sushy-emulator-config\") pod \"sushy-emulator-54b65fbdd6-d5q7j\" (UID: \"cc64c3e9-3040-433c-be1c-8661f06f823e\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" Mar 18 18:23:25.519763 master-0 kubenswrapper[30278]: I0318 18:23:25.519735 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: 
\"kubernetes.io/secret/cc64c3e9-3040-433c-be1c-8661f06f823e-os-client-config\") pod \"sushy-emulator-54b65fbdd6-d5q7j\" (UID: \"cc64c3e9-3040-433c-be1c-8661f06f823e\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" Mar 18 18:23:25.521896 master-0 kubenswrapper[30278]: I0318 18:23:25.521777 30278 scope.go:117] "RemoveContainer" containerID="f748a457cb8354b70e164255646680d1b2d69dee36af4494bb47b34c0b66abb5" Mar 18 18:23:25.522647 master-0 kubenswrapper[30278]: E0318 18:23:25.522622 30278 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f748a457cb8354b70e164255646680d1b2d69dee36af4494bb47b34c0b66abb5\": container with ID starting with f748a457cb8354b70e164255646680d1b2d69dee36af4494bb47b34c0b66abb5 not found: ID does not exist" containerID="f748a457cb8354b70e164255646680d1b2d69dee36af4494bb47b34c0b66abb5" Mar 18 18:23:25.522731 master-0 kubenswrapper[30278]: I0318 18:23:25.522663 30278 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f748a457cb8354b70e164255646680d1b2d69dee36af4494bb47b34c0b66abb5"} err="failed to get container status \"f748a457cb8354b70e164255646680d1b2d69dee36af4494bb47b34c0b66abb5\": rpc error: code = NotFound desc = could not find container \"f748a457cb8354b70e164255646680d1b2d69dee36af4494bb47b34c0b66abb5\": container with ID starting with f748a457cb8354b70e164255646680d1b2d69dee36af4494bb47b34c0b66abb5 not found: ID does not exist" Mar 18 18:23:25.548319 master-0 kubenswrapper[30278]: I0318 18:23:25.543397 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkttm\" (UniqueName: \"kubernetes.io/projected/cc64c3e9-3040-433c-be1c-8661f06f823e-kube-api-access-wkttm\") pod \"sushy-emulator-54b65fbdd6-d5q7j\" (UID: \"cc64c3e9-3040-433c-be1c-8661f06f823e\") " pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" Mar 18 18:23:25.561675 master-0 kubenswrapper[30278]: I0318 18:23:25.561515 
30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-q9kcc"] Mar 18 18:23:25.574558 master-0 kubenswrapper[30278]: I0318 18:23:25.574473 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-q9kcc"] Mar 18 18:23:25.733538 master-0 kubenswrapper[30278]: I0318 18:23:25.733455 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" Mar 18 18:23:26.552305 master-0 kubenswrapper[30278]: I0318 18:23:26.551606 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j"] Mar 18 18:23:27.074418 master-0 kubenswrapper[30278]: I0318 18:23:27.074340 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3cdc990-12c3-4d4e-b059-51f2fa10c969" path="/var/lib/kubelet/pods/d3cdc990-12c3-4d4e-b059-51f2fa10c969/volumes" Mar 18 18:23:27.553176 master-0 kubenswrapper[30278]: I0318 18:23:27.553085 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" event={"ID":"cc64c3e9-3040-433c-be1c-8661f06f823e","Type":"ContainerStarted","Data":"010598fddf7b68f1855138abc7d0a770871efb6b4536f3e2f44d16251d0d1f9f"} Mar 18 18:23:27.553176 master-0 kubenswrapper[30278]: I0318 18:23:27.553161 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" event={"ID":"cc64c3e9-3040-433c-be1c-8661f06f823e","Type":"ContainerStarted","Data":"f2374b225b147b516abf696c4363aab5676deefac9773750bb7cd4e94db994f0"} Mar 18 18:23:27.579441 master-0 kubenswrapper[30278]: I0318 18:23:27.579184 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" podStartSLOduration=2.579161338 podStartE2EDuration="2.579161338s" podCreationTimestamp="2026-03-18 18:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:23:27.577685638 +0000 UTC m=+1376.744870263" watchObservedRunningTime="2026-03-18 18:23:27.579161338 +0000 UTC m=+1376.746345933" Mar 18 18:23:35.734516 master-0 kubenswrapper[30278]: I0318 18:23:35.734251 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" Mar 18 18:23:35.736103 master-0 kubenswrapper[30278]: I0318 18:23:35.735933 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" Mar 18 18:23:35.750910 master-0 kubenswrapper[30278]: I0318 18:23:35.750862 30278 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" Mar 18 18:23:36.733381 master-0 kubenswrapper[30278]: I0318 18:23:36.733248 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-54b65fbdd6-d5q7j" Mar 18 18:24:38.806017 master-0 kubenswrapper[30278]: I0318 18:24:38.805900 30278 scope.go:117] "RemoveContainer" containerID="001f46a4ee094afca4ae3cd2910558d34083188c60a8bd9c1a047eafc77e0feb" Mar 18 18:24:38.870713 master-0 kubenswrapper[30278]: I0318 18:24:38.870034 30278 scope.go:117] "RemoveContainer" containerID="844666b31f555d1f241e2bf72292cd9ee160903f58170efe6e318dda13b0da28" Mar 18 18:24:38.935410 master-0 kubenswrapper[30278]: I0318 18:24:38.935124 30278 scope.go:117] "RemoveContainer" containerID="9af7957f55fdef8c5432d1bd3562795df453b96d233be52398beb8a10d026b78" Mar 18 18:24:38.981642 master-0 kubenswrapper[30278]: I0318 18:24:38.981467 30278 scope.go:117] "RemoveContainer" containerID="345815fa2d75307faa3529bd80deec2d21f6243026e61b5e7c804aefe401e81e" Mar 18 18:24:39.022236 master-0 kubenswrapper[30278]: I0318 18:24:39.022019 30278 scope.go:117] "RemoveContainer" containerID="e4724c9c85281f21d876aa8d90072b8d727cbfd7b25d5c1cb1f462ce5febb85c" 
Mar 18 18:24:42.378567 master-0 kubenswrapper[30278]: I0318 18:24:42.378304 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-85f7577d78-xnx8x_e0e04440-c08b-452d-9be6-9f70a4027c92/cluster-samples-operator/0.log"
Mar 18 18:24:42.386376 master-0 kubenswrapper[30278]: I0318 18:24:42.386242 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-85f7577d78-xnx8x_e0e04440-c08b-452d-9be6-9f70a4027c92/cluster-samples-operator-watch/0.log"
Mar 18 18:25:39.222154 master-0 kubenswrapper[30278]: I0318 18:25:39.222019 30278 scope.go:117] "RemoveContainer" containerID="19ba6ac1a7adc3781b9fc8ccb4cd5a1cf73198f2789227d7e11a56f78c34e3de"
Mar 18 18:27:34.521039 master-0 kubenswrapper[30278]: I0318 18:27:34.520960 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fwdtq/must-gather-pfnmc"]
Mar 18 18:27:34.540323 master-0 kubenswrapper[30278]: I0318 18:27:34.539614 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fwdtq/must-gather-pfnmc"
Mar 18 18:27:34.542745 master-0 kubenswrapper[30278]: I0318 18:27:34.542655 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fwdtq"/"kube-root-ca.crt"
Mar 18 18:27:34.542913 master-0 kubenswrapper[30278]: I0318 18:27:34.542889 30278 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fwdtq"/"openshift-service-ca.crt"
Mar 18 18:27:34.594935 master-0 kubenswrapper[30278]: I0318 18:27:34.594862 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6nrs\" (UniqueName: \"kubernetes.io/projected/e08c5fa5-4c9f-4ca3-80aa-3313df9a9f44-kube-api-access-v6nrs\") pod \"must-gather-pfnmc\" (UID: \"e08c5fa5-4c9f-4ca3-80aa-3313df9a9f44\") " pod="openshift-must-gather-fwdtq/must-gather-pfnmc"
Mar 18 18:27:34.595241 master-0 kubenswrapper[30278]: I0318 18:27:34.595043 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e08c5fa5-4c9f-4ca3-80aa-3313df9a9f44-must-gather-output\") pod \"must-gather-pfnmc\" (UID: \"e08c5fa5-4c9f-4ca3-80aa-3313df9a9f44\") " pod="openshift-must-gather-fwdtq/must-gather-pfnmc"
Mar 18 18:27:34.628376 master-0 kubenswrapper[30278]: I0318 18:27:34.626808 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fwdtq/must-gather-rfdb6"]
Mar 18 18:27:34.629542 master-0 kubenswrapper[30278]: I0318 18:27:34.629507 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fwdtq/must-gather-rfdb6"
Mar 18 18:27:34.649026 master-0 kubenswrapper[30278]: I0318 18:27:34.648984 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fwdtq/must-gather-pfnmc"]
Mar 18 18:27:34.667295 master-0 kubenswrapper[30278]: I0318 18:27:34.667215 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fwdtq/must-gather-rfdb6"]
Mar 18 18:27:34.698742 master-0 kubenswrapper[30278]: I0318 18:27:34.698676 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6nrs\" (UniqueName: \"kubernetes.io/projected/e08c5fa5-4c9f-4ca3-80aa-3313df9a9f44-kube-api-access-v6nrs\") pod \"must-gather-pfnmc\" (UID: \"e08c5fa5-4c9f-4ca3-80aa-3313df9a9f44\") " pod="openshift-must-gather-fwdtq/must-gather-pfnmc"
Mar 18 18:27:34.699217 master-0 kubenswrapper[30278]: I0318 18:27:34.699198 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e08c5fa5-4c9f-4ca3-80aa-3313df9a9f44-must-gather-output\") pod \"must-gather-pfnmc\" (UID: \"e08c5fa5-4c9f-4ca3-80aa-3313df9a9f44\") " pod="openshift-must-gather-fwdtq/must-gather-pfnmc"
Mar 18 18:27:34.700008 master-0 kubenswrapper[30278]: I0318 18:27:34.699981 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84q4g\" (UniqueName: \"kubernetes.io/projected/e1b9caf9-4d4a-41ee-8795-6da462241276-kube-api-access-84q4g\") pod \"must-gather-rfdb6\" (UID: \"e1b9caf9-4d4a-41ee-8795-6da462241276\") " pod="openshift-must-gather-fwdtq/must-gather-rfdb6"
Mar 18 18:27:34.700184 master-0 kubenswrapper[30278]: I0318 18:27:34.700168 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1b9caf9-4d4a-41ee-8795-6da462241276-must-gather-output\") pod \"must-gather-rfdb6\" (UID: \"e1b9caf9-4d4a-41ee-8795-6da462241276\") " pod="openshift-must-gather-fwdtq/must-gather-rfdb6"
Mar 18 18:27:34.700369 master-0 kubenswrapper[30278]: I0318 18:27:34.699852 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e08c5fa5-4c9f-4ca3-80aa-3313df9a9f44-must-gather-output\") pod \"must-gather-pfnmc\" (UID: \"e08c5fa5-4c9f-4ca3-80aa-3313df9a9f44\") " pod="openshift-must-gather-fwdtq/must-gather-pfnmc"
Mar 18 18:27:34.743075 master-0 kubenswrapper[30278]: I0318 18:27:34.742903 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6nrs\" (UniqueName: \"kubernetes.io/projected/e08c5fa5-4c9f-4ca3-80aa-3313df9a9f44-kube-api-access-v6nrs\") pod \"must-gather-pfnmc\" (UID: \"e08c5fa5-4c9f-4ca3-80aa-3313df9a9f44\") " pod="openshift-must-gather-fwdtq/must-gather-pfnmc"
Mar 18 18:27:34.806680 master-0 kubenswrapper[30278]: I0318 18:27:34.805175 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84q4g\" (UniqueName: \"kubernetes.io/projected/e1b9caf9-4d4a-41ee-8795-6da462241276-kube-api-access-84q4g\") pod \"must-gather-rfdb6\" (UID: \"e1b9caf9-4d4a-41ee-8795-6da462241276\") " pod="openshift-must-gather-fwdtq/must-gather-rfdb6"
Mar 18 18:27:34.806680 master-0 kubenswrapper[30278]: I0318 18:27:34.805696 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e1b9caf9-4d4a-41ee-8795-6da462241276-must-gather-output\") pod \"must-gather-rfdb6\" (UID: \"e1b9caf9-4d4a-41ee-8795-6da462241276\") " pod="openshift-must-gather-fwdtq/must-gather-rfdb6"
Mar 18 18:27:34.807120 master-0 kubenswrapper[30278]: I0318 18:27:34.806809 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName:
\"kubernetes.io/empty-dir/e1b9caf9-4d4a-41ee-8795-6da462241276-must-gather-output\") pod \"must-gather-rfdb6\" (UID: \"e1b9caf9-4d4a-41ee-8795-6da462241276\") " pod="openshift-must-gather-fwdtq/must-gather-rfdb6" Mar 18 18:27:34.823962 master-0 kubenswrapper[30278]: I0318 18:27:34.822018 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84q4g\" (UniqueName: \"kubernetes.io/projected/e1b9caf9-4d4a-41ee-8795-6da462241276-kube-api-access-84q4g\") pod \"must-gather-rfdb6\" (UID: \"e1b9caf9-4d4a-41ee-8795-6da462241276\") " pod="openshift-must-gather-fwdtq/must-gather-rfdb6" Mar 18 18:27:34.913828 master-0 kubenswrapper[30278]: I0318 18:27:34.913745 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fwdtq/must-gather-pfnmc" Mar 18 18:27:34.958143 master-0 kubenswrapper[30278]: I0318 18:27:34.957357 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fwdtq/must-gather-rfdb6" Mar 18 18:27:35.558191 master-0 kubenswrapper[30278]: I0318 18:27:35.558113 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fwdtq/must-gather-pfnmc"] Mar 18 18:27:35.575735 master-0 kubenswrapper[30278]: I0318 18:27:35.575700 30278 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 18:27:35.764529 master-0 kubenswrapper[30278]: I0318 18:27:35.763883 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fwdtq/must-gather-rfdb6"] Mar 18 18:27:35.948947 master-0 kubenswrapper[30278]: I0318 18:27:35.948858 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fwdtq/must-gather-pfnmc" event={"ID":"e08c5fa5-4c9f-4ca3-80aa-3313df9a9f44","Type":"ContainerStarted","Data":"7a4c55e4e9a88b33c63e3f8ab1442d35de4f4671669a263d3f051cd12b744db3"} Mar 18 18:27:35.950792 master-0 kubenswrapper[30278]: I0318 18:27:35.950715 30278 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fwdtq/must-gather-rfdb6" event={"ID":"e1b9caf9-4d4a-41ee-8795-6da462241276","Type":"ContainerStarted","Data":"3ba88394359fa770b8c33d42711c001b306bccf4795d35c4883f925b586e0373"} Mar 18 18:27:38.009327 master-0 kubenswrapper[30278]: I0318 18:27:38.006533 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fwdtq/must-gather-rfdb6" event={"ID":"e1b9caf9-4d4a-41ee-8795-6da462241276","Type":"ContainerStarted","Data":"fc607f35a3499ac03e6573cbaa07139fbe73f6686266efa7ae0fb3a309ed5111"} Mar 18 18:27:39.030743 master-0 kubenswrapper[30278]: I0318 18:27:39.030640 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fwdtq/must-gather-rfdb6" event={"ID":"e1b9caf9-4d4a-41ee-8795-6da462241276","Type":"ContainerStarted","Data":"7cea83723537671f8ca33e89d5b09dc5304d0b6e24f5c9b9faaedc8300c95f30"} Mar 18 18:27:39.163304 master-0 kubenswrapper[30278]: I0318 18:27:39.153230 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fwdtq/must-gather-rfdb6" podStartSLOduration=3.790303103 podStartE2EDuration="5.153191249s" podCreationTimestamp="2026-03-18 18:27:34 +0000 UTC" firstStartedPulling="2026-03-18 18:27:35.770439004 +0000 UTC m=+1624.937623619" lastFinishedPulling="2026-03-18 18:27:37.13332717 +0000 UTC m=+1626.300511765" observedRunningTime="2026-03-18 18:27:39.150205608 +0000 UTC m=+1628.317390223" watchObservedRunningTime="2026-03-18 18:27:39.153191249 +0000 UTC m=+1628.320375844" Mar 18 18:27:39.642367 master-0 kubenswrapper[30278]: I0318 18:27:39.640359 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-7d58488df-l48xm_fdab27a1-1d7a-4dc5-b828-eba3f57592dd/cluster-version-operator/1.log" Mar 18 18:27:40.289020 master-0 kubenswrapper[30278]: I0318 18:27:40.288954 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-version_cluster-version-operator-7d58488df-l48xm_fdab27a1-1d7a-4dc5-b828-eba3f57592dd/cluster-version-operator/2.log" Mar 18 18:27:46.647810 master-0 kubenswrapper[30278]: I0318 18:27:46.647345 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-49xpf_dc688679-6ccb-42d6-aa9b-620284991fbe/nmstate-console-plugin/0.log" Mar 18 18:27:46.699063 master-0 kubenswrapper[30278]: I0318 18:27:46.699004 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9kcdn_5d513b42-f68d-4b03-b420-71e8e8cf0d75/nmstate-handler/0.log" Mar 18 18:27:46.801844 master-0 kubenswrapper[30278]: I0318 18:27:46.801792 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-zc4ph_185bb037-2ee1-460c-b291-beb7bf78bb99/nmstate-metrics/0.log" Mar 18 18:27:46.821728 master-0 kubenswrapper[30278]: I0318 18:27:46.820411 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-zc4ph_185bb037-2ee1-460c-b291-beb7bf78bb99/kube-rbac-proxy/0.log" Mar 18 18:27:46.855417 master-0 kubenswrapper[30278]: I0318 18:27:46.853439 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-gvw4g_ace8aac5-f45b-4819-b121-bf9db0c63e4f/nmstate-operator/0.log" Mar 18 18:27:46.905403 master-0 kubenswrapper[30278]: I0318 18:27:46.905245 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-dlkh5_56c34c5b-17a3-4109-b2fa-27d0db19d95c/nmstate-webhook/0.log" Mar 18 18:27:47.537622 master-0 kubenswrapper[30278]: I0318 18:27:47.537426 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log" Mar 18 18:27:47.855533 master-0 kubenswrapper[30278]: I0318 18:27:47.854661 30278 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-skcb4_0326959b-b1d6-42ef-9fe5-bb33aa37df40/controller/0.log" Mar 18 18:27:47.869962 master-0 kubenswrapper[30278]: I0318 18:27:47.867668 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-skcb4_0326959b-b1d6-42ef-9fe5-bb33aa37df40/kube-rbac-proxy/0.log" Mar 18 18:27:47.888313 master-0 kubenswrapper[30278]: I0318 18:27:47.887619 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-g4479_efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9/frr-k8s-webhook-server/0.log" Mar 18 18:27:47.956303 master-0 kubenswrapper[30278]: I0318 18:27:47.955785 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/controller/0.log" Mar 18 18:27:47.964297 master-0 kubenswrapper[30278]: I0318 18:27:47.963412 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log" Mar 18 18:27:48.002302 master-0 kubenswrapper[30278]: I0318 18:27:48.001495 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log" Mar 18 18:27:48.037300 master-0 kubenswrapper[30278]: I0318 18:27:48.035350 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log" Mar 18 18:27:48.099302 master-0 kubenswrapper[30278]: I0318 18:27:48.091679 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log" Mar 18 18:27:48.140381 master-0 kubenswrapper[30278]: I0318 18:27:48.135470 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log" Mar 18 18:27:48.164954 master-0 
kubenswrapper[30278]: I0318 18:27:48.164891 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log" Mar 18 18:27:48.203062 master-0 kubenswrapper[30278]: I0318 18:27:48.202995 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log" Mar 18 18:27:48.315307 master-0 kubenswrapper[30278]: I0318 18:27:48.314729 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_08451d5b-cf84-45a1-a16d-7ce10a83a6e7/installer/0.log" Mar 18 18:27:48.410519 master-0 kubenswrapper[30278]: I0318 18:27:48.409526 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_cd9d8bd7-68a0-458f-9d25-f600932e303c/installer/0.log" Mar 18 18:27:49.328304 master-0 kubenswrapper[30278]: I0318 18:27:49.325693 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/frr/0.log" Mar 18 18:27:49.345822 master-0 kubenswrapper[30278]: I0318 18:27:49.345752 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/reloader/0.log" Mar 18 18:27:49.368319 master-0 kubenswrapper[30278]: I0318 18:27:49.367629 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/frr-metrics/0.log" Mar 18 18:27:49.376706 master-0 kubenswrapper[30278]: I0318 18:27:49.376652 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/kube-rbac-proxy/0.log" Mar 18 18:27:49.389621 master-0 kubenswrapper[30278]: I0318 18:27:49.388842 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/kube-rbac-proxy-frr/0.log" Mar 18 18:27:49.403403 master-0 kubenswrapper[30278]: I0318 18:27:49.401711 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/cp-frr-files/0.log" Mar 18 18:27:49.408308 master-0 kubenswrapper[30278]: I0318 18:27:49.408049 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/cp-reloader/0.log" Mar 18 18:27:49.428039 master-0 kubenswrapper[30278]: I0318 18:27:49.427907 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/cp-metrics/0.log" Mar 18 18:27:49.473421 master-0 kubenswrapper[30278]: I0318 18:27:49.472564 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-848f479545-kv7v2_79b7d491-7665-41af-95d6-f17d8ce48257/manager/0.log" Mar 18 18:27:49.495538 master-0 kubenswrapper[30278]: I0318 18:27:49.492304 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7f9bdbf4b-qndmm_65e5c2ef-6493-4705-b8e2-36ee0cae8c27/webhook-server/0.log" Mar 18 18:27:50.088296 master-0 kubenswrapper[30278]: I0318 18:27:50.087383 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m67cm_8f8a9e5f-b9b7-4366-a778-1bf7177693c5/speaker/0.log" Mar 18 18:27:50.094759 master-0 kubenswrapper[30278]: I0318 18:27:50.094693 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-m67cm_8f8a9e5f-b9b7-4366-a778-1bf7177693c5/kube-rbac-proxy/0.log" Mar 18 18:27:50.217076 master-0 kubenswrapper[30278]: I0318 18:27:50.216385 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/assisted-installer_assisted-installer-controller-trlzv_be6633f4-7370-49b8-a607-6a3fa52a098e/assisted-installer-controller/0.log" Mar 18 18:27:51.057687 master-0 kubenswrapper[30278]: I0318 18:27:51.052406 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-79cbc94fc7-tlmnv_a85f9e61-015c-41d5-bb38-de74da6a46da/oauth-openshift/0.log" Mar 18 18:27:52.543524 master-0 kubenswrapper[30278]: I0318 18:27:52.543439 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-8sxdf_c087ce06-a16b-41f4-ba93-8fccdee09003/authentication-operator/2.log" Mar 18 18:27:52.686590 master-0 kubenswrapper[30278]: I0318 18:27:52.686525 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-8sxdf_c087ce06-a16b-41f4-ba93-8fccdee09003/authentication-operator/3.log" Mar 18 18:27:54.548517 master-0 kubenswrapper[30278]: I0318 18:27:54.546399 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fwdtq/must-gather-pfnmc" event={"ID":"e08c5fa5-4c9f-4ca3-80aa-3313df9a9f44","Type":"ContainerStarted","Data":"ecac9ad4975639c380d8571732a84075a7c0b028ee1a2e470e0501832f57a6ea"} Mar 18 18:27:55.000596 master-0 kubenswrapper[30278]: I0318 18:27:55.000463 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-m5dh4_c57f282a-829b-41b2-827a-f4bc598245a2/router/4.log" Mar 18 18:27:55.017177 master-0 kubenswrapper[30278]: I0318 18:27:55.017110 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-m5dh4_c57f282a-829b-41b2-827a-f4bc598245a2/router/3.log" Mar 18 18:27:55.576633 master-0 kubenswrapper[30278]: I0318 18:27:55.576558 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fwdtq/must-gather-pfnmc" 
event={"ID":"e08c5fa5-4c9f-4ca3-80aa-3313df9a9f44","Type":"ContainerStarted","Data":"ad7286e4a7b6f524b63913dbaf8b3cc87e4e8bbad19e3820f491a2be5e537500"} Mar 18 18:27:55.930027 master-0 kubenswrapper[30278]: I0318 18:27:55.929868 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-688fbbb854-6n26v_43fab0f2-5cfd-4b5e-a632-728fd5b960fd/oauth-apiserver/0.log" Mar 18 18:27:55.944017 master-0 kubenswrapper[30278]: I0318 18:27:55.943952 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-688fbbb854-6n26v_43fab0f2-5cfd-4b5e-a632-728fd5b960fd/fix-audit-permissions/0.log" Mar 18 18:27:56.189914 master-0 kubenswrapper[30278]: I0318 18:27:56.189736 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fwdtq/must-gather-pfnmc" podStartSLOduration=4.346894261 podStartE2EDuration="22.189712891s" podCreationTimestamp="2026-03-18 18:27:34 +0000 UTC" firstStartedPulling="2026-03-18 18:27:35.573841286 +0000 UTC m=+1624.741025881" lastFinishedPulling="2026-03-18 18:27:53.416659916 +0000 UTC m=+1642.583844511" observedRunningTime="2026-03-18 18:27:55.604510912 +0000 UTC m=+1644.771695507" watchObservedRunningTime="2026-03-18 18:27:56.189712891 +0000 UTC m=+1645.356897476" Mar 18 18:27:56.207508 master-0 kubenswrapper[30278]: I0318 18:27:56.207429 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2"] Mar 18 18:27:56.209614 master-0 kubenswrapper[30278]: I0318 18:27:56.209585 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.254229 master-0 kubenswrapper[30278]: I0318 18:27:56.254027 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2"] Mar 18 18:27:56.323187 master-0 kubenswrapper[30278]: I0318 18:27:56.323097 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzc4q\" (UniqueName: \"kubernetes.io/projected/5f7fab71-72cf-42cb-afb6-4786b85c1e10-kube-api-access-qzc4q\") pod \"perf-node-gather-daemonset-bbjc2\" (UID: \"5f7fab71-72cf-42cb-afb6-4786b85c1e10\") " pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.323504 master-0 kubenswrapper[30278]: I0318 18:27:56.323390 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/5f7fab71-72cf-42cb-afb6-4786b85c1e10-podres\") pod \"perf-node-gather-daemonset-bbjc2\" (UID: \"5f7fab71-72cf-42cb-afb6-4786b85c1e10\") " pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.324091 master-0 kubenswrapper[30278]: I0318 18:27:56.324054 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5f7fab71-72cf-42cb-afb6-4786b85c1e10-sys\") pod \"perf-node-gather-daemonset-bbjc2\" (UID: \"5f7fab71-72cf-42cb-afb6-4786b85c1e10\") " pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.324159 master-0 kubenswrapper[30278]: I0318 18:27:56.324129 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/5f7fab71-72cf-42cb-afb6-4786b85c1e10-proc\") pod \"perf-node-gather-daemonset-bbjc2\" (UID: \"5f7fab71-72cf-42cb-afb6-4786b85c1e10\") " 
pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.324361 master-0 kubenswrapper[30278]: I0318 18:27:56.324329 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f7fab71-72cf-42cb-afb6-4786b85c1e10-lib-modules\") pod \"perf-node-gather-daemonset-bbjc2\" (UID: \"5f7fab71-72cf-42cb-afb6-4786b85c1e10\") " pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.428072 master-0 kubenswrapper[30278]: I0318 18:27:56.427985 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5f7fab71-72cf-42cb-afb6-4786b85c1e10-sys\") pod \"perf-node-gather-daemonset-bbjc2\" (UID: \"5f7fab71-72cf-42cb-afb6-4786b85c1e10\") " pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.428413 master-0 kubenswrapper[30278]: I0318 18:27:56.428251 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5f7fab71-72cf-42cb-afb6-4786b85c1e10-sys\") pod \"perf-node-gather-daemonset-bbjc2\" (UID: \"5f7fab71-72cf-42cb-afb6-4786b85c1e10\") " pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.428413 master-0 kubenswrapper[30278]: I0318 18:27:56.428376 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/5f7fab71-72cf-42cb-afb6-4786b85c1e10-proc\") pod \"perf-node-gather-daemonset-bbjc2\" (UID: \"5f7fab71-72cf-42cb-afb6-4786b85c1e10\") " pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.428767 master-0 kubenswrapper[30278]: I0318 18:27:56.428736 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f7fab71-72cf-42cb-afb6-4786b85c1e10-lib-modules\") pod 
\"perf-node-gather-daemonset-bbjc2\" (UID: \"5f7fab71-72cf-42cb-afb6-4786b85c1e10\") " pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.429071 master-0 kubenswrapper[30278]: I0318 18:27:56.429043 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzc4q\" (UniqueName: \"kubernetes.io/projected/5f7fab71-72cf-42cb-afb6-4786b85c1e10-kube-api-access-qzc4q\") pod \"perf-node-gather-daemonset-bbjc2\" (UID: \"5f7fab71-72cf-42cb-afb6-4786b85c1e10\") " pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.429306 master-0 kubenswrapper[30278]: I0318 18:27:56.429269 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/5f7fab71-72cf-42cb-afb6-4786b85c1e10-podres\") pod \"perf-node-gather-daemonset-bbjc2\" (UID: \"5f7fab71-72cf-42cb-afb6-4786b85c1e10\") " pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.429812 master-0 kubenswrapper[30278]: I0318 18:27:56.429782 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/5f7fab71-72cf-42cb-afb6-4786b85c1e10-podres\") pod \"perf-node-gather-daemonset-bbjc2\" (UID: \"5f7fab71-72cf-42cb-afb6-4786b85c1e10\") " pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.429896 master-0 kubenswrapper[30278]: I0318 18:27:56.429833 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/5f7fab71-72cf-42cb-afb6-4786b85c1e10-proc\") pod \"perf-node-gather-daemonset-bbjc2\" (UID: \"5f7fab71-72cf-42cb-afb6-4786b85c1e10\") " pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.429964 master-0 kubenswrapper[30278]: I0318 18:27:56.429902 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/5f7fab71-72cf-42cb-afb6-4786b85c1e10-lib-modules\") pod \"perf-node-gather-daemonset-bbjc2\" (UID: \"5f7fab71-72cf-42cb-afb6-4786b85c1e10\") " pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.448210 master-0 kubenswrapper[30278]: I0318 18:27:56.448081 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzc4q\" (UniqueName: \"kubernetes.io/projected/5f7fab71-72cf-42cb-afb6-4786b85c1e10-kube-api-access-qzc4q\") pod \"perf-node-gather-daemonset-bbjc2\" (UID: \"5f7fab71-72cf-42cb-afb6-4786b85c1e10\") " pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.554878 master-0 kubenswrapper[30278]: I0318 18:27:56.554761 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:56.942805 master-0 kubenswrapper[30278]: I0318 18:27:56.942560 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-l6hpt_a94f7bff-ad61-4c53-a8eb-000a13f26971/kube-rbac-proxy/0.log" Mar 18 18:27:57.023395 master-0 kubenswrapper[30278]: I0318 18:27:57.022661 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-l6hpt_a94f7bff-ad61-4c53-a8eb-000a13f26971/cluster-autoscaler-operator/0.log" Mar 18 18:27:57.042017 master-0 kubenswrapper[30278]: I0318 18:27:57.041699 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/3.log" Mar 18 18:27:57.042738 master-0 kubenswrapper[30278]: I0318 18:27:57.042614 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/cluster-baremetal-operator/4.log" Mar 18 
18:27:57.075782 master-0 kubenswrapper[30278]: I0318 18:27:57.075708 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-dh5zl_37b3753f-bf4f-4a9e-a4a8-d58296bada79/baremetal-kube-rbac-proxy/0.log" Mar 18 18:27:57.107038 master-0 kubenswrapper[30278]: I0318 18:27:57.106866 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-zdqtc_de189d27-4c60-49f1-9119-d1fde5c37b1e/control-plane-machine-set-operator/0.log" Mar 18 18:27:57.117355 master-0 kubenswrapper[30278]: I0318 18:27:57.116692 30278 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2"] Mar 18 18:27:57.172666 master-0 kubenswrapper[30278]: I0318 18:27:57.172582 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-6x52p_2d21e77e-8b61-4f03-8f17-941b7a1d8b1d/kube-rbac-proxy/0.log" Mar 18 18:27:57.197322 master-0 kubenswrapper[30278]: I0318 18:27:57.197165 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-6x52p_2d21e77e-8b61-4f03-8f17-941b7a1d8b1d/machine-api-operator/0.log" Mar 18 18:27:57.604208 master-0 kubenswrapper[30278]: I0318 18:27:57.604128 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" event={"ID":"5f7fab71-72cf-42cb-afb6-4786b85c1e10","Type":"ContainerStarted","Data":"0220904d07334fa4725fe5ea3f99cf786b19861a1c1803649764bff8ebc4082c"} Mar 18 18:27:57.789373 master-0 kubenswrapper[30278]: E0318 18:27:57.783720 30278 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.32.10:33862->192.168.32.10:36439: read tcp 192.168.32.10:33862->192.168.32.10:36439: read: connection reset by peer Mar 18 18:27:58.621445 master-0 kubenswrapper[30278]: I0318 
18:27:58.621244 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" event={"ID":"5f7fab71-72cf-42cb-afb6-4786b85c1e10","Type":"ContainerStarted","Data":"3caac448045946bfc5ce852d6319a53d615addc0909c688b011a678dc69138ed"} Mar 18 18:27:58.622114 master-0 kubenswrapper[30278]: I0318 18:27:58.621598 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:27:58.655241 master-0 kubenswrapper[30278]: I0318 18:27:58.655122 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" podStartSLOduration=2.655092556 podStartE2EDuration="2.655092556s" podCreationTimestamp="2026-03-18 18:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 18:27:58.643202624 +0000 UTC m=+1647.810387229" watchObservedRunningTime="2026-03-18 18:27:58.655092556 +0000 UTC m=+1647.822277161" Mar 18 18:27:59.462850 master-0 kubenswrapper[30278]: I0318 18:27:59.461909 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/cluster-cloud-controller-manager/0.log" Mar 18 18:27:59.463502 master-0 kubenswrapper[30278]: I0318 18:27:59.463376 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/cluster-cloud-controller-manager/1.log" Mar 18 18:27:59.479235 master-0 kubenswrapper[30278]: I0318 18:27:59.478045 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/config-sync-controllers/0.log" Mar 18 18:27:59.479235 master-0 kubenswrapper[30278]: I0318 18:27:59.479172 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/config-sync-controllers/1.log" Mar 18 18:27:59.493198 master-0 kubenswrapper[30278]: I0318 18:27:59.493136 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/7.log" Mar 18 18:27:59.495491 master-0 kubenswrapper[30278]: I0318 18:27:59.494407 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-kfzkl_0751c002-fe0e-4f13-bb9c-9accd8ca0df3/kube-rbac-proxy/6.log" Mar 18 18:27:59.779668 master-0 kubenswrapper[30278]: I0318 18:27:59.779475 30278 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fwdtq/master-0-debug-h78kc"] Mar 18 18:27:59.781711 master-0 kubenswrapper[30278]: I0318 18:27:59.781657 30278 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fwdtq/master-0-debug-h78kc" Mar 18 18:27:59.986022 master-0 kubenswrapper[30278]: I0318 18:27:59.985935 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkvwn\" (UniqueName: \"kubernetes.io/projected/1341e9b9-8891-45e9-9dbd-4fb8d5ead718-kube-api-access-hkvwn\") pod \"master-0-debug-h78kc\" (UID: \"1341e9b9-8891-45e9-9dbd-4fb8d5ead718\") " pod="openshift-must-gather-fwdtq/master-0-debug-h78kc" Mar 18 18:27:59.986319 master-0 kubenswrapper[30278]: I0318 18:27:59.986197 30278 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1341e9b9-8891-45e9-9dbd-4fb8d5ead718-host\") pod \"master-0-debug-h78kc\" (UID: \"1341e9b9-8891-45e9-9dbd-4fb8d5ead718\") " pod="openshift-must-gather-fwdtq/master-0-debug-h78kc" Mar 18 18:28:00.089724 master-0 kubenswrapper[30278]: I0318 18:28:00.089594 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkvwn\" (UniqueName: \"kubernetes.io/projected/1341e9b9-8891-45e9-9dbd-4fb8d5ead718-kube-api-access-hkvwn\") pod \"master-0-debug-h78kc\" (UID: \"1341e9b9-8891-45e9-9dbd-4fb8d5ead718\") " pod="openshift-must-gather-fwdtq/master-0-debug-h78kc" Mar 18 18:28:00.090020 master-0 kubenswrapper[30278]: I0318 18:28:00.089986 30278 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1341e9b9-8891-45e9-9dbd-4fb8d5ead718-host\") pod \"master-0-debug-h78kc\" (UID: \"1341e9b9-8891-45e9-9dbd-4fb8d5ead718\") " pod="openshift-must-gather-fwdtq/master-0-debug-h78kc" Mar 18 18:28:00.090242 master-0 kubenswrapper[30278]: I0318 18:28:00.090178 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1341e9b9-8891-45e9-9dbd-4fb8d5ead718-host\") pod \"master-0-debug-h78kc\" 
(UID: \"1341e9b9-8891-45e9-9dbd-4fb8d5ead718\") " pod="openshift-must-gather-fwdtq/master-0-debug-h78kc" Mar 18 18:28:00.116828 master-0 kubenswrapper[30278]: I0318 18:28:00.115073 30278 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkvwn\" (UniqueName: \"kubernetes.io/projected/1341e9b9-8891-45e9-9dbd-4fb8d5ead718-kube-api-access-hkvwn\") pod \"master-0-debug-h78kc\" (UID: \"1341e9b9-8891-45e9-9dbd-4fb8d5ead718\") " pod="openshift-must-gather-fwdtq/master-0-debug-h78kc" Mar 18 18:28:00.406334 master-0 kubenswrapper[30278]: I0318 18:28:00.406162 30278 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fwdtq/master-0-debug-h78kc" Mar 18 18:28:00.656627 master-0 kubenswrapper[30278]: I0318 18:28:00.656442 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fwdtq/master-0-debug-h78kc" event={"ID":"1341e9b9-8891-45e9-9dbd-4fb8d5ead718","Type":"ContainerStarted","Data":"64fb641221145bbb22ecf4d2d678bb595bbff869d911b63e1888d222e5bd976d"} Mar 18 18:28:02.002773 master-0 kubenswrapper[30278]: I0318 18:28:02.002701 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-1f97-account-create-update-bc5tw_5edc1dc4-2f2a-4eff-bc50-10382bc71d27/mariadb-account-create-update/0.log" Mar 18 18:28:02.109590 master-0 kubenswrapper[30278]: I0318 18:28:02.108520 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-b9df6-api-0_631bd59b-37e5-49a9-98de-41b91dd3425a/cinder-b9df6-api-log/0.log" Mar 18 18:28:02.125929 master-0 kubenswrapper[30278]: I0318 18:28:02.125686 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-b9df6-api-0_631bd59b-37e5-49a9-98de-41b91dd3425a/cinder-api/0.log" Mar 18 18:28:02.207380 master-0 kubenswrapper[30278]: I0318 18:28:02.207295 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-b9df6-backup-0_c6fb18de-4040-48c7-a1aa-f72075ed3967/cinder-backup/0.log" Mar 18 18:28:02.228361 master-0 kubenswrapper[30278]: I0318 18:28:02.227742 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-b9df6-backup-0_c6fb18de-4040-48c7-a1aa-f72075ed3967/probe/0.log" Mar 18 18:28:02.242397 master-0 kubenswrapper[30278]: I0318 18:28:02.242094 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-b9df6-db-sync-dxpjk_47f543cd-d5bf-4421-aae3-516afd48c609/cinder-b9df6-db-sync/0.log" Mar 18 18:28:02.336003 master-0 kubenswrapper[30278]: I0318 18:28:02.335832 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-djgn7_04cef0bd-f365-4bf6-864a-1895995015d6/kube-rbac-proxy/0.log" Mar 18 18:28:02.365893 master-0 kubenswrapper[30278]: I0318 18:28:02.365762 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-b9df6-scheduler-0_d39fb8c7-403a-4f95-9a6a-e9207bc02408/cinder-scheduler/0.log" Mar 18 18:28:02.387020 master-0 kubenswrapper[30278]: I0318 18:28:02.386940 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-b9df6-scheduler-0_d39fb8c7-403a-4f95-9a6a-e9207bc02408/probe/0.log" Mar 18 18:28:02.390551 master-0 kubenswrapper[30278]: I0318 18:28:02.390518 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-djgn7_04cef0bd-f365-4bf6-864a-1895995015d6/cloud-credential-operator/0.log" Mar 18 18:28:02.471933 master-0 kubenswrapper[30278]: I0318 18:28:02.471872 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-b9df6-volume-lvm-iscsi-0_87b1fa77-70e4-4d90-a808-8ec6a7526a12/cinder-volume/0.log" Mar 18 18:28:02.487237 master-0 kubenswrapper[30278]: I0318 18:28:02.487179 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-b9df6-volume-lvm-iscsi-0_87b1fa77-70e4-4d90-a808-8ec6a7526a12/probe/0.log" Mar 18 18:28:02.503433 master-0 kubenswrapper[30278]: I0318 18:28:02.503362 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-db-create-kl89c_677619bc-d70e-475e-a844-b177d2cadbd9/mariadb-database-create/0.log" Mar 18 18:28:02.533837 master-0 kubenswrapper[30278]: I0318 18:28:02.533746 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7fb46c8999-cmd4w_6ec94265-412a-4c3d-8339-bd5e294ede4f/dnsmasq-dns/0.log" Mar 18 18:28:02.543126 master-0 kubenswrapper[30278]: I0318 18:28:02.543076 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7fb46c8999-cmd4w_6ec94265-412a-4c3d-8339-bd5e294ede4f/init/0.log" Mar 18 18:28:02.636760 master-0 kubenswrapper[30278]: I0318 18:28:02.636607 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-824c8-default-external-api-0_8e47bafb-66fb-4935-8d11-d134fed10f87/glance-log/0.log" Mar 18 18:28:02.649369 master-0 kubenswrapper[30278]: I0318 18:28:02.648191 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-824c8-default-external-api-0_8e47bafb-66fb-4935-8d11-d134fed10f87/glance-httpd/0.log" Mar 18 18:28:02.731030 master-0 kubenswrapper[30278]: I0318 18:28:02.730960 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-824c8-default-internal-api-0_d4c895c8-e64f-47dc-a6a6-61e0929add02/glance-log/0.log" Mar 18 18:28:02.764449 master-0 kubenswrapper[30278]: I0318 18:28:02.762846 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-824c8-default-internal-api-0_d4c895c8-e64f-47dc-a6a6-61e0929add02/glance-httpd/0.log" Mar 18 18:28:02.783772 master-0 kubenswrapper[30278]: I0318 18:28:02.783713 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-c37d-account-create-update-wtp9f_44531d8d-219a-4896-94c7-79b37cba4c80/mariadb-account-create-update/0.log" Mar 18 18:28:02.798865 master-0 kubenswrapper[30278]: I0318 18:28:02.797756 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-create-9h6hb_7d866f13-989b-4dea-b811-6fa6df274dea/mariadb-database-create/0.log" Mar 18 18:28:02.820604 master-0 kubenswrapper[30278]: I0318 18:28:02.820547 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-sync-8jvr2_d2ad6a1d-4b4e-49d6-b2f1-65906269f79e/glance-db-sync/0.log" Mar 18 18:28:02.841306 master-0 kubenswrapper[30278]: I0318 18:28:02.839831 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-5cfb4bd768-f4ww4_8794f0fc-2223-4bd7-aed5-a219b5f427e0/ironic-api-log/0.log" Mar 18 18:28:02.871814 master-0 kubenswrapper[30278]: I0318 18:28:02.869650 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-5cfb4bd768-f4ww4_8794f0fc-2223-4bd7-aed5-a219b5f427e0/ironic-api/0.log" Mar 18 18:28:02.909467 master-0 kubenswrapper[30278]: I0318 18:28:02.909341 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-5cfb4bd768-f4ww4_8794f0fc-2223-4bd7-aed5-a219b5f427e0/init/0.log" Mar 18 18:28:02.952104 master-0 kubenswrapper[30278]: I0318 18:28:02.948092 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_e9af6002-27e3-414d-b61a-dc0f7d99768b/ironic-conductor/0.log" Mar 18 18:28:02.963298 master-0 kubenswrapper[30278]: I0318 18:28:02.962024 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_e9af6002-27e3-414d-b61a-dc0f7d99768b/httpboot/0.log" Mar 18 18:28:02.969364 master-0 kubenswrapper[30278]: I0318 18:28:02.969314 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_e9af6002-27e3-414d-b61a-dc0f7d99768b/dnsmasq/0.log" Mar 18 
18:28:02.987781 master-0 kubenswrapper[30278]: I0318 18:28:02.984317 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_e9af6002-27e3-414d-b61a-dc0f7d99768b/init/0.log" Mar 18 18:28:02.991431 master-0 kubenswrapper[30278]: I0318 18:28:02.989861 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_e9af6002-27e3-414d-b61a-dc0f7d99768b/ironic-python-agent-init/0.log" Mar 18 18:28:03.777719 master-0 kubenswrapper[30278]: I0318 18:28:03.777667 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-conductor-0_e9af6002-27e3-414d-b61a-dc0f7d99768b/pxe-init/0.log" Mar 18 18:28:03.791321 master-0 kubenswrapper[30278]: I0318 18:28:03.791234 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-create-vdk4s_93a52bc7-f284-44c3-afd7-738547756dd4/mariadb-database-create/0.log" Mar 18 18:28:03.816295 master-0 kubenswrapper[30278]: I0318 18:28:03.815419 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-ggb6f_ade5c277-043b-4e56-bc7c-63961acf67c4/ironic-db-sync/0.log" Mar 18 18:28:03.830296 master-0 kubenswrapper[30278]: I0318 18:28:03.829642 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-db-sync-ggb6f_ade5c277-043b-4e56-bc7c-63961acf67c4/init/0.log" Mar 18 18:28:03.839294 master-0 kubenswrapper[30278]: I0318 18:28:03.838361 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-f681-account-create-update-qx2xl_8fa4d0fa-8b6c-4d8c-acf3-3e438a0c9441/mariadb-account-create-update/0.log" Mar 18 18:28:03.872303 master-0 kubenswrapper[30278]: I0318 18:28:03.871241 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f9ada823-f818-42c2-874e-0cce432cdff3/ironic-inspector-httpd/0.log" Mar 18 18:28:03.897296 master-0 kubenswrapper[30278]: I0318 18:28:03.896813 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-inspector-0_f9ada823-f818-42c2-874e-0cce432cdff3/ironic-inspector/0.log" Mar 18 18:28:03.909300 master-0 kubenswrapper[30278]: I0318 18:28:03.908187 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f9ada823-f818-42c2-874e-0cce432cdff3/inspector-httpboot/0.log" Mar 18 18:28:03.921690 master-0 kubenswrapper[30278]: I0318 18:28:03.921227 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f9ada823-f818-42c2-874e-0cce432cdff3/ramdisk-logs/0.log" Mar 18 18:28:03.929127 master-0 kubenswrapper[30278]: I0318 18:28:03.929080 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f9ada823-f818-42c2-874e-0cce432cdff3/inspector-dnsmasq/0.log" Mar 18 18:28:03.943159 master-0 kubenswrapper[30278]: I0318 18:28:03.939575 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f9ada823-f818-42c2-874e-0cce432cdff3/ironic-python-agent-init/0.log" Mar 18 18:28:03.969481 master-0 kubenswrapper[30278]: I0318 18:28:03.967900 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-0_f9ada823-f818-42c2-874e-0cce432cdff3/inspector-pxe-init/0.log" Mar 18 18:28:03.979561 master-0 kubenswrapper[30278]: I0318 18:28:03.977953 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-4c72-account-create-update-hzqhn_a8ecf6f3-3705-4948-bef5-95c5cb62c14a/mariadb-account-create-update/0.log" Mar 18 18:28:03.991804 master-0 kubenswrapper[30278]: I0318 18:28:03.991753 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-inspector-db-create-8vlcj_8b5223e8-7cb6-425b-a1d8-55c542110842/mariadb-database-create/0.log" Mar 18 18:28:04.031584 master-0 kubenswrapper[30278]: I0318 18:28:04.031433 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ironic-inspector-db-sync-98qm9_681bd0b0-8192-4ac5-9e57-2a5e4f575b1f/ironic-inspector-db-sync/0.log" Mar 18 18:28:04.070461 master-0 kubenswrapper[30278]: I0318 18:28:04.068924 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-c769655c7-ssdxq_adb370b0-e5b4-4cc8-b1d2-c63363b70615/ironic-neutron-agent/2.log" Mar 18 18:28:04.070461 master-0 kubenswrapper[30278]: I0318 18:28:04.069847 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ironic-neutron-agent-c769655c7-ssdxq_adb370b0-e5b4-4cc8-b1d2-c63363b70615/ironic-neutron-agent/1.log" Mar 18 18:28:04.094106 master-0 kubenswrapper[30278]: I0318 18:28:04.094041 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-10af-account-create-update-f6v8x_64f423f1-722c-4545-b52b-8750dab378a3/mariadb-account-create-update/0.log" Mar 18 18:28:04.233312 master-0 kubenswrapper[30278]: I0318 18:28:04.232805 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6f67d74887-q4vt6_8ed0b9d6-4657-4f09-945d-eaec083a0836/keystone-api/0.log" Mar 18 18:28:04.247070 master-0 kubenswrapper[30278]: I0318 18:28:04.246989 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bootstrap-8zspc_bdb2d7ca-85c4-45d9-b5cd-ded86df14c3e/keystone-bootstrap/0.log" Mar 18 18:28:04.265918 master-0 kubenswrapper[30278]: I0318 18:28:04.265863 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-create-2ftrf_d6e214f3-e729-4653-bd99-ed6b6989358f/mariadb-database-create/0.log" Mar 18 18:28:04.287939 master-0 kubenswrapper[30278]: I0318 18:28:04.287794 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-sync-8ntbw_e74e301d-4637-4d16-a125-a44a5470a4ac/keystone-db-sync/0.log" Mar 18 18:28:05.123228 master-0 kubenswrapper[30278]: I0318 18:28:05.123167 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-q27fh_cb522b02-0b93-4711-9041-566daa06b95a/openshift-config-operator/1.log" Mar 18 18:28:05.125090 master-0 kubenswrapper[30278]: I0318 18:28:05.125059 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-q27fh_cb522b02-0b93-4711-9041-566daa06b95a/openshift-config-operator/2.log" Mar 18 18:28:05.140352 master-0 kubenswrapper[30278]: I0318 18:28:05.140295 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-q27fh_cb522b02-0b93-4711-9041-566daa06b95a/openshift-api/0.log" Mar 18 18:28:06.584973 master-0 kubenswrapper[30278]: I0318 18:28:06.584904 30278 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-fwdtq/perf-node-gather-daemonset-bbjc2" Mar 18 18:28:06.592886 master-0 kubenswrapper[30278]: I0318 18:28:06.592813 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-5nwft_d5d15a23-f43f-4265-a7e5-8c28f680ede9/console-operator/0.log" Mar 18 18:28:07.833612 master-0 kubenswrapper[30278]: I0318 18:28:07.833454 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f76dd88c-h9rrg_da98779c-7834-4e68-b018-40d11d173a55/console/0.log" Mar 18 18:28:07.893857 master-0 kubenswrapper[30278]: I0318 18:28:07.893790 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-66b8ffb895-5ftpz_1c86ad24-b858-4dfa-802b-f4799093ffc0/download-server/0.log" Mar 18 18:28:09.288584 master-0 kubenswrapper[30278]: I0318 18:28:09.288484 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-7d87854d6-d4bmc_c38c5f03-a753-49f4-ab06-33e75a03bd45/cluster-storage-operator/0.log" Mar 18 18:28:09.301426 master-0 
kubenswrapper[30278]: I0318 18:28:09.301367 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-7d87854d6-d4bmc_c38c5f03-a753-49f4-ab06-33e75a03bd45/cluster-storage-operator/1.log" Mar 18 18:28:09.326963 master-0 kubenswrapper[30278]: I0318 18:28:09.326143 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/4.log" Mar 18 18:28:09.329895 master-0 kubenswrapper[30278]: I0318 18:28:09.328390 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-vpjmp_7d39d93e-9be3-47e1-a44e-be2d18b55446/snapshot-controller/5.log" Mar 18 18:28:09.372557 master-0 kubenswrapper[30278]: I0318 18:28:09.372487 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5f5d689c6b-z9vvz_dba5f8d7-4d25-42b5-9c58-813221bf96bb/csi-snapshot-controller-operator/0.log" Mar 18 18:28:09.381459 master-0 kubenswrapper[30278]: I0318 18:28:09.381387 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5f5d689c6b-z9vvz_dba5f8d7-4d25-42b5-9c58-813221bf96bb/csi-snapshot-controller-operator/1.log" Mar 18 18:28:10.803769 master-0 kubenswrapper[30278]: I0318 18:28:10.803685 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-7sc7v_b1352cc7-4099-44c5-9c31-8259fb783bc7/dns-operator/0.log" Mar 18 18:28:10.825861 master-0 kubenswrapper[30278]: I0318 18:28:10.825760 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-7sc7v_b1352cc7-4099-44c5-9c31-8259fb783bc7/kube-rbac-proxy/0.log" Mar 18 18:28:11.866143 master-0 kubenswrapper[30278]: I0318 
18:28:11.866089 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-lf9xl_59407fdf-b1e9-4992-a3c8-54b4e26f496c/dns/0.log" Mar 18 18:28:11.884057 master-0 kubenswrapper[30278]: I0318 18:28:11.883996 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-lf9xl_59407fdf-b1e9-4992-a3c8-54b4e26f496c/kube-rbac-proxy/0.log" Mar 18 18:28:11.908050 master-0 kubenswrapper[30278]: I0318 18:28:11.907995 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-bwcgq_efd0d6b1-652c-44b2-b918-5c7ced5d15c3/dns-node-resolver/0.log" Mar 18 18:28:12.233786 master-0 kubenswrapper[30278]: I0318 18:28:12.233674 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7cbbe035-fa50-48c9-84ca-845e93085070/memcached/0.log" Mar 18 18:28:12.957167 master-0 kubenswrapper[30278]: I0318 18:28:12.955391 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5776b66b45-w6n4j_f0ecd562-b219-44d6-b27a-99af0ae48f35/neutron-api/0.log" Mar 18 18:28:12.976121 master-0 kubenswrapper[30278]: I0318 18:28:12.975520 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5776b66b45-w6n4j_f0ecd562-b219-44d6-b27a-99af0ae48f35/neutron-httpd/0.log" Mar 18 18:28:12.984431 master-0 kubenswrapper[30278]: I0318 18:28:12.984377 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-984d-account-create-update-tqdfv_90da1e72-16d6-4b7c-9ea2-75800f09f684/mariadb-account-create-update/0.log" Mar 18 18:28:13.007931 master-0 kubenswrapper[30278]: I0318 18:28:13.002640 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-create-rgrfw_594ed543-14e4-4a71-8eb9-3482fa67fc1d/mariadb-database-create/0.log" Mar 18 18:28:13.043308 master-0 kubenswrapper[30278]: I0318 18:28:13.041597 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-db-sync-7kvlq_c5b88faf-e795-428e-8c3b-5a81d27c4a63/neutron-db-sync/0.log" Mar 18 18:28:13.189657 master-0 kubenswrapper[30278]: I0318 18:28:13.189578 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc/nova-api-log/0.log" Mar 18 18:28:13.317296 master-0 kubenswrapper[30278]: I0318 18:28:13.316944 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e2f52c3d-8c7e-4c50-813c-2b1f3f3027bc/nova-api-api/0.log" Mar 18 18:28:13.329332 master-0 kubenswrapper[30278]: I0318 18:28:13.328907 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-16af-account-create-update-nz97w_de752594-4e91-4400-bc57-3a77ddbc66f7/mariadb-account-create-update/0.log" Mar 18 18:28:13.359664 master-0 kubenswrapper[30278]: I0318 18:28:13.358709 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-db-create-275vd_21b3a964-ae1b-49d5-be02-c1b7397b406c/mariadb-database-create/0.log" Mar 18 18:28:13.380960 master-0 kubenswrapper[30278]: I0318 18:28:13.380717 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-7471-account-create-update-fv6xj_43f4a237-d80c-40a8-ac9f-ae9422afb881/mariadb-account-create-update/0.log" Mar 18 18:28:13.401320 master-0 kubenswrapper[30278]: I0318 18:28:13.401230 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-cell-mapping-8vmhz_a6c011e4-5cf2-4451-974d-e1032bc333a9/nova-manage/0.log" Mar 18 18:28:13.579701 master-0 kubenswrapper[30278]: I0318 18:28:13.577981 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-rws9x_0100a259-1358-45e8-8191-4e1f9a14ec89/etcd-operator/4.log" Mar 18 18:28:13.598947 master-0 kubenswrapper[30278]: I0318 18:28:13.598875 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-rws9x_0100a259-1358-45e8-8191-4e1f9a14ec89/etcd-operator/3.log" Mar 18 18:28:13.607930 master-0 kubenswrapper[30278]: I0318 18:28:13.606957 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_651a0333-e27d-4274-8909-36174be8189f/nova-cell0-conductor-conductor/0.log" Mar 18 18:28:13.627166 master-0 kubenswrapper[30278]: I0318 18:28:13.627104 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-db-sync-qn2jb_75582986-df2a-4948-994c-643227b19932/nova-cell0-conductor-db-sync/0.log" Mar 18 18:28:13.641867 master-0 kubenswrapper[30278]: I0318 18:28:13.638846 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-db-create-zf26j_ebb1d48c-efd7-4146-a06f-5eb19de9f51e/mariadb-database-create/0.log" Mar 18 18:28:13.662057 master-0 kubenswrapper[30278]: I0318 18:28:13.660155 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-5998-account-create-update-w7qdg_25ff64e9-3a30-4ee8-a9d2-3b1dec433087/mariadb-account-create-update/0.log" Mar 18 18:28:13.686304 master-0 kubenswrapper[30278]: I0318 18:28:13.686193 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-cell-mapping-gtlpg_5e501d70-7435-4269-a155-067f1f54bee7/nova-manage/0.log" Mar 18 18:28:13.761298 master-0 kubenswrapper[30278]: I0318 18:28:13.760699 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-compute-ironic-compute-0_5308f3c6-9e64-4187-b4f9-b8b0dc8c2874/nova-cell1-compute-ironic-compute-compute/0.log" Mar 18 18:28:13.873617 master-0 kubenswrapper[30278]: I0318 18:28:13.873403 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_83dc7510-eee4-41e5-a4ff-0ffa9efb380b/nova-cell1-conductor-conductor/0.log" Mar 18 18:28:13.889161 master-0 kubenswrapper[30278]: I0318 18:28:13.888573 30278 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-db-sync-tv9n9_564cb488-caa7-49c0-b12a-133aa721085c/nova-cell1-conductor-db-sync/0.log" Mar 18 18:28:13.899976 master-0 kubenswrapper[30278]: I0318 18:28:13.899882 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-db-create-jmrkj_d70da1e8-5ba9-440d-bd18-6add06bb23ef/mariadb-database-create/0.log" Mar 18 18:28:13.921799 master-0 kubenswrapper[30278]: I0318 18:28:13.921737 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-host-discover-76s4m_ae8ac9fe-688b-4a6e-a479-9b5c5eeb5704/nova-manage/0.log" Mar 18 18:28:14.006373 master-0 kubenswrapper[30278]: I0318 18:28:14.004106 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f2fb5ad2-1ec6-42bc-b6dd-b5188ba2988e/nova-cell1-novncproxy-novncproxy/0.log" Mar 18 18:28:14.106109 master-0 kubenswrapper[30278]: I0318 18:28:14.106037 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b4eccac6-c568-43d3-9a32-a6ccff12973d/nova-metadata-log/0.log" Mar 18 18:28:14.168568 master-0 kubenswrapper[30278]: I0318 18:28:14.168417 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b4eccac6-c568-43d3-9a32-a6ccff12973d/nova-metadata-metadata/0.log" Mar 18 18:28:14.301580 master-0 kubenswrapper[30278]: I0318 18:28:14.301513 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6f21f8ca-9905-414c-a2c5-f50ca82015e1/nova-scheduler-scheduler/0.log" Mar 18 18:28:14.340597 master-0 kubenswrapper[30278]: I0318 18:28:14.339498 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_df68dba7-dacb-48bb-9433-12ad79aba028/galera/0.log" Mar 18 18:28:14.357294 master-0 kubenswrapper[30278]: I0318 18:28:14.357211 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_df68dba7-dacb-48bb-9433-12ad79aba028/mysql-bootstrap/0.log" Mar 18 18:28:14.393285 master-0 kubenswrapper[30278]: I0318 18:28:14.391627 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3a06b9e0-a605-44e2-b6e2-63b15a5bb700/galera/0.log" Mar 18 18:28:14.409404 master-0 kubenswrapper[30278]: I0318 18:28:14.409321 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3a06b9e0-a605-44e2-b6e2-63b15a5bb700/mysql-bootstrap/0.log" Mar 18 18:28:14.423647 master-0 kubenswrapper[30278]: I0318 18:28:14.423437 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3cf3d2cb-bc70-4a26-87db-0aa186c4a1a6/openstackclient/0.log" Mar 18 18:28:14.441362 master-0 kubenswrapper[30278]: I0318 18:28:14.441223 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-xz9c7_8c299186-30d6-4dd9-9490-5c843f940e6d/openstack-network-exporter/0.log" Mar 18 18:28:14.461192 master-0 kubenswrapper[30278]: I0318 18:28:14.461131 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9qq6l_bb722697-8531-46a1-a93f-babc070522f4/ovsdb-server/0.log" Mar 18 18:28:14.470490 master-0 kubenswrapper[30278]: I0318 18:28:14.470046 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9qq6l_bb722697-8531-46a1-a93f-babc070522f4/ovs-vswitchd/0.log" Mar 18 18:28:14.478700 master-0 kubenswrapper[30278]: I0318 18:28:14.477140 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9qq6l_bb722697-8531-46a1-a93f-babc070522f4/ovsdb-server-init/0.log" Mar 18 18:28:14.491775 master-0 kubenswrapper[30278]: I0318 18:28:14.491725 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xntzs_e01e85f2-9a8b-4862-ad33-959e38bfbc7c/ovn-controller/0.log" Mar 18 
18:28:14.519882 master-0 kubenswrapper[30278]: I0318 18:28:14.519817 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fd9e1dcd-e0d3-401a-b538-90a263db6e88/ovn-northd/0.log" Mar 18 18:28:14.530758 master-0 kubenswrapper[30278]: I0318 18:28:14.530714 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fd9e1dcd-e0d3-401a-b538-90a263db6e88/openstack-network-exporter/0.log" Mar 18 18:28:14.566045 master-0 kubenswrapper[30278]: I0318 18:28:14.565991 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4047014a-de6e-447d-983b-973a84e7478b/ovsdbserver-nb/0.log" Mar 18 18:28:14.580500 master-0 kubenswrapper[30278]: I0318 18:28:14.580439 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_4047014a-de6e-447d-983b-973a84e7478b/openstack-network-exporter/0.log" Mar 18 18:28:14.620464 master-0 kubenswrapper[30278]: I0318 18:28:14.620387 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_cbc42adf-4d99-42bb-b262-0f4163e358b8/ovsdbserver-sb/0.log" Mar 18 18:28:14.625868 master-0 kubenswrapper[30278]: I0318 18:28:14.625802 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_cbc42adf-4d99-42bb-b262-0f4163e358b8/openstack-network-exporter/0.log" Mar 18 18:28:14.686595 master-0 kubenswrapper[30278]: I0318 18:28:14.686055 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-84cf7b8984-2rsvd_d03211db-1cec-4835-ad52-6c3befa04b20/placement-log/0.log" Mar 18 18:28:14.702493 master-0 kubenswrapper[30278]: I0318 18:28:14.702427 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-84cf7b8984-2rsvd_d03211db-1cec-4835-ad52-6c3befa04b20/placement-api/0.log" Mar 18 18:28:14.706867 master-0 kubenswrapper[30278]: I0318 18:28:14.706834 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log" Mar 18 18:28:14.732330 master-0 kubenswrapper[30278]: I0318 18:28:14.729072 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8850-account-create-update-vzxfq_5867f7c5-a107-4f30-87d3-bb37abf4b2c1/mariadb-account-create-update/0.log" Mar 18 18:28:14.742731 master-0 kubenswrapper[30278]: I0318 18:28:14.742680 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-create-x6mcz_ee65994b-d421-4f38-8556-5084ef3757e1/mariadb-database-create/0.log" Mar 18 18:28:14.757565 master-0 kubenswrapper[30278]: I0318 18:28:14.757506 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-sync-rngq2_fdcd674f-1047-437f-90ed-187b8b5eb882/placement-db-sync/0.log" Mar 18 18:28:14.818716 master-0 kubenswrapper[30278]: I0318 18:28:14.818638 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1ec57481-0836-4458-a2bc-e7ce64175f3a/rabbitmq/0.log" Mar 18 18:28:14.827474 master-0 kubenswrapper[30278]: I0318 18:28:14.827410 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1ec57481-0836-4458-a2bc-e7ce64175f3a/setup-container/0.log" Mar 18 18:28:14.907879 master-0 kubenswrapper[30278]: I0318 18:28:14.907822 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a24f1688-7c02-4ac5-af8a-0a5c3847755a/rabbitmq/0.log" Mar 18 18:28:14.925884 master-0 kubenswrapper[30278]: I0318 18:28:14.925182 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a24f1688-7c02-4ac5-af8a-0a5c3847755a/setup-container/0.log" Mar 18 18:28:14.940144 master-0 kubenswrapper[30278]: I0318 18:28:14.939927 30278 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_root-account-create-update-sd6rg_72dc0432-f429-4dbf-b1ce-d421425d6ca3/mariadb-account-create-update/0.log"
Mar 18 18:28:14.996980 master-0 kubenswrapper[30278]: I0318 18:28:14.996907 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-66857967b8-5fglj_dab35501-e90f-48cb-b31d-1ea8086b7b1d/proxy-httpd/0.log"
Mar 18 18:28:15.044010 master-0 kubenswrapper[30278]: I0318 18:28:15.043941 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-66857967b8-5fglj_dab35501-e90f-48cb-b31d-1ea8086b7b1d/proxy-server/0.log"
Mar 18 18:28:15.051126 master-0 kubenswrapper[30278]: I0318 18:28:15.051069 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log"
Mar 18 18:28:15.067436 master-0 kubenswrapper[30278]: I0318 18:28:15.067257 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-qsrjq_b076dc06-c082-4a5e-a049-9f98858a80ff/swift-ring-rebalance/0.log"
Mar 18 18:28:15.095611 master-0 kubenswrapper[30278]: I0318 18:28:15.094233 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff27830b-378b-4338-ac41-041a9d78ed62/account-server/0.log"
Mar 18 18:28:15.115609 master-0 kubenswrapper[30278]: I0318 18:28:15.115544 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff27830b-378b-4338-ac41-041a9d78ed62/account-replicator/0.log"
Mar 18 18:28:15.123396 master-0 kubenswrapper[30278]: I0318 18:28:15.123175 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log"
Mar 18 18:28:15.126688 master-0 kubenswrapper[30278]: I0318 18:28:15.126655 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff27830b-378b-4338-ac41-041a9d78ed62/account-auditor/0.log"
Mar 18 18:28:15.136570
master-0 kubenswrapper[30278]: I0318 18:28:15.136436 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff27830b-378b-4338-ac41-041a9d78ed62/account-reaper/0.log"
Mar 18 18:28:15.146104 master-0 kubenswrapper[30278]: I0318 18:28:15.146053 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff27830b-378b-4338-ac41-041a9d78ed62/container-server/0.log"
Mar 18 18:28:15.172678 master-0 kubenswrapper[30278]: I0318 18:28:15.172621 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log"
Mar 18 18:28:15.173598 master-0 kubenswrapper[30278]: I0318 18:28:15.173552 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff27830b-378b-4338-ac41-041a9d78ed62/container-replicator/0.log"
Mar 18 18:28:15.184072 master-0 kubenswrapper[30278]: I0318 18:28:15.184020 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff27830b-378b-4338-ac41-041a9d78ed62/container-auditor/0.log"
Mar 18 18:28:15.194783 master-0 kubenswrapper[30278]: I0318 18:28:15.194618 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff27830b-378b-4338-ac41-041a9d78ed62/container-updater/0.log"
Mar 18 18:28:15.206058 master-0 kubenswrapper[30278]: I0318 18:28:15.205989 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff27830b-378b-4338-ac41-041a9d78ed62/object-server/0.log"
Mar 18 18:28:15.207890 master-0 kubenswrapper[30278]: I0318 18:28:15.207853 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log"
Mar 18 18:28:15.215426 master-0 kubenswrapper[30278]: I0318 18:28:15.215379 30278 log.go:25] "Finished parsing log file"
path="/var/log/pods/openstack_swift-storage-0_ff27830b-378b-4338-ac41-041a9d78ed62/object-replicator/0.log"
Mar 18 18:28:15.226162 master-0 kubenswrapper[30278]: I0318 18:28:15.226114 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff27830b-378b-4338-ac41-041a9d78ed62/object-auditor/0.log"
Mar 18 18:28:15.233907 master-0 kubenswrapper[30278]: I0318 18:28:15.232613 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log"
Mar 18 18:28:15.237405 master-0 kubenswrapper[30278]: I0318 18:28:15.237359 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff27830b-378b-4338-ac41-041a9d78ed62/object-updater/0.log"
Mar 18 18:28:15.253632 master-0 kubenswrapper[30278]: I0318 18:28:15.253546 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff27830b-378b-4338-ac41-041a9d78ed62/object-expirer/0.log"
Mar 18 18:28:15.262744 master-0 kubenswrapper[30278]: I0318 18:28:15.262644 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log"
Mar 18 18:28:15.278164 master-0 kubenswrapper[30278]: I0318 18:28:15.277993 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff27830b-378b-4338-ac41-041a9d78ed62/rsync/0.log"
Mar 18 18:28:15.281264 master-0 kubenswrapper[30278]: I0318 18:28:15.281211 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log"
Mar 18 18:28:15.287958 master-0 kubenswrapper[30278]: I0318 18:28:15.287905 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff27830b-378b-4338-ac41-041a9d78ed62/swift-recon-cron/0.log"
Mar 18 18:28:15.337304 master-0 kubenswrapper[30278]: I0318 18:28:15.337042 30278
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_08451d5b-cf84-45a1-a16d-7ce10a83a6e7/installer/0.log"
Mar 18 18:28:15.684008 master-0 kubenswrapper[30278]: I0318 18:28:15.682693 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_cd9d8bd7-68a0-458f-9d25-f600932e303c/installer/0.log"
Mar 18 18:28:17.120566 master-0 kubenswrapper[30278]: I0318 18:28:17.120501 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-5549dc66cb-ljrq8_6f26e239-2988-4faa-bc1d-24b15b95b7f1/cluster-image-registry-operator/1.log"
Mar 18 18:28:17.143453 master-0 kubenswrapper[30278]: I0318 18:28:17.143388 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-5549dc66cb-ljrq8_6f26e239-2988-4faa-bc1d-24b15b95b7f1/cluster-image-registry-operator/2.log"
Mar 18 18:28:17.163653 master-0 kubenswrapper[30278]: I0318 18:28:17.163592 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-d4c2p_c6a21184-42b3-4dc1-bf4f-16fe9fa7b6f8/node-ca/0.log"
Mar 18 18:28:18.181335 master-0 kubenswrapper[30278]: I0318 18:28:18.180637 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/5.log"
Mar 18 18:28:18.196016 master-0 kubenswrapper[30278]: I0318 18:28:18.195845 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/ingress-operator/6.log"
Mar 18 18:28:18.221547 master-0 kubenswrapper[30278]: I0318 18:28:18.220460 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-qb7n6_7e64a377-f497-4416-8f22-d5c7f52e0b65/kube-rbac-proxy/0.log"
Mar 18
18:28:19.795311 master-0 kubenswrapper[30278]: I0318 18:28:19.795197 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-jbs9f_a322ca7f-9095-4b43-96ff-ac8a637fae27/serve-healthcheck-canary/0.log"
Mar 18 18:28:20.689693 master-0 kubenswrapper[30278]: I0318 18:28:20.689631 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-68bf6ff9d6-hm777_d4c75bee-d0d2-4261-8f89-8c3375dbd868/insights-operator/2.log"
Mar 18 18:28:20.716520 master-0 kubenswrapper[30278]: I0318 18:28:20.716456 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-68bf6ff9d6-hm777_d4c75bee-d0d2-4261-8f89-8c3375dbd868/insights-operator/3.log"
Mar 18 18:28:21.052580 master-0 kubenswrapper[30278]: I0318 18:28:21.052355 30278 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fwdtq/master-0-debug-h78kc" event={"ID":"1341e9b9-8891-45e9-9dbd-4fb8d5ead718","Type":"ContainerStarted","Data":"eaf85f171d18f8c61306ff3c9305e22fe51c489326f8c1678fb140790086579b"}
Mar 18 18:28:21.103920 master-0 kubenswrapper[30278]: I0318 18:28:21.102069 30278 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fwdtq/master-0-debug-h78kc" podStartSLOduration=2.0781995 podStartE2EDuration="22.102039311s" podCreationTimestamp="2026-03-18 18:27:59 +0000 UTC" firstStartedPulling="2026-03-18 18:28:00.469933159 +0000 UTC m=+1649.637117754" lastFinishedPulling="2026-03-18 18:28:20.49377297 +0000 UTC m=+1669.660957565" observedRunningTime="2026-03-18 18:28:21.078767344 +0000 UTC m=+1670.245951959" watchObservedRunningTime="2026-03-18 18:28:21.102039311 +0000 UTC m=+1670.269223906"
Mar 18 18:28:23.439681 master-0 kubenswrapper[30278]: I0318 18:28:23.439516 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0/alertmanager/0.log"
Mar 18
18:28:23.464299 master-0 kubenswrapper[30278]: I0318 18:28:23.462702 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0/config-reloader/0.log"
Mar 18 18:28:23.483117 master-0 kubenswrapper[30278]: I0318 18:28:23.483043 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0/kube-rbac-proxy-web/0.log"
Mar 18 18:28:23.509098 master-0 kubenswrapper[30278]: I0318 18:28:23.508255 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0/kube-rbac-proxy/0.log"
Mar 18 18:28:23.535797 master-0 kubenswrapper[30278]: I0318 18:28:23.535611 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0/kube-rbac-proxy-metric/0.log"
Mar 18 18:28:23.554073 master-0 kubenswrapper[30278]: I0318 18:28:23.554000 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0/prom-label-proxy/0.log"
Mar 18 18:28:23.575065 master-0 kubenswrapper[30278]: I0318 18:28:23.574982 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_055b8a84-fa30-4cdd-b5c8-eb9bbf7312b0/init-config-reloader/0.log"
Mar 18 18:28:23.645497 master-0 kubenswrapper[30278]: I0318 18:28:23.645423 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-58845fbb57-vjrjg_8d5e9525-6c0d-4c0b-a2ce-e42eaf66c311/cluster-monitoring-operator/0.log"
Mar 18 18:28:23.669327 master-0 kubenswrapper[30278]: I0318 18:28:23.669249 30278 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-72wb5_5876677a-9e8a-4625-af71-833b259a1596/kube-state-metrics/0.log"
Mar 18 18:28:23.687341 master-0 kubenswrapper[30278]: I0318 18:28:23.687260 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-72wb5_5876677a-9e8a-4625-af71-833b259a1596/kube-rbac-proxy-main/0.log"
Mar 18 18:28:23.703667 master-0 kubenswrapper[30278]: I0318 18:28:23.703507 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-72wb5_5876677a-9e8a-4625-af71-833b259a1596/kube-rbac-proxy-self/0.log"
Mar 18 18:28:23.723942 master-0 kubenswrapper[30278]: I0318 18:28:23.723887 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-6b789d4fdf-d4nw8_6f89981d-e643-4015-8af6-5e7582182466/metrics-server/0.log"
Mar 18 18:28:23.746246 master-0 kubenswrapper[30278]: I0318 18:28:23.746181 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-6855c56fbd-8t49z_4d0ccfde-5384-4e7a-bd9c-61ef79c4e44a/monitoring-plugin/0.log"
Mar 18 18:28:23.769479 master-0 kubenswrapper[30278]: I0318 18:28:23.769430 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-v28rj_1674d0a4-8c16-4535-ac1e-e3220ef50e57/node-exporter/0.log"
Mar 18 18:28:23.806397 master-0 kubenswrapper[30278]: I0318 18:28:23.805701 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-v28rj_1674d0a4-8c16-4535-ac1e-e3220ef50e57/kube-rbac-proxy/0.log"
Mar 18 18:28:23.826262 master-0 kubenswrapper[30278]: I0318 18:28:23.826198 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-v28rj_1674d0a4-8c16-4535-ac1e-e3220ef50e57/init-textfile/0.log"
Mar 18 18:28:23.847765 master-0 kubenswrapper[30278]: I0318 18:28:23.847710 30278 log.go:25]
"Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-smd8t_2ee860d7-4262-43d7-aeb2-b77040a69133/kube-rbac-proxy-main/0.log"
Mar 18 18:28:23.864887 master-0 kubenswrapper[30278]: I0318 18:28:23.864849 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-smd8t_2ee860d7-4262-43d7-aeb2-b77040a69133/kube-rbac-proxy-self/0.log"
Mar 18 18:28:23.881155 master-0 kubenswrapper[30278]: I0318 18:28:23.881092 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-smd8t_2ee860d7-4262-43d7-aeb2-b77040a69133/openshift-state-metrics/0.log"
Mar 18 18:28:23.922042 master-0 kubenswrapper[30278]: I0318 18:28:23.921964 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_794bfefe-f0c1-4241-a015-d520b5e2d44a/prometheus/0.log"
Mar 18 18:28:23.936580 master-0 kubenswrapper[30278]: I0318 18:28:23.936256 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_794bfefe-f0c1-4241-a015-d520b5e2d44a/config-reloader/0.log"
Mar 18 18:28:23.961181 master-0 kubenswrapper[30278]: I0318 18:28:23.961057 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_794bfefe-f0c1-4241-a015-d520b5e2d44a/thanos-sidecar/0.log"
Mar 18 18:28:23.983248 master-0 kubenswrapper[30278]: I0318 18:28:23.983197 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_794bfefe-f0c1-4241-a015-d520b5e2d44a/kube-rbac-proxy-web/0.log"
Mar 18 18:28:24.000998 master-0 kubenswrapper[30278]: I0318 18:28:24.000942 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_794bfefe-f0c1-4241-a015-d520b5e2d44a/kube-rbac-proxy/0.log"
Mar 18 18:28:24.018431 master-0 kubenswrapper[30278]: I0318 18:28:24.018399 30278 log.go:25]
"Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_794bfefe-f0c1-4241-a015-d520b5e2d44a/kube-rbac-proxy-thanos/0.log"
Mar 18 18:28:24.033492 master-0 kubenswrapper[30278]: I0318 18:28:24.033444 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_794bfefe-f0c1-4241-a015-d520b5e2d44a/init-config-reloader/0.log"
Mar 18 18:28:24.071631 master-0 kubenswrapper[30278]: I0318 18:28:24.071117 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-fshkm_9c0dbd44-7669-41d6-bf1b-d8c1343c9d98/prometheus-operator/0.log"
Mar 18 18:28:24.089562 master-0 kubenswrapper[30278]: I0318 18:28:24.089489 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-fshkm_9c0dbd44-7669-41d6-bf1b-d8c1343c9d98/kube-rbac-proxy/0.log"
Mar 18 18:28:24.112017 master-0 kubenswrapper[30278]: I0318 18:28:24.111956 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-69c6b55594-7r9qg_9e2d0d0d-54ca-475b-be8a-4eb6d4434e74/prometheus-operator-admission-webhook/0.log"
Mar 18 18:28:24.137300 master-0 kubenswrapper[30278]: I0318 18:28:24.137202 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-cf85db6cf-b9mbd_49ae0fd5-b0ec-4b37-b441-4943f3b160d4/telemeter-client/0.log"
Mar 18 18:28:24.152441 master-0 kubenswrapper[30278]: I0318 18:28:24.152113 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-cf85db6cf-b9mbd_49ae0fd5-b0ec-4b37-b441-4943f3b160d4/reload/0.log"
Mar 18 18:28:24.175535 master-0 kubenswrapper[30278]: I0318 18:28:24.175408 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-cf85db6cf-b9mbd_49ae0fd5-b0ec-4b37-b441-4943f3b160d4/kube-rbac-proxy/0.log"
Mar 18 18:28:24.208414
master-0 kubenswrapper[30278]: I0318 18:28:24.208262 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-7cb46549d5-gm2ft_b0f7a4e5-c29e-43aa-8c76-b342e5abcc55/thanos-query/0.log"
Mar 18 18:28:24.223823 master-0 kubenswrapper[30278]: I0318 18:28:24.223703 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-7cb46549d5-gm2ft_b0f7a4e5-c29e-43aa-8c76-b342e5abcc55/kube-rbac-proxy-web/0.log"
Mar 18 18:28:24.241182 master-0 kubenswrapper[30278]: I0318 18:28:24.241090 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-7cb46549d5-gm2ft_b0f7a4e5-c29e-43aa-8c76-b342e5abcc55/kube-rbac-proxy/0.log"
Mar 18 18:28:24.276377 master-0 kubenswrapper[30278]: I0318 18:28:24.276142 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-7cb46549d5-gm2ft_b0f7a4e5-c29e-43aa-8c76-b342e5abcc55/prom-label-proxy/0.log"
Mar 18 18:28:24.293633 master-0 kubenswrapper[30278]: I0318 18:28:24.293577 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-7cb46549d5-gm2ft_b0f7a4e5-c29e-43aa-8c76-b342e5abcc55/kube-rbac-proxy-rules/0.log"
Mar 18 18:28:24.310559 master-0 kubenswrapper[30278]: I0318 18:28:24.310486 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-7cb46549d5-gm2ft_b0f7a4e5-c29e-43aa-8c76-b342e5abcc55/kube-rbac-proxy-metrics/0.log"
Mar 18 18:28:25.204698 master-0 kubenswrapper[30278]: I0318 18:28:25.204583 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-9h6hb"]
Mar 18 18:28:25.227748 master-0 kubenswrapper[30278]: I0318 18:28:25.227238 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8850-account-create-update-vzxfq"]
Mar 18 18:28:25.249613 master-0 kubenswrapper[30278]: I0318 18:28:25.248194 30278 kubelet.go:2437] "SyncLoop DELETE"
source="api" pods=["openstack/keystone-db-create-2ftrf"]
Mar 18 18:28:25.269172 master-0 kubenswrapper[30278]: I0318 18:28:25.268986 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c37d-account-create-update-wtp9f"]
Mar 18 18:28:25.287551 master-0 kubenswrapper[30278]: I0318 18:28:25.287456 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-10af-account-create-update-f6v8x"]
Mar 18 18:28:25.304185 master-0 kubenswrapper[30278]: I0318 18:28:25.304119 30278 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-x6mcz"]
Mar 18 18:28:25.320329 master-0 kubenswrapper[30278]: I0318 18:28:25.320209 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-9h6hb"]
Mar 18 18:28:25.335218 master-0 kubenswrapper[30278]: I0318 18:28:25.333829 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-2ftrf"]
Mar 18 18:28:25.349362 master-0 kubenswrapper[30278]: I0318 18:28:25.346604 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8850-account-create-update-vzxfq"]
Mar 18 18:28:25.364726 master-0 kubenswrapper[30278]: I0318 18:28:25.364651 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-10af-account-create-update-f6v8x"]
Mar 18 18:28:25.378585 master-0 kubenswrapper[30278]: I0318 18:28:25.377851 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-c37d-account-create-update-wtp9f"]
Mar 18 18:28:25.391328 master-0 kubenswrapper[30278]: I0318 18:28:25.391031 30278 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-x6mcz"]
Mar 18 18:28:26.901960 master-0 kubenswrapper[30278]: I0318 18:28:26.901360 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-skcb4_0326959b-b1d6-42ef-9fe5-bb33aa37df40/controller/0.log"
Mar 18 18:28:26.918456 master-0 kubenswrapper[30278]:
I0318 18:28:26.917808 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-skcb4_0326959b-b1d6-42ef-9fe5-bb33aa37df40/kube-rbac-proxy/0.log"
Mar 18 18:28:26.929226 master-0 kubenswrapper[30278]: I0318 18:28:26.929155 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-skcb4_0326959b-b1d6-42ef-9fe5-bb33aa37df40/controller/0.log"
Mar 18 18:28:26.938067 master-0 kubenswrapper[30278]: I0318 18:28:26.937009 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-skcb4_0326959b-b1d6-42ef-9fe5-bb33aa37df40/kube-rbac-proxy/0.log"
Mar 18 18:28:26.945961 master-0 kubenswrapper[30278]: I0318 18:28:26.945883 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-g4479_efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9/frr-k8s-webhook-server/0.log"
Mar 18 18:28:26.973940 master-0 kubenswrapper[30278]: I0318 18:28:26.973850 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-g4479_efa4b92a-4f1e-40bd-b1e4-3bcb1b2fc4f9/frr-k8s-webhook-server/0.log"
Mar 18 18:28:26.978505 master-0 kubenswrapper[30278]: I0318 18:28:26.978449 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/controller/0.log"
Mar 18 18:28:27.026238 master-0 kubenswrapper[30278]: I0318 18:28:27.025685 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/controller/0.log"
Mar 18 18:28:27.076426 master-0 kubenswrapper[30278]: I0318 18:28:27.076333 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44531d8d-219a-4896-94c7-79b37cba4c80" path="/var/lib/kubelet/pods/44531d8d-219a-4896-94c7-79b37cba4c80/volumes"
Mar 18 18:28:27.078545 master-0 kubenswrapper[30278]: I0318 18:28:27.078514 30278
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5867f7c5-a107-4f30-87d3-bb37abf4b2c1" path="/var/lib/kubelet/pods/5867f7c5-a107-4f30-87d3-bb37abf4b2c1/volumes"
Mar 18 18:28:27.082508 master-0 kubenswrapper[30278]: I0318 18:28:27.082424 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64f423f1-722c-4545-b52b-8750dab378a3" path="/var/lib/kubelet/pods/64f423f1-722c-4545-b52b-8750dab378a3/volumes"
Mar 18 18:28:27.088190 master-0 kubenswrapper[30278]: I0318 18:28:27.088160 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d866f13-989b-4dea-b811-6fa6df274dea" path="/var/lib/kubelet/pods/7d866f13-989b-4dea-b811-6fa6df274dea/volumes"
Mar 18 18:28:27.091702 master-0 kubenswrapper[30278]: I0318 18:28:27.091666 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6e214f3-e729-4653-bd99-ed6b6989358f" path="/var/lib/kubelet/pods/d6e214f3-e729-4653-bd99-ed6b6989358f/volumes"
Mar 18 18:28:27.093035 master-0 kubenswrapper[30278]: I0318 18:28:27.093006 30278 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee65994b-d421-4f38-8556-5084ef3757e1" path="/var/lib/kubelet/pods/ee65994b-d421-4f38-8556-5084ef3757e1/volumes"
Mar 18 18:28:28.971645 master-0 kubenswrapper[30278]: I0318 18:28:28.971545 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/frr/0.log"
Mar 18 18:28:29.037579 master-0 kubenswrapper[30278]: I0318 18:28:29.037514 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/frr/0.log"
Mar 18 18:28:29.126007 master-0 kubenswrapper[30278]: I0318 18:28:29.125938 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/reloader/0.log"
Mar 18 18:28:29.138438 master-0 kubenswrapper[30278]: I0318 18:28:29.138377 30278 log.go:25] "Finished
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/reloader/0.log"
Mar 18 18:28:29.138957 master-0 kubenswrapper[30278]: I0318 18:28:29.138925 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/frr-metrics/0.log"
Mar 18 18:28:29.149810 master-0 kubenswrapper[30278]: I0318 18:28:29.149731 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/kube-rbac-proxy/0.log"
Mar 18 18:28:29.175320 master-0 kubenswrapper[30278]: I0318 18:28:29.175251 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/frr-metrics/0.log"
Mar 18 18:28:29.182224 master-0 kubenswrapper[30278]: I0318 18:28:29.182173 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/kube-rbac-proxy-frr/0.log"
Mar 18 18:28:29.196982 master-0 kubenswrapper[30278]: I0318 18:28:29.196923 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/cp-frr-files/0.log"
Mar 18 18:28:29.202803 master-0 kubenswrapper[30278]: I0318 18:28:29.202748 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/kube-rbac-proxy/0.log"
Mar 18 18:28:29.211532 master-0 kubenswrapper[30278]: I0318 18:28:29.211461 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/cp-reloader/0.log"
Mar 18 18:28:29.224660 master-0 kubenswrapper[30278]: I0318 18:28:29.224536 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/kube-rbac-proxy-frr/0.log"
Mar 18 18:28:29.231228 master-0
kubenswrapper[30278]: I0318 18:28:29.231176 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/cp-metrics/0.log"
Mar 18 18:28:29.239798 master-0 kubenswrapper[30278]: I0318 18:28:29.239755 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/cp-frr-files/0.log"
Mar 18 18:28:29.254985 master-0 kubenswrapper[30278]: I0318 18:28:29.254917 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-848f479545-kv7v2_79b7d491-7665-41af-95d6-f17d8ce48257/manager/0.log"
Mar 18 18:28:29.260315 master-0 kubenswrapper[30278]: I0318 18:28:29.260264 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/cp-reloader/0.log"
Mar 18 18:28:29.267703 master-0 kubenswrapper[30278]: I0318 18:28:29.267647 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7f9bdbf4b-qndmm_65e5c2ef-6493-4705-b8e2-36ee0cae8c27/webhook-server/0.log"
Mar 18 18:28:29.281773 master-0 kubenswrapper[30278]: I0318 18:28:29.281721 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ztqqc_c5c65977-8004-4434-8d99-7624d08d9b3a/cp-metrics/0.log"
Mar 18 18:28:29.338435 master-0 kubenswrapper[30278]: I0318 18:28:29.338369 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-848f479545-kv7v2_79b7d491-7665-41af-95d6-f17d8ce48257/manager/0.log"
Mar 18 18:28:29.370676 master-0 kubenswrapper[30278]: I0318 18:28:29.370221 30278 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7f9bdbf4b-qndmm_65e5c2ef-6493-4705-b8e2-36ee0cae8c27/webhook-server/0.log"